Merge remote-tracking branch 'upstream/main' into wip/6.0

Andrea Boriero 2021-06-07 12:47:53 +02:00
commit 6ced2f0aca
42 changed files with 779 additions and 187 deletions

View File

@@ -21,10 +21,10 @@ While we try to keep requirements for contributing to a minimum, there are a few
 we ask that you mind.
 For code contributions, these guidelines include:
-* respect the project code style - find templates for [IntelliJ IDEA](https://community.jboss.org/docs/DOC-15468) or [Eclipse](https://community.jboss.org/docs/DOC-16649)
+* respect the project code style - find templates for [IntelliJ IDEA](https://hibernate.org/community/contribute/intellij-idea/) or [Eclipse](https://hibernate.org/community/contribute/eclipse-ide/)
 * have a corresponding JIRA issue and the key for this JIRA issue should be used in the commit message
 * have a set of appropriate tests. For bug reports, the tests reproduce the initial reported bug
-and illustrates that the solution actually fixes the bug. For features/enhancements, the
+and illustrate that the solution actually fixes the bug. For features/enhancements, the
 tests illustrate the feature working as intended. In both cases the tests are incorporated into
 the project to protect against regressions
 * if applicable, documentation is updated to reflect the introduced changes
@@ -47,14 +47,14 @@ GitHub there are a few pre-requisite steps to follow:
 the linked page, this also includes:
 * [set up your local git install](https://help.github.com/articles/set-up-git)
 * clone your fork
-* See the wiki pages for setting up your IDE, whether you use
-[IntelliJ IDEA](https://community.jboss.org/wiki/ContributingToHibernateUsingIntelliJ)
-or [Eclipse](https://community.jboss.org/wiki/ContributingToHibernateUsingEclipse)<sup>(1)</sup>.
+* see the wiki pages for setting up your IDE, whether you use
+[IntelliJ IDEA](https://hibernate.org/community/contribute/intellij-idea/)
+or [Eclipse](https://hibernate.org/community/contribute/eclipse-ide/)<sup>(1)</sup>.
 ## Create the working (topic) branch
-Create a [topic branch](http://git-scm.com/book/en/Git-Branching-Branching-Workflows#Topic-Branches)
+Create a [topic branch](https://git-scm.com/book/en/Git-Branching-Branching-Workflows#Topic-Branches)
 on which you will work. The convention is to incorporate the JIRA issue key in the name of this branch,
 although this is more of a mnemonic strategy than a hard-and-fast rule - but doing so helps:
 * remember what each branch is for
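The branch-naming convention described in this hunk can be sketched in plain shell; the issue key `HHH-12345` and the `short-description` suffix below are hypothetical examples, not values from the commit:

```shell
# Hypothetical illustration of the topic-branch naming convention:
# incorporate the JIRA issue key in the branch name.
JIRA_KEY="HHH-12345"                      # hypothetical issue key
branch="${JIRA_KEY}-short-description"    # mnemonic suffix is free-form
echo "git checkout -b ${branch}"
```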
@@ -87,7 +87,7 @@ appreciated btw), please use rebasing rather than merging. Merging creates
 ## Submit
 * push your changes to the topic branch in your fork of the repository
-* initiate a [pull request](http://help.github.com/articles/creating-a-pull-request)
+* initiate a [pull request](https://help.github.com/articles/creating-a-pull-request)
 * update the JIRA issue by providing the PR link in the **Pull Request** column on the right

View File

@@ -3,17 +3,17 @@ to applications, libraries, and frameworks.
 It also provides an implementation of the JPA specification, which is the standard Java specification for ORM.
-This is the repository of its source code; see http://hibernate.org/orm/[Hibernate.org] for additional information.
+This is the repository of its source code; see https://hibernate.org/orm/[Hibernate.org] for additional information.
-image:http://ci.hibernate.org/job/hibernate-orm-main-h2-main/badge/icon[Build Status,link=http://ci.hibernate.org/job/hibernate-orm-main-h2-main/]
+image:https://ci.hibernate.org/job/hibernate-orm-main-h2-main/badge/icon[Build Status,link=https://ci.hibernate.org/job/hibernate-orm-main-h2-main/]
 image:https://img.shields.io/lgtm/grade/java/g/hibernate/hibernate-orm.svg?logo=lgtm&logoWidth=18[Language grade: Java,link=https://lgtm.com/projects/g/hibernate/hibernate-orm/context:java]
 == Continuous Integration
-Hibernate uses both http://jenkins-ci.org[Jenkins] and https://github.com/features/actions[GitHub Actions]
+Hibernate uses both https://jenkins-ci.org[Jenkins] and https://github.com/features/actions[GitHub Actions]
 for its CI needs. See
-* http://ci.hibernate.org/view/ORM/[Jenkins Jobs]
+* https://ci.hibernate.org/view/ORM/[Jenkins Jobs]
 * https://github.com/hibernate/hibernate-orm/actions[GitHub Actions Jobs]
 == Building from sources
@@ -25,7 +25,7 @@ Gradle.
 Contributors should read the link:CONTRIBUTING.md[Contributing Guide].
-See the guides for setting up http://hibernate.org/community/contribute/intellij-idea/[IntelliJ] or
+See the guides for setting up https://hibernate.org/community/contribute/intellij-idea/[IntelliJ] or
 https://hibernate.org/community/contribute/eclipse-ide/[Eclipse] as your development environment.
 == Gradle Primer

View File

@@ -7,6 +7,9 @@ pipeline {
     tools {
         jdk 'OpenJDK 8 Latest'
     }
+    parameters {
+        booleanParam(name: 'NO_SLEEP', defaultValue: true, description: 'Whether the NO_SLEEP patch should be applied to speed up the TCK execution')
+    }
     stages {
         stage('Build') {
             steps {
@@ -39,18 +42,21 @@ pipeline {
             steps {
                 sh """ \
                     docker rm -f tck || true
-                    docker run -v ~/.m2/repository/org/hibernate:/root/.m2/repository/org/hibernate:z -e NO_SLEEP=true -e HIBERNATE_VERSION=$HIBERNATE_VERSION --name tck jakarta-tck-runner
-                    docker cp tck:/tck/persistence-tck/tmp/JTreport/ ./JTreport
+                    docker rm -f tck-vol || true
+                    docker volume create tck-vol
+                    docker run -v ~/.m2/repository/org/hibernate:/root/.m2/repository/org/hibernate:z -v tck-vol:/tck/persistence-tck/tmp/:z -e NO_SLEEP=${params.NO_SLEEP} -e HIBERNATE_VERSION=$HIBERNATE_VERSION --name tck jakarta-tck-runner
+                    docker cp tck:/tck/persistence-tck/tmp/ ./results
                 """
-                archiveArtifacts artifacts: 'JTreport/**'
+                archiveArtifacts artifacts: 'results/**'
                 script {
                     failures = sh (
                         script: """ \
+                            set +x
                             while read line; do
                                 if [[ "\$line" != *"Passed." ]]; then
                                     echo "\$line"
                                 fi
-                            done <JTreport/text/summary.txt
+                            done <results/JTreport/text/summary.txt
                         """,
                         returnStdout: true
                     ).trim()
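The failure-filtering loop in this hunk can be exercised outside Jenkins; a minimal standalone sketch, where the `summary.txt` contents are invented for illustration (the real file is produced by the TCK run inside the container):

```shell
# Standalone sketch of the failure-filtering loop from the pipeline above:
# collect every summary line that does not end in "Passed.".
mkdir -p results/JTreport/text
printf '%s\n' \
  'test1: Passed.' \
  'test2: Failed.' \
  'test3: Passed.' > results/JTreport/text/summary.txt

failures=""
while read line; do
  if [[ "$line" != *"Passed." ]]; then
    failures="${failures}${line}
"
  fi
done < results/JTreport/text/summary.txt

printf '%s' "$failures"
```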

View File

@@ -7,6 +7,12 @@ pipeline {
     tools {
         jdk 'OpenJDK 8 Latest'
     }
+    parameters {
+        choice(name: 'IMAGE_JDK', choices: ['jdk8', 'jdk11'], description: 'The JDK base image version to use for the TCK image.')
+        string(name: 'TCK_VERSION', defaultValue: '3.0.0', description: 'The version of the Jakarta JPA TCK i.e. `2.2.0` or `3.0.1`')
+        string(name: 'TCK_SHA', defaultValue: 'b08c8887f00306f8bb7ebe54c4c810f3452519f5395733637ccc639b5081aebf', description: 'The SHA256 of the Jakarta JPA TCK that is distributed under https://download.eclipse.org/jakartaee/persistence/3.0/jakarta-persistence-tck-${TCK_VERSION}.zip.sha256')
+        booleanParam(name: 'NO_SLEEP', defaultValue: true, description: 'Whether the NO_SLEEP patch should be applied to speed up the TCK execution')
+    }
     stages {
         stage('Build') {
             steps {
@@ -30,7 +36,7 @@ pipeline {
                 dir('tck') {
                     checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/main']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/hibernate/jakarta-tck-runner.git']]]
                     sh """ \
-                        cd jpa-3.0; docker build -t jakarta-tck-runner .
+                        cd jpa-3.0; docker build -f Dockerfile.${params.IMAGE_JDK} -t jakarta-tck-runner --build-arg TCK_VERSION=${params.TCK_VERSION} --build-arg TCK_SHA=${params.TCK_SHA} .
                     """
                 }
             }
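How the new parameterized build command expands can be sketched in plain shell, substituting the pipeline's default parameter values by hand (the `Dockerfile.jdk8` name follows from the `IMAGE_JDK` choices in this commit; this only previews the command string and does not invoke docker):

```shell
# Expand the parameterized docker-build command with the pipeline defaults.
IMAGE_JDK="jdk8"
TCK_VERSION="3.0.0"
TCK_SHA="b08c8887f00306f8bb7ebe54c4c810f3452519f5395733637ccc639b5081aebf"
cmd="docker build -f Dockerfile.${IMAGE_JDK} -t jakarta-tck-runner --build-arg TCK_VERSION=${TCK_VERSION} --build-arg TCK_SHA=${TCK_SHA} ."
echo "$cmd"
```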
@@ -39,18 +45,21 @@ pipeline {
             steps {
                 sh """ \
                     docker rm -f tck || true
-                    docker run -v ~/.m2/repository/org/hibernate:/root/.m2/repository/org/hibernate:z -e NO_SLEEP=true -e HIBERNATE_VERSION=$HIBERNATE_VERSION --name tck jakarta-tck-runner
-                    docker cp tck:/tck/persistence-tck/tmp/JTreport/ ./JTreport
+                    docker rm -f tck-vol || true
+                    docker volume create tck-vol
+                    docker run -v ~/.m2/repository/org/hibernate:/root/.m2/repository/org/hibernate:z -v tck-vol:/tck/persistence-tck/tmp/:z -e NO_SLEEP=${params.NO_SLEEP} -e HIBERNATE_VERSION=$HIBERNATE_VERSION --name tck jakarta-tck-runner
+                    docker cp tck:/tck/persistence-tck/tmp/ ./results
                 """
-                archiveArtifacts artifacts: 'JTreport/**'
+                archiveArtifacts artifacts: 'results/**'
                 script {
                     failures = sh (
                         script: """ \
+                            set +x
                             while read line; do
                                 if [[ "\$line" != *"Passed." ]]; then
                                     echo "\$line"
                                 fi
-                            done <JTreport/text/summary.txt
+                            done <results/JTreport/text/summary.txt
                         """,
                         returnStdout: true
                     ).trim()

View File

@@ -122,14 +122,14 @@ task aggregateJavadocs(type: Javadoc) {
     overview = project.file( 'src/main/javadoc/overview.html' )
     windowTitle = 'Hibernate JavaDocs'
     docTitle = "Hibernate JavaDoc ($project.version)"
-    bottom = "Copyright &copy; 2001-$currentYear <a href=\"http://redhat.com\">Red Hat, Inc.</a> All Rights Reserved."
+    bottom = "Copyright &copy; 2001-$currentYear <a href=\"https://redhat.com\">Red Hat, Inc.</a> All Rights Reserved."
     use = true
     options.encoding = 'UTF-8'
     links = [
            'https://docs.oracle.com/javase/8/docs/api/',
-           'http://docs.jboss.org/hibernate/beanvalidation/spec/2.0/api/',
-           'http://docs.jboss.org/cdi/api/2.0/',
+           'https://docs.jboss.org/hibernate/beanvalidation/spec/2.0/api/',
+           'https://docs.jboss.org/cdi/api/2.0/',
            'https://javaee.github.io/javaee-spec/javadocs/'
     ]

View File

@@ -1,7 +1,7 @@
 [[preface]]
 == Preface
-Hibernate is an http://en.wikipedia.org/wiki/Object-relational_mapping[Object/Relational Mapping] solution for Java environments.
+Hibernate is an https://en.wikipedia.org/wiki/Object-relational_mapping[Object/Relational Mapping] solution for Java environments.
 Hibernate not only takes care of the mapping from Java classes to database tables (and from Java data types to SQL data types), but also provides data query and retrieval facilities.
 It can significantly reduce development time otherwise spent with manual data handling in SQL and JDBC.

View File

@@ -9,14 +9,14 @@ hibernate-core:: The main (core) Hibernate module. Defines its ORM features and
 hibernate-envers:: Hibernate's historical entity versioning feature
 hibernate-spatial:: Hibernate's Spatial/GIS data-type support
 hibernate-osgi:: Hibernate support for running in OSGi containers.
-hibernate-agroal:: Integrates the http://agroal.github.io/[Agroal] connection pooling library into Hibernate
-hibernate-c3p0:: Integrates the http://www.mchange.com/projects/c3p0/[C3P0] connection pooling library into Hibernate
+hibernate-agroal:: Integrates the https://agroal.github.io/[Agroal] connection pooling library into Hibernate
+hibernate-c3p0:: Integrates the https://www.mchange.com/projects/c3p0/[C3P0] connection pooling library into Hibernate
 hibernate-hikaricp:: Integrates the https://github.com/brettwooldridge/HikariCP/[HikariCP] connection pooling library into Hibernate
-hibernate-vibur:: Integrates the http://www.vibur.org/[Vibur DBCP] connection pooling library into Hibernate
-hibernate-proxool:: Integrates the http://proxool.sourceforge.net/[Proxool] connection pooling library into Hibernate
+hibernate-vibur:: Integrates the https://www.vibur.org/[Vibur DBCP] connection pooling library into Hibernate
+hibernate-proxool:: Integrates the https://proxool.sourceforge.net/[Proxool] connection pooling library into Hibernate
 hibernate-jcache:: Integrates the https://jcp.org/en/jsr/detail?id=107$$[JCache] caching specification into Hibernate,
 enabling any compliant implementation to become a second-level cache provider.
-hibernate-ehcache:: Integrates the http://ehcache.org/[Ehcache] caching library into Hibernate as a second-level cache provider.
+hibernate-ehcache:: Integrates the https://ehcache.org/[Ehcache] caching library into Hibernate as a second-level cache provider.
 === Release Bundle Downloads
@@ -43,10 +43,10 @@ synced to Maven Central as part of an automated job (some small delay may occur)
 The team responsible for the JBoss Maven repository maintains a number of Wiki pages that contain important information:
-* http://community.jboss.org/docs/DOC-14900 - General information about the repository.
+* https://community.jboss.org/docs/DOC-14900 - General information about the repository.
-* http://community.jboss.org/docs/DOC-15170 - Information about setting up the JBoss repositories in order to do
+* https://community.jboss.org/docs/DOC-15170 - Information about setting up the JBoss repositories in order to do
 development work on JBoss projects themselves.
-* http://community.jboss.org/docs/DOC-15169 - Information about setting up access to the repository to use JBoss
+* https://community.jboss.org/docs/DOC-15169 - Information about setting up access to the repository to use JBoss
 projects as part of your own software.
 The Hibernate ORM artifacts are published under the `org.hibernate` groupId.

View File

@@ -7,14 +7,14 @@ Working with both Object-Oriented software and Relational Databases can be cumbe
 Development costs are significantly higher due to a number of "paradigm mismatches" between how data is represented in objects
 versus relational databases. Hibernate is an Object/Relational Mapping (ORM) solution for Java environments. The
 term Object/Relational Mapping refers to the technique of mapping data between an object model representation to
-a relational data model representation. See http://en.wikipedia.org/wiki/Object-relational_mapping for a good
-high-level discussion. Also, Martin Fowler's link:$$http://martinfowler.com/bliki/OrmHate.html$$[OrmHate] article
+a relational data model representation. See https://en.wikipedia.org/wiki/Object-relational_mapping for a good
+high-level discussion. Also, Martin Fowler's link:$$https://martinfowler.com/bliki/OrmHate.html$$[OrmHate] article
 takes a look at many of the mismatch problems.
 Although having a strong background in SQL is not required to use Hibernate, having a basic understanding of the
 concepts can help you understand Hibernate more quickly and fully. An understanding of data modeling principles
-is especially important. Both http://www.agiledata.org/essays/dataModeling101.html and
-http://en.wikipedia.org/wiki/Data_modeling are good starting points for understanding these data modeling
+is especially important. Both https://www.agiledata.org/essays/dataModeling101.html and
+https://en.wikipedia.org/wiki/Data_modeling are good starting points for understanding these data modeling
 principles. If you are completely new to database access in Java,
 https://www.marcobehler.com/guides/a-guide-to-accessing-databases-in-java contains a good overview of the various parts,
 pieces and options.
@@ -33,6 +33,6 @@ logic in the Java-based middle-tier. However, Hibernate can certainly help you t
 vendor-specific SQL code and streamlines the common task of translating result sets from a tabular
 representation to a graph of objects.
-See http://hibernate.org/orm/contribute/ for information on getting involved.
+See https://hibernate.org/orm/contribute/ for information on getting involved.
 IMPORTANT: The projects and code for the tutorials referenced in this guide are available as link:hibernate-tutorials.zip[]

View File

@@ -98,7 +98,7 @@ any mapping information associated with `title`.
 .Practice Exercises
 - [ ] Add an association to the `Event` entity to model a message thread. Use the
-http://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html[_User Guide_] for more details.
+https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html[_User Guide_] for more details.
 - [ ] Add a callback to receive notifications when an `Event` is created, updated or deleted.
 Try the same with an event listener. Use the
-http://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html[_User Guide_] for more details.
+https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html[_User Guide_] for more details.

View File

@@ -12,7 +12,7 @@ static metamodel classes.
 For developers it is important that the task of the metamodel generation
 can be automated.
 Hibernate Static Metamodel Generator is an annotation processor based on
-http://jcp.org/en/jsr/detail?id=269[JSR_269] with the task of creating JPA 2
+https://jcp.org/en/jsr/detail?id=269[JSR_269] with the task of creating JPA 2
 static metamodel classes.
 The following example shows two JPA 2 entities `Order` and `Item`, together
 with the metamodel class `Order_` and a typesafe query.
@@ -111,7 +111,7 @@ persistence unit metadata:
 == Canonical Metamodel
 The structure of the metamodel classes is described in the JPA 2
-(JSR 317) http://jcp.org/en/jsr/detail?id=317[specification], but for
+(JSR 317) https://jcp.org/en/jsr/detail?id=317[specification], but for
 completeness the definition is repeated in the following paragraphs.
 Feel free to skip ahead to the <<chapter-usage,usage chapter>>, if you
 are not interested into the gory details.
@@ -258,9 +258,9 @@ pass the processor option to the compiler plugin:
 ====
 The maven-compiler-plugin approach has the disadvantage that the maven compiler plugin
 does currently not allow to specify multiple compiler arguments
-(http://jira.codehaus.org/browse/MCOMPILER-62[MCOMPILER-62])
+(https://jira.codehaus.org/browse/MCOMPILER-62[MCOMPILER-62])
 and that messages from the Messenger API are suppressed
-(http://jira.codehaus.org/browse/MCOMPILER-66[MCOMPILER-66]).
+(https://jira.codehaus.org/browse/MCOMPILER-66[MCOMPILER-66]).
 A better approach is to disable annotation processing for the compiler
 plugin as seen in below.

View File

@@ -343,7 +343,7 @@ service to be injected is optional, use `InjectService#required=false`.
 Once built, a ServiceRegistry is generally considered immutable. The Services themselves might accept
 re-configuration, but immutability here means adding/replacing services. So all the services hosted in a particular
 ServiceRegistry must be known up-front. To this end, building a ServiceRegistry usually employees a
-http://en.wikipedia.org/wiki/Builder_pattern[builder^].
+https://en.wikipedia.org/wiki/Builder_pattern[builder^].
 === Building BootstrapServiceRegistry

View File

@@ -2,7 +2,7 @@
 == Hibernate ORM within WildFly
-The http://wildfly.org/[WildFly application server] includes Hibernate ORM as the default JPA provider out of the box.
+The https://wildfly.org/[WildFly application server] includes Hibernate ORM as the default JPA provider out of the box.
 In previous versions of Hibernate ORM, we offered a "feature pack" to enable anyone to use the very latest version in
 WildFly as soon as a new release of Hibernate ORM was published.

View File

@@ -4,7 +4,7 @@
 Working with both Object-Oriented software and Relational Databases can be cumbersome and time-consuming.
 Development costs are significantly higher due to a paradigm mismatch between how data is represented in objects versus relational databases.
 Hibernate is an Object/Relational Mapping solution for Java environments.
-The term http://en.wikipedia.org/wiki/Object-relational_mapping[Object/Relational Mapping] refers to the technique of mapping data from an object model representation to a relational data model representation (and vice versa).
+The term https://en.wikipedia.org/wiki/Object-relational_mapping[Object/Relational Mapping] refers to the technique of mapping data from an object model representation to a relational data model representation (and vice versa).
 Hibernate not only takes care of the mapping from Java classes to database tables (and from Java data types to SQL data types), but also provides data query and retrieval facilities.
 It can significantly reduce development time otherwise spent with manual data handling in SQL and JDBC.
@@ -16,9 +16,9 @@ However, Hibernate can certainly help you to remove or encapsulate vendor-specif
 === Get Involved
-* Use Hibernate and report any bugs or issues you find. See http://hibernate.org/issuetracker[Issue Tracker] for details.
-* Try your hand at fixing some bugs or implementing enhancements. Again, see http://hibernate.org/issuetracker[Issue Tracker].
-* Engage with the community using mailing lists, forums, IRC, or other ways listed in the http://hibernate.org/community[Community section].
+* Use Hibernate and report any bugs or issues you find. See https://hibernate.org/issuetracker[Issue Tracker] for details.
+* Try your hand at fixing some bugs or implementing enhancements. Again, see https://hibernate.org/issuetracker[Issue Tracker].
+* Engage with the community using mailing lists, forums, IRC, or other ways listed in the https://hibernate.org/community[Community section].
 * Help improve or translate this documentation. Contact us on the developer mailing list if you have interest.
 * Spread the word. Let the rest of your organization know about the benefits of Hibernate.
@@ -36,7 +36,7 @@ When building Hibernate 5.1 or older from sources, you need Java 1.7 due to a bu
 === Getting Started Guide
 New users may want to first look through the https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/quickstart/html_single/[Hibernate Getting Started Guide] for basic information as well as tutorials.
-There is also a series of http://docs.jboss.org/hibernate/orm/{majorMinorVersion}/topical/html_single/[topical guides] providing deep dives into various topics.
+There is also a series of https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/topical/html_single/[topical guides] providing deep dives into various topics.
 [NOTE]
 ====
@@ -44,8 +44,8 @@ While having a strong background in SQL is not required to use Hibernate, it cer
 Probably even more important is an understanding of data modeling principles.
 You might want to consider these resources as a good starting point:
-* http://en.wikipedia.org/wiki/Data_modeling[Data modeling Wikipedia definition]
-* http://www.agiledata.org/essays/dataModeling101.html[Data Modeling 101]
+* https://en.wikipedia.org/wiki/Data_modeling[Data modeling Wikipedia definition]
+* https://www.agiledata.org/essays/dataModeling101.html[Data Modeling 101]
 Understanding the basics of transactions and design patterns such as _Unit of Work_ (<<Bibliography.adoc#PoEAA,PoEAA>>) or _Application Transaction_ are important as well.
 These topics will be discussed in the documentation, but a prior understanding will certainly help.

View File

@@ -668,7 +668,7 @@ See the <<chapters/caching/Caching.adoc#caching,Caching>> chapter for more info.
 [[annotations-hibernate-cascade]]
 ==== `@Cascade`
-The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Cascade.html[`@Cascade`] annotation is used to apply the Hibernate specific http://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/CascadeType.html[`CascadeType`] strategies (e.g. `CascadeType.LOCK`, `CascadeType.SAVE_UPDATE`, `CascadeType.REPLICATE`) on a given association.
+The https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/Cascade.html[`@Cascade`] annotation is used to apply the Hibernate specific https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/javadocs/org/hibernate/annotations/CascadeType.html[`CascadeType`] strategies (e.g. `CascadeType.LOCK`, `CascadeType.SAVE_UPDATE`, `CascadeType.REPLICATE`) on a given association.
 For JPA cascading, prefer using the {jpaJavadocUrlPrefix}CascadeType.html[`javax.persistence.CascadeType`] instead.

View File

@@ -10,7 +10,7 @@ Hibernate comes with a great variety of features that can help you tune the data
 Although Hibernate provides the `update` option for the `hibernate.hbm2ddl.auto` configuration property,
 this feature is not suitable for a production environment.
-An automated schema migration tool (e.g. https://flywaydb.org/[Flyway], http://www.liquibase.org/[Liquibase]) allows you to use any database-specific DDL feature (e.g. Rules, Triggers, Partitioned Tables).
+An automated schema migration tool (e.g. https://flywaydb.org/[Flyway], https://www.liquibase.org/[Liquibase]) allows you to use any database-specific DDL feature (e.g. Rules, Triggers, Partitioned Tables).
 Every migration should have an associated script, which is stored on the Version Control System, along with the application source code.
 When the application is deployed on a production-like QA environment, and the deployment worked as expected, then pushing the deployment to a production environment should be straightforward since the latest schema migration was already tested.
@ -233,7 +233,7 @@ and you should consider these alternatives prior to jumping to a second-level ca
After properly tuning the database, to further reduce the average response time and increase the system throughput, application-level caching becomes inevitable.
Typically, a key-value application-level cache like https://memcached.org/[Memcached] or https://redis.io/[Redis] is a common choice to store data aggregates.
If you can duplicate all data in the key-value store, you have the option of taking down the database system for maintenance without completely losing availability since read-only traffic can still be served from the cache.
One of the main challenges of using an application-level cache is ensuring data consistency across entity aggregates.


@ -207,22 +207,22 @@ The number of seconds between two consecutive pool validations. During validatio
=== c3p0 properties
`*hibernate.c3p0.min_size*` (e.g. 1)::
Minimum size of C3P0 connection pool. Refers to https://www.mchange.com/projects/c3p0/#minPoolSize[c3p0 `minPoolSize` setting].
`*hibernate.c3p0.max_size*` (e.g. 5)::
Maximum size of C3P0 connection pool. Refers to https://www.mchange.com/projects/c3p0/#maxPoolSize[c3p0 `maxPoolSize` setting].
`*hibernate.c3p0.timeout*` (e.g. 30)::
Maximum idle time for C3P0 connection pool. Refers to https://www.mchange.com/projects/c3p0/#maxIdleTime[c3p0 `maxIdleTime` setting].
`*hibernate.c3p0.max_statements*` (e.g. 5)::
Maximum size of C3P0 statement cache. Refers to https://www.mchange.com/projects/c3p0/#maxStatements[c3p0 `maxStatements` setting].
`*hibernate.c3p0.acquire_increment*` (e.g. 2)::
The number of connections acquired at a time when there's no connection available in the pool. Refers to https://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 `acquireIncrement` setting].
`*hibernate.c3p0.idle_test_period*` (e.g. 5)::
Idle time before a C3P0 pooled connection is validated. Refers to https://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod[c3p0 `idleConnectionTestPeriod` setting].
`*hibernate.c3p0*`::
A setting prefix used to indicate additional c3p0 properties that need to be passed to the underlying c3p0 connection pool.
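Taken together, a `hibernate.properties` fragment wiring these settings might look as follows (the values shown are only illustrative):

```properties
hibernate.c3p0.min_size=1
hibernate.c3p0.max_size=5
hibernate.c3p0.timeout=30
hibernate.c3p0.max_statements=5
hibernate.c3p0.acquire_increment=2
hibernate.c3p0.idle_test_period=5
# any other c3p0 property can be passed through via the hibernate.c3p0. prefix
```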


@ -489,7 +489,7 @@ However, this strategy requires the IN-clause row value expression for composite
If you can use temporary tables, that's probably the best choice.
However, if you are not allowed to create temporary tables, you must pick one of these four strategies that works with your underlying database.
Before making up your mind, you should benchmark which one works best for your current workload.
For instance, https://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/[CTEs are optimization fences in PostgreSQL], so make sure you measure before making a decision.
If you're using Oracle or MySQL 5.7, you can choose either `InlineIdsOrClauseBulkIdStrategy` or `InlineIdsInClauseBulkIdStrategy`.
For older versions of MySQL, you can only use `InlineIdsOrClauseBulkIdStrategy`.
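As a sketch, the chosen strategy is typically configured through the `hibernate.hql.bulk_id_strategy` setting (verify the property name and strategy class against your Hibernate version; the choice below is only an example):

```properties
# Pick the bulk-id strategy explicitly, after benchmarking the alternatives
hibernate.hql.bulk_id_strategy=org.hibernate.hql.spi.id.inline.InlineIdsInClauseBulkIdStrategy
```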


@ -612,7 +612,7 @@ and also log a warning about the missing cache.
====
Note that caches created this way may not be suitable for production usage (unlimited size and no eviction in particular) unless the cache provider explicitly provides a specific configuration for default caches.
Ehcache, in particular, allows setting such a default configuration using cache templates. See the https://www.ehcache.org/documentation/3.0/107.html#supplement-jsr-107-configurations[Ehcache documentation] for more details.
====
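A sketch of such an Ehcache 3 configuration, based on its JSR-107 integration (element names and values are illustrative; check them against the Ehcache documentation for your version):

```xml
<config
    xmlns="http://www.ehcache.org/v3"
    xmlns:jsr107="http://www.ehcache.org/v3/jsr107">

  <!-- apply the template below to every cache created without an explicit config -->
  <service>
    <jsr107:defaults default-template="default"/>
  </service>

  <cache-template name="default">
    <expiry>
      <ttl unit="seconds">600</ttl>
    </expiry>
    <heap unit="entries">1000</heap>
  </cache-template>
</config>
```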
[[caching-provider-ehcache]]
@ -622,7 +622,7 @@ This integration covers Ehcache 2.x, in order to use Ehcache 3.x as second level
[NOTE]
====
Use of the built-in integration for https://www.ehcache.org/[Ehcache] requires that the `hibernate-ehcache` module jar (and all of its dependencies) are on the classpath.
====
[[caching-provider-ehcache-region-factory]]
@ -665,12 +665,12 @@ To use the `SingletonEhCacheRegionFactory`, you need to specify the following co
----
====
The `SingletonEhCacheRegionFactory` configures a singleton `net.sf.ehcache.CacheManager` (see https://www.ehcache.org/apidocs/2.8.4/net/sf/ehcache/CacheManager.html#create%28%29[CacheManager#create()]),
shared among multiple `SessionFactory` instances in the same JVM.
[NOTE]
====
The https://www.ehcache.org/documentation/2.8/integrations/hibernate#optional[Ehcache documentation] recommends using multiple non-singleton ``CacheManager``s when there are multiple Hibernate `SessionFactory` instances running in the same JVM.
====
[[caching-provider-ehcache-missing-cache-strategy]]


@ -455,8 +455,8 @@ Hibernate will trigger a Persistence Context flush if there are pending `Account
==== Define a custom entity proxy
By default, when it needs to use a proxy instead of the actual POJO, Hibernate uses a bytecode manipulation library like
https://jboss-javassist.github.io/javassist/[Javassist] or
https://bytebuddy.net/[Byte Buddy].
However, if the entity class is final, Javassist will not create a proxy and you will get a POJO even when you only need a proxy reference.
In this case, you could proxy an interface that this particular entity implements, as illustrated by the following example.
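A minimal sketch of this approach (the `Identifiable` interface and `Book` entity are hypothetical; the fragment is illustrative rather than a complete compilable mapping):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Proxy;

// Hypothetical interface exposing what callers need from the proxy
interface Identifiable {
    Long getId();
}

// The entity class is final, so Hibernate is told to proxy the interface instead
@Entity
@Proxy(proxyClass = Identifiable.class)
public final class Book implements Identifiable {

    @Id
    private Long id;

    @Override
    public Long getId() {
        return id;
    }
}
```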


@ -1580,8 +1580,8 @@ And sometime in 2011, the last partition (or 'extension bucket') is split into t
[[envers-links]]
=== Envers links
. https://hibernate.org[Hibernate main page]
. https://hibernate.org/community/[Forum]
. https://hibernate.atlassian.net/[JIRA issue tracker] (when adding issues concerning Envers, be sure to select the "envers" component!)
. https://hibernate.zulipchat.com/#narrow/stream/132096-hibernate-user[Zulip channel]
. https://community.jboss.org/wiki/EnversFAQ[FAQ]


@ -55,20 +55,20 @@ NOTE: Not all properties apply to all situations. For example, if you are provid
To use the c3p0 integration, the application must include the `hibernate-c3p0` module jar (as well as its dependencies) on the classpath.
====
Hibernate also provides support for applications to use https://www.mchange.com/projects/c3p0/[c3p0] connection pooling.
When c3p0 support is enabled, a number of c3p0-specific configuration settings are recognized in addition to the general ones described in <<database-connectionprovider-driver>>.
Transaction isolation of the Connections is managed by the `ConnectionProvider` itself. See <<database-connectionprovider-isolation>>.
`hibernate.c3p0.min_size` or `c3p0.minPoolSize`:: The minimum size of the c3p0 pool. See https://www.mchange.com/projects/c3p0/#minPoolSize[c3p0 minPoolSize]
`hibernate.c3p0.max_size` or `c3p0.maxPoolSize`:: The maximum size of the c3p0 pool. See https://www.mchange.com/projects/c3p0/#maxPoolSize[c3p0 maxPoolSize]
`hibernate.c3p0.timeout` or `c3p0.maxIdleTime`:: The Connection idle time. See https://www.mchange.com/projects/c3p0/#maxIdleTime[c3p0 maxIdleTime]
`hibernate.c3p0.max_statements` or `c3p0.maxStatements`:: Controls the c3p0 PreparedStatement cache size (if using). See https://www.mchange.com/projects/c3p0/#maxStatements[c3p0 maxStatements]
`hibernate.c3p0.acquire_increment` or `c3p0.acquireIncrement`:: Number of connections c3p0 should acquire at a time when the pool is exhausted. See https://www.mchange.com/projects/c3p0/#acquireIncrement[c3p0 acquireIncrement]
`hibernate.c3p0.idle_test_period` or `c3p0.idleConnectionTestPeriod`:: Idle time before a c3p0 pooled connection is validated. See https://www.mchange.com/projects/c3p0/#idleConnectionTestPeriod[c3p0 idleConnectionTestPeriod]
`hibernate.c3p0.initialPoolSize`:: The initial c3p0 pool size. If not specified, the default is the min pool size. See https://www.mchange.com/projects/c3p0/#initialPoolSize[c3p0 initialPoolSize]
Any other settings prefixed with `hibernate.c3p0.`:: Will have the `hibernate.` portion stripped and be passed to c3p0.
Any other settings prefixed with `c3p0.`:: Get passed to c3p0 as is. See https://www.mchange.com/projects/c3p0/#configuration[c3p0 configuration]
[[database-connectionprovider-proxool]]
=== Using Proxool
@ -78,7 +78,7 @@ Any other settings prefixed with `c3p0.`:: Get passed to c3p0 as is. See http://
To use the Proxool integration, the application must include the `hibernate-proxool` module jar (as well as its dependencies) on the classpath.
====
Hibernate also provides support for applications to use https://proxool.sourceforge.net/[Proxool] connection pooling.
Transaction isolation of the Connections is managed by the `ConnectionProvider` itself. See <<database-connectionprovider-isolation>>.
@ -92,14 +92,14 @@ If set to true, this ConnectionProvider will use an already existing Proxool poo
==== Configuring Proxool via XML
The `hibernate.proxool.xml` setting names a Proxool configuration XML file to be loaded as a classpath resource by Proxool's JAXPConfigurator.
See https://proxool.sourceforge.net/configure.html[proxool configuration].
`hibernate.proxool.pool_alias` must be set to indicate which pool to use.
[[database-connectionprovider-proxool-properties]]
==== Configuring Proxool via Properties
The `hibernate.proxool.properties` setting names a Proxool configuration properties file to be loaded as a classpath resource by Proxool's `PropertyConfigurator`.
See https://proxool.sourceforge.net/configure.html[proxool configuration].
`hibernate.proxool.pool_alias` must be set to indicate which pool to use.
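Putting the two settings together, a `hibernate.properties` fragment might look as follows (the resource name and pool alias are hypothetical):

```properties
# load the pool definition from a classpath XML resource...
hibernate.proxool.xml=proxool.xml
# ...and tell Hibernate which pool alias inside that file to use
hibernate.proxool.pool_alias=myPool
```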
[[database-connectionprovider-hikari]]
@ -131,7 +131,7 @@ Note that Hikari only supports JDBC standard isolation levels (apparently).
To use the Vibur DBCP integration, the application must include the `hibernate-vibur` module jar (as well as its dependencies) on the classpath.
====
Hibernate also provides support for applications to use the https://www.vibur.org/[Vibur DBCP] connection pool.
Set all of your Vibur settings in Hibernate prefixed by `hibernate.vibur.` and this `ConnectionProvider` will pick them up and pass them along to Vibur DBCP.
Additionally, this `ConnectionProvider` will pick up the following Hibernate-specific properties and map them to the corresponding Vibur ones (any `hibernate.vibur.` prefixed ones have precedence):
@ -151,7 +151,7 @@ Additionally, this `ConnectionProvider` will pick up the following Hibernate-spe
To use the Agroal integration, the application must include the `hibernate-agroal` module jar (as well as its dependencies) on the classpath.
====
Hibernate also provides support for applications to use the https://agroal.github.io/[Agroal] connection pool.
Set all of your Agroal settings in Hibernate prefixed by `hibernate.agroal.` and this `ConnectionProvider` will pick them up and pass them along to the Agroal connection pool.
Additionally, this `ConnectionProvider` will pick up the following Hibernate-specific properties and map them to the corresponding Agroal ones (any `hibernate.agroal.` prefixed ones have precedence):


@ -8,7 +8,7 @@ In a relational database, locking refers to actions taken to prevent data from c
Your locking strategy can be either optimistic or pessimistic.
Optimistic::
https://en.wikipedia.org/wiki/Optimistic_locking[Optimistic locking] assumes that multiple transactions can complete without affecting each other,
and that therefore transactions can proceed without locking the data resources that they affect.
Before committing, each transaction verifies that no other transaction has modified its data.
If the check reveals conflicting modifications, the committing transaction rolls back.
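The check-before-commit idea can be sketched in plain Java, independently of any Hibernate API (the `VersionedRow` class below is purely illustrative):

```java
// Illustrative model of optimistic locking: a write only succeeds when the
// version read at the start of the "transaction" is still the stored version.
class VersionedRow {
    long version;
    String data = "initial";

    // Returns true if the update won; false means a conflicting transaction
    // committed first, so the caller should roll back and retry.
    synchronized boolean update(long expectedVersion, String newData) {
        if (version != expectedVersion) {
            return false;
        }
        data = newData;
        version++;
        return true;
    }
}
```

Two writers that both read version 0 illustrate the conflict: the first `update(0, ...)` succeeds and bumps the version to 1, so the second call with the stale expected version is rejected.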


@ -66,7 +66,7 @@ Hibernate was changed slightly, once the implications of this were better unders
The underlying issue is that the actual semantics of the application itself changes in these cases.
====
Starting with version 3.2.3, Hibernate comes with a set of https://in.relation.to/2082.lace[enhanced] identifier generators targeting portability in a much different way.
[NOTE]
====


@ -909,7 +909,7 @@ include::{extrasdir}/hql-distinct-entity-query-example.sql[]
----
====
In this case, the `DISTINCT` SQL keyword is undesirable since it does a redundant result set sorting, as explained https://in.relation.to/2016/08/04/introducing-distinct-pass-through-query-hint/[in this blog post].
To fix this issue, Hibernate 5.2.2 added support for the `HINT_PASS_DISTINCT_THROUGH` entity query hint:
[[hql-distinct-entity-query-hint-example]]


@ -15,13 +15,13 @@ It supports most of the functions described by the OGC Simple Feature Specificat
PostgreSQL/PostGIS, MySQL, Microsoft SQL Server and H2/GeoDB.
Spatial data types are not part of the Java standard library, and they are absent from the JDBC specification.
Over the years https://tsusiatsoftware.net/jts/main.html[JTS] has emerged as the _de facto_ standard to fill this gap. JTS is
an implementation of the https://portal.opengeospatial.org/files/?artifact_id=829[Simple Feature Specification (SFS)]. Many databases
on the other hand implement the SQL/MM - Part 3: Spatial Data specification - a related but broader specification. The biggest difference is that
SFS is limited to 2D geometries in the projected plane (although JTS supports 3D coordinates), whereas
SQL/MM supports 2-, 3- or 4-dimensional coordinate spaces.
Hibernate Spatial supports two different geometry models: https://tsusiatsoftware.net/jts/main.html[JTS] and
https://github.com/GeoLatte/geolatte-geom[geolatte-geom]. As already mentioned, JTS is the _de facto_
standard. Geolatte-geom (also written by the lead developer of Hibernate Spatial) is a more recent library that
supports many features specified in SQL/MM but not available in JTS (such as support for 4D geometries, and support for extended WKT/WKB formats).
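As a sketch, a JTS-based mapping can be as simple as a plain attribute of a JTS geometry type (the `Event` entity is hypothetical, and the JTS package root depends on the JTS release in use, e.g. `com.vividsolutions.jts` in older versions versus `org.locationtech.jts` in newer ones):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.locationtech.jts.geom.Point;

// Hibernate Spatial persists the JTS Point through its geometry type support.
@Entity
public class Event {

    @Id
    private Long id;

    private Point location;
}
```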


@ -40,7 +40,7 @@ or provide a custom `org.hibernate.resource.transaction.TransactionCoordinatorBu
[NOTE]
====
For details on implementing a custom `TransactionCoordinatorBuilder`, or simply better understanding how it works, see the
https://docs.jboss.org/hibernate/orm/{majorMinorVersion}/integrationguide/html_single/Hibernate_Integration_Guide.html[Integration Guide].
====
Hibernate uses JDBC connections and JTA resources directly, without adding any additional locking behavior.


@ -374,7 +374,7 @@ jar {
'Implementation-Version': project.version,
'Implementation-Vendor': 'Hibernate.org',
'Implementation-Vendor-Id': 'org.hibernate',
'Implementation-Url': 'https://hibernate.org/orm',
// Java 9 module name
'Automatic-Module-Name': project.java9ModuleName,
@ -387,7 +387,7 @@ jar {
'Bundle-Name': project.name,
'Bundle-SymbolicName': project.java9ModuleName,
'Bundle-Vendor': 'Hibernate.org',
'Bundle-DocURL': "https://www.hibernate.org/orm/${project.ormVersion.family}",
// This is overridden in some sub-projects
'Import-Package': [
// Temporarily support JTA 1.1 -- Karaf and other frameworks still
@ -418,7 +418,7 @@ task sourcesJar(type: Jar) {
'Implementation-Version': project.version,
'Implementation-Vendor': 'Hibernate.org',
'Implementation-Vendor-Id': 'org.hibernate',
'Implementation-Url': 'https://hibernate.org/orm',
// Hibernate-specific JAR manifest attributes
'Hibernate-VersionFamily': project.ormVersion.family,
@ -453,7 +453,7 @@ task javadocJar(type: Jar) {
'Implementation-Version': project.version,
'Implementation-Vendor': 'Hibernate.org',
'Implementation-Vendor-Id': 'org.hibernate',
'Implementation-Url': 'https://hibernate.org/orm',
// Hibernate-specific JAR manifest attributes
'Hibernate-VersionFamily': project.ormVersion.family,


@ -27,13 +27,13 @@ javadoc {
// doclet = 'org.asciidoctor.Asciidoclet'
windowTitle = "$project.name JavaDocs"
docTitle = "$project.name JavaDocs ($project.version)"
bottom = "Copyright &copy; 2001-$currentYear <a href=\"https://redhat.com\">Red Hat, Inc.</a> All Rights Reserved."
use = true
encoding = 'UTF-8'
links += [
'https://docs.oracle.com/javase/8/docs/api/',
'https://docs.jboss.org/hibernate/beanvalidation/spec/2.0/api/',
'https://docs.jboss.org/cdi/api/2.0/',
'https://javaee.github.io/javaee-spec/javadocs/'
]
tags = [ "apiNote", 'implSpec', 'implNote', 'todo' ]


@ -93,7 +93,7 @@ ext {
jakarta_interceptor: 'jakarta.interceptor:jakarta.interceptor-api:2.0.0',
jakarta_activation: 'jakarta.activation:jakarta.activation-api:2.0.1',
jakarta_resource: 'jakarta.resource:jakarta.resource-api:2.0.0',
jakarta_jaxb_api: 'jakarta.xml.bind:jakarta.xml.bind-api:3.0.1',
jakarta_jaxb_runtime: "org.glassfish.jaxb:jaxb-runtime:${jakartaJaxbRuntimeVersion}",
jakarta_cdi: 'jakarta.enterprise:jakarta.enterprise.cdi-api:3.0.0',


@ -33,25 +33,25 @@ publishing {
version = project.version
description = project.description
url = 'https://hibernate.org/orm'
organization {
name = 'Hibernate.org'
url = 'https://hibernate.org'
}
licenses {
license {
name = 'GNU Library General Public License v2.1 or later'
url = 'https://www.opensource.org/licenses/LGPL-2.1'
comments = 'See discussion at https://hibernate.org/community/license/ for more details.'
distribution = 'repo'
}
}
scm {
url = 'https://github.com/hibernate/hibernate-orm'
connection = 'scm:git:https://github.com/hibernate/hibernate-orm.git'
developerConnection = 'scm:git:git@github.com:hibernate/hibernate-orm.git'
}
@ -60,7 +60,7 @@ publishing {
id = 'hibernate-team' id = 'hibernate-team'
name = 'The Hibernate Development Team' name = 'The Hibernate Development Team'
organization = 'Hibernate.org' organization = 'Hibernate.org'
organizationUrl = 'http://hibernate.org' organizationUrl = 'https://hibernate.org'
} }
} }

View File

@@ -19,25 +19,25 @@ publishing {
 pom {
 name = 'Hibernate ORM - ' + project.name
 description = project.description
-url = 'http://hibernate.org/orm'
+url = 'https://hibernate.org/orm'
 organization {
 name = 'Hibernate.org'
-url = 'http://hibernate.org'
+url = 'https://hibernate.org'
 }
 licenses {
 license {
 name = 'GNU Library General Public License v2.1 or later'
-url = 'http://www.opensource.org/licenses/LGPL-2.1'
-comments = 'See discussion at http://hibernate.org/community/license/ for more details.'
+url = 'https://www.opensource.org/licenses/LGPL-2.1'
+comments = 'See discussion at https://hibernate.org/community/license/ for more details.'
 distribution = 'repo'
 }
 }
 scm {
-url = 'http://github.com/hibernate/hibernate-orm'
-connection = 'scm:git:http://github.com/hibernate/hibernate-orm.git'
+url = 'https://github.com/hibernate/hibernate-orm'
+connection = 'scm:git:https://github.com/hibernate/hibernate-orm.git'
 developerConnection = 'scm:git:git@github.com:hibernate/hibernate-orm.git'
 }
@@ -51,7 +51,7 @@ publishing {
 id = 'hibernate-team'
 name = 'The Hibernate Development Team'
 organization = 'Hibernate.org'
-organizationUrl = 'http://hibernate.org'
+organizationUrl = 'https://hibernate.org'
 }
 }

View File

@@ -16,7 +16,7 @@ import org.hibernate.dialect.identity.IdentityColumnSupport;
 import org.hibernate.dialect.identity.Oracle12cIdentityColumnSupport;
 import org.hibernate.dialect.pagination.LegacyOracleLimitHandler;
 import org.hibernate.dialect.pagination.LimitHandler;
-import org.hibernate.dialect.pagination.OffsetFetchLimitHandler;
+import org.hibernate.dialect.pagination.Oracle12LimitHandler;
 import org.hibernate.dialect.sequence.OracleSequenceSupport;
 import org.hibernate.dialect.sequence.SequenceSupport;
 import org.hibernate.engine.config.spi.ConfigurationService;
@@ -120,7 +120,7 @@ public class OracleDialect extends Dialect {
 limitHandler = getVersion() < 1200
 		? new LegacyOracleLimitHandler( getVersion() )
-		: OffsetFetchLimitHandler.INSTANCE;
+		: Oracle12LimitHandler.INSTANCE;
 }
 @Override

View File

@@ -263,7 +263,7 @@ public class SQLServerDialect extends AbstractTransactSQLDialect {
 @Override
 public String getCurrentSchemaCommand() {
-	return "SELECT SCHEMA_NAME()";
+	return "select schema_name()";
 }
 @Override

View File

@@ -35,7 +35,7 @@ public class IndexQueryHintHandler implements QueryHintHandler {
 String endToken = matcher.group( 2 );
 return new StringBuilder( startToken )
-		.append( " USE INDEX (" )
+		.append( " use index (" )
 		.append( hints )
 		.append( ") " )
 		.append( endToken )
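The handler above splits the statement into two regex groups and splices `use index (…)` between them. A minimal self-contained sketch of that splice, for illustration only (the pattern and the `addIndexHint` helper are simplifications, not Hibernate's actual code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IndexHintSketch {
    // Group 1: everything up to and including the table alias;
    // group 2: the rest of the statement. A simplification of the
    // kind of pattern IndexQueryHintHandler matches against.
    private static final Pattern FROM_TABLE =
            Pattern.compile("^(select.+?from\\s+\\S+\\s+\\S+)(.*)$", Pattern.CASE_INSENSITIVE);

    static String addIndexHint(String sql, String hints) {
        Matcher matcher = FROM_TABLE.matcher(sql);
        if (!matcher.matches()) {
            return sql; // leave unrecognized statements untouched
        }
        return matcher.group(1) + " use index (" + hints + ")" + matcher.group(2);
    }

    public static void main(String[] args) {
        // prints: select g.title from game g use index (idx_game_id) where g.title = ?
        System.out.println(addIndexHint(
                "select g.title from game g where g.title = ?", "idx_game_id"));
    }
}
```

Non-`select` statements fall through unchanged, which is why the handler can be applied unconditionally.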

View File

@@ -9,6 +9,7 @@ package org.hibernate.dialect.pagination;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
+import org.hibernate.engine.spi.QueryParameters;
 import org.hibernate.engine.spi.RowSelection;
 import org.hibernate.query.Limit;
 import org.hibernate.query.spi.QueryOptions;
@@ -54,6 +55,18 @@ public interface LimitHandler {
 	);
 }
+default String processSql(String sql, Limit limit, QueryOptions queryOptions) {
+	return processSql(
+			sql,
+			limit == null ? null : new RowSelection(
+					limit.getFirstRow(),
+					limit.getMaxRows(),
+					null,
+					null
+			)
+	);
+}
 default int bindLimitParametersAtStartOfQuery(Limit limit, PreparedStatement statement, int index)
 		throws SQLException {
 	return bindLimitParametersAtStartOfQuery(
@@ -97,7 +110,7 @@ public interface LimitHandler {
 /**
  * Return processed SQL query.
  *
  * @param sql the SQL query to process.
  * @param selection the selection criteria for rows.
  *
  * @return Query statement with LIMIT clause applied.
@@ -106,6 +119,21 @@ public interface LimitHandler {
 @Deprecated
 String processSql(String sql, RowSelection selection);
+/**
+ * Return processed SQL query.
+ *
+ * @param sql the SQL query to process.
+ * @param queryParameters the queryParameters.
+ *
+ * @return Query statement with LIMIT clause applied.
+ * @deprecated Use {@link #processSql(String, Limit, QueryOptions)}
+ * todo (6.0): remove in favor of Limit version?
+ */
+@Deprecated
+default String processSql(String sql, QueryParameters queryParameters ){
+	return processSql( sql, queryParameters.getRowSelection() );
+}
 /**
  * Bind parameter values needed by the limit and offset clauses
  * right at the start of the original query statement, before all
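The new `processSql(String, Limit, QueryOptions)` default above bridges new callers onto the older `RowSelection` contract, so existing `LimitHandler` implementations keep working without changes. The default-method bridging idiom, reduced to a self-contained sketch (the `PagingHandler` and `Limit` names here are illustrative, not Hibernate API):

```java
// Default-method bridging: new callers use the Limit overload, while
// implementations that only supply the legacy method keep compiling.
interface PagingHandler {
    /** Legacy contract: max rows passed directly. */
    @Deprecated
    String process(String sql, int maxRows);

    /** New contract: delegates to the legacy one by default. */
    default String process(String sql, Limit limit) {
        return process(sql, limit == null ? 0 : limit.maxRows);
    }

    final class Limit {
        final int maxRows;
        Limit(int maxRows) { this.maxRows = maxRows; }
    }
}

public class PagingHandlerDemo {
    public static void main(String[] args) {
        // A purely legacy implementation, untouched by the new overload:
        PagingHandler handler =
                (sql, maxRows) -> sql + " fetch first " + maxRows + " rows only";
        // prints: select * from person fetch first 10 rows only
        System.out.println(handler.process("select * from person", new PagingHandler.Limit(10)));
    }
}
```

Overriding the default (as `Oracle12LimitHandler` does) lets a dialect take the new arguments directly while the bridge covers everyone else.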

View File

@@ -0,0 +1,209 @@
/*
* Hibernate, Relational Persistence for Idiomatic Java
*
* License: GNU Lesser General Public License (LGPL), version 2.1 or later.
* See the lgpl.txt file in the root directory or <http://www.gnu.org/licenses/lgpl-2.1.html>.
*/
package org.hibernate.dialect.pagination;
import java.util.Locale;
import org.hibernate.LockMode;
import org.hibernate.LockOptions;
import org.hibernate.engine.spi.QueryParameters;
import org.hibernate.engine.spi.RowSelection;
import org.hibernate.query.Limit;
import org.hibernate.query.spi.QueryOptions;
/**
* A {@link LimitHandler} for databases which support the
* ANSI SQL standard syntax {@code FETCH FIRST m ROWS ONLY}
* and {@code OFFSET n ROWS FETCH NEXT m ROWS ONLY}.
*
* @author Gavin King
*/
public class Oracle12LimitHandler extends AbstractLimitHandler {
private boolean bindLimitParametersInReverseOrder;
private boolean useMaxForLimit;
private boolean supportOffset;
public static final Oracle12LimitHandler INSTANCE = new Oracle12LimitHandler();
Oracle12LimitHandler() {
}
@Override
public String processSql(String sql, RowSelection selection) {
final boolean hasFirstRow = hasFirstRow( selection );
final boolean hasMaxRows = hasMaxRows( selection );
if ( !hasMaxRows ) {
return sql;
}
return processSql( sql, getForUpdateIndex( sql ), hasFirstRow );
}
@Override
public String processSql(String sql, Limit limit, QueryOptions queryOptions) {
final boolean hasMaxRows = hasMaxRows( limit );
if ( !hasMaxRows ) {
return sql;
}
return processSql(
sql,
hasFirstRow( limit ),
queryOptions.getLockOptions()
);
}
@Override
public String processSql(String sql, QueryParameters queryParameters) {
final RowSelection selection = queryParameters.getRowSelection();
final boolean hasMaxRows = hasMaxRows( selection );
if ( !hasMaxRows ) {
return sql;
}
return processSql(
sql,
hasFirstRow( selection ),
queryParameters.getLockOptions()
);
}
protected String processSql(String sql, boolean hasFirstRow, LockOptions lockOptions) {
if ( lockOptions != null ) {
final LockMode lockMode = lockOptions.getLockMode();
switch ( lockMode ) {
case UPGRADE:
case PESSIMISTIC_READ:
case PESSIMISTIC_WRITE:
case UPGRADE_NOWAIT:
case FORCE:
case PESSIMISTIC_FORCE_INCREMENT:
case UPGRADE_SKIPLOCKED:
return processSql( sql, getForUpdateIndex( sql ), hasFirstRow );
default:
return processSqlOffsetFetch( sql, hasFirstRow );
}
}
return processSqlOffsetFetch( sql, hasFirstRow );
}
protected String processSqlOffsetFetch(String sql, boolean hasFirstRow) {
final int forUpdateLastIndex = getForUpdateIndex( sql );
if ( forUpdateLastIndex > -1 ) {
return processSql( sql, forUpdateLastIndex, hasFirstRow );
}
bindLimitParametersInReverseOrder = false;
useMaxForLimit = false;
supportOffset = true;
sql = normalizeStatement( sql );
final int offsetFetchLength;
final String offsetFetchString;
if ( hasFirstRow ) {
offsetFetchString = " offset ? rows fetch next ? rows only";
}
else {
offsetFetchString = " fetch first ? rows only";
}
offsetFetchLength = sql.length() + offsetFetchString.length();
return new StringBuilder( offsetFetchLength ).append( sql ).append( offsetFetchString ).toString();
}
protected String processSql(String sql, int forUpdateIndex, boolean hasFirstRow) {
bindLimitParametersInReverseOrder = true;
useMaxForLimit = true;
supportOffset = false;
sql = normalizeStatement( sql );
String forUpdateClause = null;
boolean isForUpdate = false;
if ( forUpdateIndex > -1 ) {
// save 'for update ...' and then remove it
forUpdateClause = sql.substring( forUpdateIndex );
sql = sql.substring( 0, forUpdateIndex - 1 );
isForUpdate = true;
}
final StringBuilder pagingSelect;
final int forUpdateClauseLength;
if ( forUpdateClause == null ) {
forUpdateClauseLength = 0;
}
else {
forUpdateClauseLength = forUpdateClause.length() + 1;
}
if ( hasFirstRow ) {
pagingSelect = new StringBuilder( sql.length() + forUpdateClauseLength + 98 );
pagingSelect.append( "select * from ( select row_.*, rownum rownum_ from ( " );
pagingSelect.append( sql );
pagingSelect.append( " ) row_ where rownum <= ?) where rownum_ > ?" );
}
else {
pagingSelect = new StringBuilder( sql.length() + forUpdateClauseLength + 37 );
pagingSelect.append( "select * from ( " );
pagingSelect.append( sql );
pagingSelect.append( " ) where rownum <= ?" );
}
if ( isForUpdate ) {
pagingSelect.append( " " );
pagingSelect.append( forUpdateClause );
}
return pagingSelect.toString();
}
private String normalizeStatement(String sql) {
return sql.trim().replaceAll( "\\s+", " " );
}
private int getForUpdateIndex(String sql) {
final int forUpdateLastIndex = sql.toLowerCase( Locale.ROOT ).lastIndexOf( "for update" );
// We need to recognize cases like : select a from t where b = 'for update';
final int lastIndexOfQuote = sql.lastIndexOf( "'" );
if ( forUpdateLastIndex > -1 ) {
if ( lastIndexOfQuote == -1 ) {
return forUpdateLastIndex;
}
if ( lastIndexOfQuote > forUpdateLastIndex ) {
return -1;
}
return forUpdateLastIndex;
}
return forUpdateLastIndex;
}
@Override
public final boolean supportsLimit() {
return true;
}
@Override
public boolean supportsOffset() {
return supportOffset;
}
@Override
public boolean bindLimitParametersInReverseOrder() {
return bindLimitParametersInReverseOrder;
}
@Override
public boolean useMaxForLimit() {
return useMaxForLimit;
}
}
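`Oracle12LimitHandler` above switches between two rewrites: the ANSI `offset ? rows fetch next ? rows only` form when no pessimistic lock is requested, and the classic `rownum` wrapping when a `for update` clause is present, since Oracle does not accept the row-limiting clause together with `FOR UPDATE`. A self-contained sketch of just the two string rewrites, outside of Hibernate:

```java
public class OraclePagingSketch {
    // ANSI row-limiting clause, usable when the statement carries no FOR UPDATE.
    static String offsetFetch(String sql, boolean hasFirstRow) {
        return hasFirstRow
                ? sql + " offset ? rows fetch next ? rows only"
                : sql + " fetch first ? rows only";
    }

    // Classic rownum wrapping, the fallback when FOR UPDATE is present:
    // an inner query numbers the rows, the outer query filters on them.
    static String rownumWrap(String sql, boolean hasFirstRow) {
        return hasFirstRow
                ? "select * from ( select row_.*, rownum rownum_ from ( " + sql
                        + " ) row_ where rownum <= ?) where rownum_ > ?"
                : "select * from ( " + sql + " ) where rownum <= ?";
    }

    public static void main(String[] args) {
        // prints: select * from person offset ? rows fetch next ? rows only
        System.out.println(offsetFetch("select * from person", true));
        // prints: select * from ( select * from person ) where rownum <= ?
        System.out.println(rownumWrap("select * from person", false));
    }
}
```

This is why the handler's `supportsOffset()`/`bindLimitParametersInReverseOrder()` answers change per statement: the two shapes bind their placeholders differently.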

View File

@@ -52,7 +52,7 @@ public final class ResourceRegistryStandardImpl implements ResourceRegistry {
 private final JdbcObserver jdbcObserver;
 private final HashMap<Statement, HashMap<ResultSet,Object>> xref = new HashMap<>();
-private final HashMap<ResultSet,Object> unassociatedResultSets = new HashMap<>();
+private HashMap<ResultSet,Object> unassociatedResultSets;
 private ArrayList<Blob> blobs;
 private ArrayList<Clob> clobs;
@@ -138,7 +138,7 @@ public final class ResourceRegistryStandardImpl implements ResourceRegistry {
 	}
 }
 else {
-	final Object removed = unassociatedResultSets.remove( resultSet );
+	final Object removed = unassociatedResultSets == null ? null : unassociatedResultSets.remove( resultSet );
 	if ( removed == null ) {
 		log.unregisteredResultSetWithoutStatement();
 	}
@@ -147,6 +147,9 @@ public final class ResourceRegistryStandardImpl implements ResourceRegistry {
 }
 private static void closeAll(final HashMap<ResultSet,Object> resultSets) {
+	if ( resultSets == null ) {
+		return;
+	}
 	resultSets.forEach( (resultSet, o) -> close( resultSet ) );
 	resultSets.clear();
 }
@@ -234,6 +237,9 @@ public final class ResourceRegistryStandardImpl implements ResourceRegistry {
 	resultSets.put( resultSet, PRESENT );
 }
 else {
+	if ( unassociatedResultSets == null ) {
+		this.unassociatedResultSets = new HashMap<ResultSet,Object>();
+	}
 	unassociatedResultSets.put( resultSet, PRESENT );
 }
 }
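The hunks above all apply one lazy-allocation pattern: `unassociatedResultSets` starts out null and is only created on first registration, so registries that never see an unassociated `ResultSet` skip the `HashMap` allocation entirely, at the cost of null checks on every reader. A minimal sketch of the pattern with a hypothetical `LazyRegistry` (not Hibernate code):

```java
import java.util.HashMap;
import java.util.Map;

// Lazy allocation: the map is created on first registration only,
// and every reader null-checks before touching it.
public class LazyRegistry<T> {
    private static final Object PRESENT = new Object();

    private Map<T, Object> entries; // null until first use

    public void register(T resource) {
        if (entries == null) {
            entries = new HashMap<>();
        }
        entries.put(resource, PRESENT);
    }

    public boolean release(T resource) {
        // guard: nothing was ever registered
        return entries != null && entries.remove(resource) != null;
    }
}
```

The trade is deliberate: one field read and null check per operation against one avoided allocation per registry instance, which pays off when most instances never use the map.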

View File

@@ -94,7 +94,8 @@ public class DeferredResultSetAccess extends AbstractResultSetAccess {
 limitHandler = dialect.getLimitHandler();
 sql = limitHandler.processSql(
 		jdbcSelect.getSql(),
-		limit
+		limit,
+		queryOptions
 );
 }

View File

@@ -206,7 +206,7 @@ public class NamedQueryCommentTest extends BaseEntityManagerFunctionalTestCase {
 sqlStatementInterceptor.assertExecutedCount(1);
 sqlStatementInterceptor.assertExecuted(
-		"/* COMMENT_SELECT_INDEX_game_title */ select namedquery0_.id as id1_0_, namedquery0_.title as title2_0_ from game namedquery0_ USE INDEX (idx_game_id) where namedquery0_.title=?" )
+		"/* COMMENT_SELECT_INDEX_game_title */ select namedquery0_.id as id1_0_, namedquery0_.title as title2_0_ from game namedquery0_ use index (idx_game_id) where namedquery0_.title=?" )
 ;
 } );
 }

View File

@@ -16,6 +16,8 @@ import org.hibernate.jpa.test.BaseEntityManagerFunctionalTestCase;
 import org.hibernate.testing.RequiresDialect;
 import org.hibernate.testing.TestForIssue;
+import org.junit.After;
+import org.junit.Before;
 import org.junit.Test;
 import static org.hibernate.testing.transaction.TransactionUtil.doInJPA;
@@ -34,6 +36,94 @@ public class OraclePaginationTest extends BaseEntityManagerFunctionalTestCase {
 };
 }
@Before
public void setUp() {
doInJPA( this::entityManagerFactory, entityManager -> {
entityManager.persist( new RootEntity( 1L, 7L, "t40", 2L ) );
entityManager.persist( new RootEntity( 16L, 1L, "t47", 2L ) );
entityManager.persist( new RootEntity( 11L, 2L, "t43", 2L ) );
entityManager.persist( new RootEntity( 6L, 4L, "t31", 2L ) );
entityManager.persist( new RootEntity( 15L, 1L, "t46", 2L ) );
entityManager.persist( new RootEntity( 2L, 6L, "t39", 2L ) );
entityManager.persist( new RootEntity( 14L, 1L, "t45", 2L ) );
entityManager.persist( new RootEntity( 4L, 5L, "t38", 2L ) );
entityManager.persist( new RootEntity( 8L, 2L, "t29", 2L ) );
entityManager.persist( new RootEntity( 17L, 1L, "t48", 2L ) );
entityManager.persist( new RootEntity( 3L, 3L, "t21", 2L ) );
entityManager.persist( new RootEntity( 7L, 2L, "t23", 2L ) );
entityManager.persist( new RootEntity( 9L, 2L, "t30", 2L ) );
entityManager.persist( new RootEntity( 10L, 3L, "t42", 2L ) );
entityManager.persist( new RootEntity( 12L, 1L, "t41", 2L ) );
entityManager.persist( new RootEntity( 5L, 6L, "t37", 1L ) );
entityManager.persist( new RootEntity( 13L, 1L, "t44", 1L ) );
} );
}
@After
public void tearDown() {
doInJPA( this::entityManagerFactory, entityManager -> {
entityManager.createQuery( "delete from RootEntity" ).executeUpdate();
} );
}
@Test
@TestForIssue(jiraKey = "HHH-12087")
public void testPagination() {
doInJPA( this::entityManagerFactory, entityManager -> {
List<RootEntity> rootEntitiesAllPages = getLimitedRows( entityManager, 0, 10 );
List<RootEntity> rootEntitiesFirst = getLimitedRows( entityManager, 0, 5 );
assertEquals( 5, rootEntitiesFirst.size() );
List<RootEntity> rootEntitiesSecond = getLimitedRows( entityManager, 5, 10 );
assertEquals( 10, rootEntitiesSecond.size() );
assertEquals( rootEntitiesAllPages.get( 0 ).getId(), rootEntitiesFirst.get( 0 ).getId() );
assertEquals( rootEntitiesAllPages.get( 1 ).getId(), rootEntitiesFirst.get( 1 ).getId() );
assertEquals( rootEntitiesAllPages.get( 2 ).getId(), rootEntitiesFirst.get( 2 ).getId() );
assertEquals( rootEntitiesAllPages.get( 3 ).getId(), rootEntitiesFirst.get( 3 ).getId() );
assertEquals( rootEntitiesAllPages.get( 4 ).getId(), rootEntitiesFirst.get( 4 ).getId() );
assertEquals( rootEntitiesAllPages.get( 5 ).getId(), rootEntitiesSecond.get( 0 ).getId() );
assertEquals( rootEntitiesAllPages.get( 6 ).getId(), rootEntitiesSecond.get( 1 ).getId() );
assertEquals( rootEntitiesAllPages.get( 7 ).getId(), rootEntitiesSecond.get( 2 ).getId() );
assertEquals( rootEntitiesAllPages.get( 8 ).getId(), rootEntitiesSecond.get( 3 ).getId() );
assertEquals( rootEntitiesAllPages.get( 9 ).getId(), rootEntitiesSecond.get( 4 ).getId() );
} );
}
@Test
public void testPaginationWithSetMaxResultsOnly() {
doInJPA( this::entityManagerFactory, entityManager -> {
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<RootEntity> cq = cb.createQuery( RootEntity.class );
Root<RootEntity> c = cq.from( RootEntity.class );
CriteriaQuery<RootEntity> select = cq.select( c ).orderBy( cb.desc( c.get( "status" ) ) );
TypedQuery<RootEntity> typedQuery = entityManager.createQuery( select );
typedQuery.setMaxResults( 10 );
List<RootEntity> resultList = typedQuery.getResultList();
assertEquals( 10, resultList.size() );
} );
}
private List<RootEntity> getAllRows(EntityManager em) {
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<RootEntity> cq = cb.createQuery( RootEntity.class );
Root<RootEntity> c = cq.from( RootEntity.class );
return em.createQuery( cq.select( c ).orderBy( cb.desc( c.get( "status" ) ) ) ).getResultList();
}
private List<RootEntity> getLimitedRows(EntityManager em, int start, int end) {
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<RootEntity> cq = cb.createQuery( RootEntity.class );
Root<RootEntity> c = cq.from( RootEntity.class );
CriteriaQuery<RootEntity> select = cq.select( c ).orderBy( cb.desc( c.get( "status" ) ) );
TypedQuery<RootEntity> typedQuery = em.createQuery( select );
typedQuery.setFirstResult( start );
typedQuery.setMaxResults( end );
return typedQuery.getResultList();
}
 @Entity(name = "RootEntity")
 @Table(name = "V_MYTABLE_LAST")
 public static class RootEntity implements Serializable {
@@ -63,69 +153,4 @@ public class OraclePaginationTest extends BaseEntityManagerFunctionalTestCase {
 }
 }
@Test
@TestForIssue(jiraKey = "HHH-12087")
public void testPagination() throws Exception {
doInJPA( this::entityManagerFactory, entityManager -> {
entityManager.persist( new RootEntity( 1L, 7L, "t40", 2L ) );
entityManager.persist( new RootEntity( 16L, 1L, "t47", 2L ) );
entityManager.persist( new RootEntity( 11L, 2L, "t43", 2L ) );
entityManager.persist( new RootEntity( 6L, 4L, "t31", 2L ) );
entityManager.persist( new RootEntity( 15L, 1L, "t46", 2L ) );
entityManager.persist( new RootEntity( 2L, 6L, "t39", 2L ) );
entityManager.persist( new RootEntity( 14L, 1L, "t45", 2L ) );
entityManager.persist( new RootEntity( 4L, 5L, "t38", 2L ) );
entityManager.persist( new RootEntity( 8L, 2L, "t29", 2L ) );
entityManager.persist( new RootEntity( 17L, 1L, "t48", 2L ) );
entityManager.persist( new RootEntity( 3L, 3L, "t21", 2L ) );
entityManager.persist( new RootEntity( 7L, 2L, "t23", 2L ) );
entityManager.persist( new RootEntity( 9L, 2L, "t30", 2L ) );
entityManager.persist( new RootEntity( 10L, 3L, "t42", 2L ) );
entityManager.persist( new RootEntity( 12L, 1L, "t41", 2L ) );
entityManager.persist( new RootEntity( 5L, 6L, "t37", 1L ) );
entityManager.persist( new RootEntity( 13L, 1L, "t44", 1L ) );
} );
doInJPA( this::entityManagerFactory, entityManager -> {
List<RootEntity> rootEntitiesAllPages = getLimitedRows( entityManager, 0, 10 );
List<RootEntity> rootEntitiesFirst = getLimitedRows( entityManager, 0, 5 );
List<RootEntity> rootEntitiesSecond = getLimitedRows( entityManager, 5, 10 );
assertEquals( rootEntitiesAllPages.get( 0 ).getId(), rootEntitiesFirst.get( 0 ).getId() );
assertEquals( rootEntitiesAllPages.get( 1 ).getId(), rootEntitiesFirst.get( 1 ).getId() );
assertEquals( rootEntitiesAllPages.get( 2 ).getId(), rootEntitiesFirst.get( 2 ).getId() );
assertEquals( rootEntitiesAllPages.get( 3 ).getId(), rootEntitiesFirst.get( 3 ).getId() );
assertEquals( rootEntitiesAllPages.get( 4 ).getId(), rootEntitiesFirst.get( 4 ).getId() );
assertEquals( rootEntitiesAllPages.get( 5 ).getId(), rootEntitiesSecond.get( 0 ).getId() );
assertEquals( rootEntitiesAllPages.get( 6 ).getId(), rootEntitiesSecond.get( 1 ).getId() );
assertEquals( rootEntitiesAllPages.get( 7 ).getId(), rootEntitiesSecond.get( 2 ).getId() );
assertEquals( rootEntitiesAllPages.get( 8 ).getId(), rootEntitiesSecond.get( 3 ).getId() );
assertEquals( rootEntitiesAllPages.get( 9 ).getId(), rootEntitiesSecond.get( 4 ).getId() );
} );
}
private List<RootEntity> getAllRows(EntityManager em) {
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<RootEntity> cq = cb.createQuery( RootEntity.class );
Root<RootEntity> c = cq.from( RootEntity.class );
return em.createQuery( cq.select( c ).orderBy( cb.desc( c.get( "status" ) ) ) ).getResultList();
}
private List<RootEntity> getLimitedRows(EntityManager em, int start, int end) {
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<RootEntity> cq = cb.createQuery( RootEntity.class );
Root<RootEntity> c = cq.from( RootEntity.class );
CriteriaQuery<RootEntity> select = cq.select( c ).orderBy( cb.desc( c.get( "status" ) ) );
TypedQuery<RootEntity> typedQuery = em.createQuery( select );
typedQuery.setFirstResult( start );
typedQuery.setMaxResults( end );
return typedQuery.getResultList();
}
private void createRootEntity(EntityManager entityManager, Long id, Long version, String caption, String status) {
}
} }

View File

@@ -0,0 +1,250 @@
package org.hibernate.test.pagination;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;
import org.hibernate.LockMode;
import org.hibernate.LockOptions;
import org.hibernate.dialect.OracleDialect;
import org.hibernate.resource.jdbc.spi.StatementInspector;
import org.hibernate.testing.TestForIssue;
import org.hibernate.testing.orm.junit.DomainModel;
import org.hibernate.testing.orm.junit.RequiresDialect;
import org.hibernate.testing.orm.junit.SessionFactory;
import org.hibernate.testing.orm.junit.SessionFactoryScope;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
@RequiresDialect(value = OracleDialect.class, version = 12)
@TestForIssue(jiraKey = "HHH-14624")
@DomainModel(
annotatedClasses = OraclePaginationWithLocksTest.Person.class
)
@SessionFactory(
statementInspectorClass = OraclePaginationWithLocksTest.MostRecentStatementInspector.class
)
public class OraclePaginationWithLocksTest {
private MostRecentStatementInspector mostRecentStatementInspector;
@BeforeEach
public void setUp(SessionFactoryScope scope) {
scope.inTransaction(
session -> {
for ( int i = 0; i < 20; i++ ) {
session.persist( new Person( "name" + i ) );
}
session.persist( new Person( "for update" ) );
}
);
mostRecentStatementInspector = (MostRecentStatementInspector) scope.getStatementInspector();
}
@AfterEach
public void tearDown(SessionFactoryScope scope) {
scope.inTransaction(
session ->
session.createQuery( "delete from Person" ).executeUpdate()
);
}
@Test
public void testNativeQuery(SessionFactoryScope scope) {
scope.inTransaction(
session -> {
final List<Person> people = session.createNativeQuery( "select * from Person for update" )
.setMaxResults( 10 )
.list();
assertEquals( 10, people.size() );
assertFalse( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
final List<Person> people = session.createNativeQuery( "select * from Person" )
.setMaxResults( 10 )
.list();
assertEquals( 10, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
final List<Person> people = session.createNativeQuery( "select * from Person" )
.setFirstResult( 3 )
.setMaxResults( 10 )
.list();
assertEquals( 10, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
}
@Test
public void testCriteriaQuery(SessionFactoryScope scope) {
scope.inTransaction(
session -> {
final CriteriaQuery<Person> query = session.getCriteriaBuilder().createQuery( Person.class );
final Root<Person> root = query.from( Person.class );
query.select( root );
final List<Person> people = session.createQuery( query )
.setMaxResults( 10 )
.setLockOptions( new LockOptions( LockMode.PESSIMISTIC_WRITE ).setFollowOnLocking( false ) )
.getResultList();
assertEquals( 10, people.size() );
assertFalse( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
final CriteriaQuery<Person> query = session.getCriteriaBuilder().createQuery( Person.class );
final Root<Person> root = query.from( Person.class );
query.select( root );
final List<Person> people = session.createQuery( query )
.setMaxResults( 10 )
.getResultList();
assertEquals( 10, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
final CriteriaQuery<Person> query = session.getCriteriaBuilder().createQuery( Person.class );
final Root<Person> root = query.from( Person.class );
query.select( root );
final List<Person> people = session.createQuery( query )
.setMaxResults( 10 )
.setFirstResult( 2 )
.getResultList();
assertEquals( 10, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
}
@Test
public void testHqlQuery(SessionFactoryScope scope) {
scope.inTransaction(
session -> {
List<Person> people = session.createQuery(
"select p from Person p", Person.class )
.setMaxResults( 10 )
.setLockOptions( new LockOptions( LockMode.PESSIMISTIC_WRITE ).setFollowOnLocking( false ) )
.getResultList();
assertEquals( 10, people.size() );
assertFalse( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
List<Person> people = session.createQuery(
"select p from Person p", Person.class )
.setMaxResults( 10 )
.getResultList();
assertEquals( 10, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
List<Person> people = session.createQuery(
"select p from Person p", Person.class )
.setFirstResult( 2 )
.setMaxResults( 10 )
.getResultList();
assertEquals( 10, people.size() );
assertEquals( 10, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
List<Person> people = session.createQuery(
"select p from Person p where p.name = 'for update'", Person.class )
.setMaxResults( 10 )
.setLockOptions( new LockOptions( LockMode.PESSIMISTIC_WRITE ).setFollowOnLocking( false ) )
.getResultList();
assertEquals( 1, people.size() );
assertFalse( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
scope.inTransaction(
session -> {
List<Person> people = session.createQuery(
"select p from Person p where p.name = 'for update'", Person.class )
.setMaxResults( 10 )
.getResultList();
assertEquals( 1, people.size() );
assertTrue( mostRecentStatementInspector.sqlContains( "fetch" ) );
}
);
}
public static class MostRecentStatementInspector implements StatementInspector {
private String mostRecentSql;
public String inspect(String sql) {
mostRecentSql = sql;
return sql;
}
public boolean sqlContains(String toCheck) {
return mostRecentSql.contains( toCheck );
}
}
@Entity(name = "Person")
public static class Person {
@Id
@GeneratedValue
private Long id;
private String name;
public Person() {
}
public Person(String name) {
this.name = name;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
}

tck/summary.md Normal file
View File

@@ -0,0 +1,58 @@
# Hibernate ORM's TCK Results
As required by the [Eclipse Foundation Technology Compatibility Kit License](https://www.eclipse.org/legal/tck.php),
the following is a summary of the TCK results for releases of Hibernate ORM.
## 5.5.0.Final Certification Request for Jakarta Persistence 3.0
* Organization Name ("Organization") and, if applicable, URL: Red Hat
* Product Name, Version and download URL (if applicable): [Hibernate ORM 5.5.0.Final](http://hibernate.org/orm/releases/5.5/)
* Specification Name, Version and download URL: [Jakarta Persistence 3.0](https://jakarta.ee/specifications/persistence/3.0/)
* TCK Version, digital SHA-256 fingerprint and download URL: [Jakarta Persistence TCK 3.0.0](https://download.eclipse.org/jakartaee/persistence/3.0/jakarta-persistence-tck-3.0.0.zip), SHA-256: b08c8887f00306f8bb7ebe54c4c810f3452519f5395733637ccc639b5081aebf
* Public URL of TCK Results Summary: [TCK results summary](https://github.com/hibernate/hibernate-orm/blob/main/tck/summary.md)
* Any Additional Specification Certification Requirements: None
* Java runtime used to run the implementation: Oracle JDK 1.8.0_292-b10
* Summary of the information for the certification environment, operating system, cloud, ...: Apache Derby 10.13.1.1, Linux
* I acknowledge that the Organization I represent accepts the terms of the [EFTL](https://www.eclipse.org/legal/tck.php).
* I attest that all TCK requirements have been met, including any compatibility rules.
Test results:
```
[javatest.batch] ********************************************************************************
[javatest.batch] Number of tests completed: 2055 (2055 passed, 0 failed, 0 with errors)
[javatest.batch] Number of tests remaining: 3
[javatest.batch] ********************************************************************************
[javatest.batch] Completed running 2055 tests.
[javatest.batch] Number of Tests Passed = 2055
[javatest.batch] Number of Tests Failed = 0
[javatest.batch] Number of Tests with Errors = 0
[javatest.batch] ********************************************************************************
```
## 5.5.0.Final Certification Request for Jakarta Persistence 2.2
* Organization Name ("Organization") and, if applicable, URL: Red Hat
* Product Name, Version and download URL (if applicable): [Hibernate ORM 5.5.0.Final](http://hibernate.org/orm/releases/5.5/)
* Specification Name, Version and download URL: [Jakarta Persistence 2.2](https://jakarta.ee/specifications/persistence/2.2/)
* TCK Version, digital SHA-256 fingerprint and download URL: [Jakarta Persistence TCK 2.2.0](https://download.eclipse.org/jakartaee/persistence/2.2/jakarta-persistence-tck-2.2.0.zip), SHA-256: c9cdc30e0e462e875c80f0bd46b964dd8aa8c2a3b69ade49d11df7652f3b5c39
* Public URL of TCK Results Summary: [TCK results summary](https://github.com/hibernate/hibernate-orm/blob/main/tck/summary.md)
* Any Additional Specification Certification Requirements: None
* Java runtime used to run the implementation: Oracle JDK 1.8.0_292-b10
* Summary of the information for the certification environment, operating system, cloud, ...: Apache Derby 10.13.1.1, Linux
* I acknowledge that the Organization I represent accepts the terms of the [EFTL](https://www.eclipse.org/legal/tck.php).
* I attest that all TCK requirements have been met, including any compatibility rules.
Test results:
```
[javatest.batch] ********************************************************************************
[javatest.batch] Number of tests completed: 2055 (2055 passed, 0 failed, 0 with errors)
[javatest.batch] Number of tests remaining: 3
[javatest.batch] ********************************************************************************
[javatest.batch] Completed running 2055 tests.
[javatest.batch] Number of Tests Passed = 2055
[javatest.batch] Number of Tests Failed = 0
[javatest.batch] Number of Tests with Errors = 0
[javatest.batch] ********************************************************************************
```