Merge branch master into metamodel

commit 18e44d1739
@@ -32,3 +32,10 @@ bin
 # Miscellaneous
 *.log
 .clover
+
+# JBoss Transactions
+ObjectStore
+
+# Profiler and heap dumps
+*.jps
+*.hprof
@@ -0,0 +1,45 @@
+Guidelines for Contributing
+====
+Contributions from the community are essential in keeping Hibernate (any Open Source
+project really) strong and successful. While we try to keep requirements for
+contributing to a minimum, there are a few guidelines we ask that you mind.
+
+## Getting Started
+If you are just getting started with Git, GitHub and/or contributing to Hibernate via
+GitHub, there are a few prerequisite steps.
+
+* Make sure you have a [Hibernate Jira account](https://hibernate.onjira.com)
+* Make sure you have a [GitHub account](https://github.com/signup/free)
+* [Fork](http://help.github.com/fork-a-repo) the Hibernate repository. As discussed in
+  the linked page, this also includes:
+    * [Set](https://help.github.com/articles/set-up-git) up your local git install
+    * Clone your fork
+
+## Create the working (topic) branch
+Create a "topic" branch on which you will work. The convention is to name the branch
+using the JIRA issue key. If there is not already a Jira issue covering the work you
+want to do, create one. Assuming you will be working from the master branch and working
+on the Jira issue HHH-123: `git checkout -b HHH-123 master`
+
+## Code
+Do yo thang!
+
+## Commit
+
+* Make commits of logical units.
+* Be sure to use the JIRA issue key in the commit message. This is how Jira will pick
+  up the related commits and display them on the Jira issue.
+* Make sure you have added the necessary tests for your changes.
+* Run _all_ the tests to assure nothing else was accidentally broken.
+
+_Prior to committing, if you want to pull in the latest upstream changes (highly
+appreciated btw), please use rebasing rather than merging. Merging creates
+"merge commits" that really muck up the project timeline._
+
+## Submit
+
+* Sign the [Contributor License Agreement](https://cla.jboss.org/index.seam).
+* Push your changes to a topic branch in your fork of the repository.
+* Initiate a [pull request](http://help.github.com/send-pull-requests/).
+* Update the Jira issue, adding a comment including a link to the created pull request.
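
A hedged sketch of the rebase workflow described above, assuming the fork's upstream remote is named `upstream` (that name is a common convention, not part of this commit):

    git fetch upstream
    git rebase upstream/master HHH-123
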
@@ -38,7 +38,11 @@ Executing Tasks
 Gradle uses the concept of build tasks (equivalent to Ant targets). You can get a list of available tasks
 via
 
-    gradle --tasks
+    gradle tasks
+
+or if using gradle wrapper
+
+    ./gradlew tasks
 
 ### Executing Tasks Across All Modules
@@ -179,6 +179,10 @@ subprojects { subProject ->
             systemProperty entry.key, entry.value
         }
     }
+}
+test {
+    systemProperties['hibernate.test.validatefailureexpected'] = true
+    systemProperties += System.properties.findAll { it.key.startsWith( "hibernate.") }
     maxHeapSize = "1024m"
 }
@@ -54,7 +54,7 @@ class DatabaseAllocator {
 			"postgresql82", "postgresql83", "postgresql84", "postgresql91",
 			"mysql50", "mysql51","mysql55",
 			"db2-91", "db2-97",
-			"mssql2005", "mssql2008R1", "mssql2008R2",
+			"mssql2005", "mssql2008R1", "mssql2008R2", "mssql2012",
 			"sybase155", "sybase157"
 	];
@@ -0,0 +1,228 @@
+<?xml version='1.0' encoding="UTF-8"?>
+
+<chapter xml:id="fetching"
+         xmlns="http://docbook.org/ns/docbook"
+         xmlns:xi="http://www.w3.org/2001/XInclude"
+         xmlns:xl="http://www.w3.org/1999/xlink">
+
+    <title>Fetching</title>
+
+    <para>
+        Fetching, essentially, is the process of grabbing data from the database and making it available to the
+        application. Tuning how an application does fetching is one of the biggest factors in determining how an
+        application will perform. Fetching too much data, in terms of width (values/columns) and/or
+        depth (results/rows), adds unnecessary overhead in terms of both JDBC communication and ResultSet processing.
+        Fetching too little data causes additional fetches to be needed. Tuning how an application
+        fetches data presents a great opportunity to influence the application's overall performance.
+    </para>
+
+    <section>
+        <title>The basics</title>
+
+        <para>
+            The concept of fetching breaks down into two different questions.
+            <itemizedlist>
+                <listitem>
+                    <para>
+                        When should the data be fetched? Now? Later?
+                    </para>
+                </listitem>
+                <listitem>
+                    <para>
+                        How should the data be fetched?
+                    </para>
+                </listitem>
+            </itemizedlist>
+        </para>
+
+        <note>
+            <para>
+                "now" is generally termed <phrase>eager</phrase> or <phrase>immediate</phrase>. "later" is
+                generally termed <phrase>lazy</phrase> or <phrase>delayed</phrase>.
+            </para>
+        </note>
+
+        <para>
+            There are a number of scopes for defining fetching:
+            <itemizedlist>
+                <listitem>
+                    <para>
+                        <emphasis>static</emphasis> - Static definition of fetching strategies is done in the
+                        mappings. The statically-defined fetch strategies are used in the absence of any dynamically
+                        defined strategies <footnote><para>Except in the case of HQL/JPQL; see xyz</para></footnote>.
+                    </para>
+                </listitem>
+                <listitem>
+                    <para>
+                        <emphasis>dynamic</emphasis> (sometimes referred to as runtime) - Dynamic definition is
+                        really use-case centric. There are two main ways to define dynamic fetching:
+                    </para>
+                    <itemizedlist>
+                        <listitem>
+                            <para>
+                                <emphasis>fetch profiles</emphasis> - defined in mappings, but can be
+                                enabled/disabled on the Session.
+                            </para>
+                        </listitem>
+                        <listitem>
+                            <para>
+                                HQL/JPQL and both Hibernate and JPA Criteria queries have the ability to specify
+                                fetching, specific to said query.
+                            </para>
+                        </listitem>
+                    </itemizedlist>
+                </listitem>
+            </itemizedlist>
+        </para>
+
+        <variablelist>
+            <title>The strategies</title>
+            <varlistentry>
+                <term>SELECT</term>
+                <listitem>
+                    <para>
+                        Performs a separate SQL select to load the data. This can either be EAGER (the second select
+                        is issued immediately) or LAZY (the second select is delayed until the data is needed). This
+                        is the strategy generally termed <phrase>N+1</phrase>.
+                    </para>
+                </listitem>
+            </varlistentry>
+            <varlistentry>
+                <term>JOIN</term>
+                <listitem>
+                    <para>
+                        Inherently an EAGER style of fetching. The data to be fetched is obtained through the use of
+                        an SQL join.
+                    </para>
+                </listitem>
+            </varlistentry>
+            <varlistentry>
+                <term>BATCH</term>
+                <listitem>
+                    <para>
+                        Performs a separate SQL select to load a number of related data items using an
+                        IN-restriction as part of the SQL WHERE-clause based on a batch size. Again, this can either
+                        be EAGER (the second select is issued immediately) or LAZY (the second select is delayed until
+                        the data is needed).
+                    </para>
+                </listitem>
+            </varlistentry>
+            <varlistentry>
+                <term>SUBSELECT</term>
+                <listitem>
+                    <para>
+                        Performs a separate SQL select to load associated data based on the SQL restriction used to
+                        load the owner. Again, this can either be EAGER (the second select is issued immediately)
+                        or LAZY (the second select is delayed until the data is needed).
+                    </para>
+                </listitem>
+            </varlistentry>
+        </variablelist>
+    </section>
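
A minimal sketch of the SELECT (N+1) strategy described above, using the chapter's Employee/Department model (the `getDepartment` accessor and the use of `session.get` are illustrative assumptions, not part of the chapter's extras):

    Long employeeId = ...;
    Employee e = (Employee) session.get( Employee.class, employeeId );
    // first select: the Employee row only, since Employee.department is mapped LAZY
    Department dept = e.getDepartment();
    // the second select is issued here, when the lazy proxy is first used
    dept.toString();
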
+
+    <section>
+        <title>Applying fetch strategies</title>
+
+        <para>
+            Let's consider these topics as they relate to a simple domain model and a few use cases.
+        </para>
+
+        <example>
+            <title>Sample domain model</title>
+            <programlisting role="JAVA"><xi:include href="extras/Employee.java" parse="text"/></programlisting>
+            <programlisting role="JAVA"><xi:include href="extras/Department.java" parse="text"/></programlisting>
+            <programlisting role="JAVA"><xi:include href="extras/Project.java" parse="text"/></programlisting>
+        </example>
+
+        <important>
+            <para>
+                The Hibernate recommendation is to statically mark all associations lazy and to use dynamic fetching
+                strategies for eagerness. This is unfortunately at odds with the JPA specification, which defines that
+                all one-to-one and many-to-one associations should be eagerly fetched by default. Hibernate, as a JPA
+                provider, honors that default.
+            </para>
+        </important>
+
+        <section>
+            <title>No fetching</title>
+            <subtitle>The login use-case</subtitle>
+            <para>
+                For the first use case, consider the application's login process for an Employee. Let's assume that
+                login only requires access to the Employee information, not Project or Department information.
+            </para>
+
+            <example>
+                <title>No fetching example</title>
+                <programlisting role="JAVA"><xi:include href="extras/Login.java" parse="text"/></programlisting>
+            </example>
+
+            <para>
+                In this example, the application gets the Employee data. However, because all associations from
+                Employee are declared as LAZY (JPA defines the default for collections as LAZY) no other data is
+                fetched.
+            </para>
+
+            <para>
+                If the login process does not need access to the Employee information specifically, another
+                fetching optimization here would be to limit the width of the query results.
+            </para>
+
+            <example>
+                <title>No fetching (scalar) example</title>
+                <programlisting role="JAVA"><xi:include href="extras/LoginScalar.java" parse="text"/></programlisting>
+            </example>
+        </section>
+
+        <section>
+            <title>Dynamic fetching via queries</title>
+            <subtitle>The projects for an employee use-case</subtitle>
+
+            <para>
+                For the second use case, consider a screen displaying the Projects for an Employee. Certainly access
+                to the Employee is needed, as is the collection of Projects for that Employee. Information
+                about Departments, other Employees or other Projects is not needed.
+            </para>
+
+            <example>
+                <title>Dynamic query fetching example</title>
+                <programlisting role="JAVA"><xi:include href="extras/ProjectsForAnEmployeeHql.java" parse="text"/></programlisting>
+                <programlisting role="JAVA"><xi:include href="extras/ProjectsForAnEmployeeCriteria.java" parse="text"/></programlisting>
+            </example>
+
+            <para>
+                In this example we have an Employee and their Projects loaded in a single query shown both as an HQL
+                query and a JPA Criteria query. In both cases, this resolves to exactly one database query to get
+                all that information.
+            </para>
+        </section>
+
+        <section>
+            <title>Dynamic fetching via profiles</title>
+            <subtitle>The projects for an employee use-case using natural-id</subtitle>
+
+            <para>
+                Suppose we wanted to leverage loading by natural-id to obtain the Employee information in the
+                "projects for an employee" use-case. Loading by natural-id uses the statically defined fetching
+                strategies, but does not expose a means to define load-specific fetching. So we would leverage a
+                fetch profile.
+            </para>
+
+            <example>
+                <title>Fetch profile example</title>
+                <programlisting role="JAVA"><xi:include href="extras/FetchOverrides.java" parse="text"/></programlisting>
+                <programlisting role="JAVA"><xi:include href="extras/ProjectsForAnEmployeeFetchProfile.java" parse="text"/></programlisting>
+            </example>
+
+            <para>
+                Here the Employee is obtained by natural-id lookup and the Employee's Project data is fetched eagerly.
+                If the Employee data is resolved from cache, the Project data is resolved on its own. However,
+                if the Employee data is not resolved in cache, the Employee and Project data is resolved in one
+                SQL query via join as we saw above.
+            </para>
+        </section>
+    </section>
+
+    <!-- todo : document special fetching considerations such as batch fetching, subselect fetching and extra laziness -->
+
+</chapter>
@@ -0,0 +1,10 @@
+@Entity
+public class Department {
+    @Id
+    private Long id;
+
+    @OneToMany(mappedBy="department")
+    private List<Employee> employees;
+
+    ...
+}
@@ -0,0 +1,24 @@
+@Entity
+public class Employee {
+    @Id
+    private Long id;
+
+    @NaturalId
+    private String userid;
+
+    @Column( name="pswd" )
+    @ColumnTransformer( read="decrypt(pswd)", write="encrypt(?)" )
+    private String password;
+
+    private int accessLevel;
+
+    @ManyToOne( fetch=LAZY )
+    @JoinColumn
+    private Department department;
+
+    @ManyToMany(mappedBy="employees")
+    private Set<Project> projects;
+
+    ...
+}
@@ -0,0 +1,10 @@
+@FetchProfile(
+    name="employee.projects",
+    fetchOverrides={
+        @FetchOverride(
+            entity=Employee.class,
+            association="projects",
+            mode=JOIN
+        )
+    }
+)
@@ -0,0 +1,4 @@
+String loginHql = "select e from Employee e where e.userid = :userid and e.password = :password";
+Employee employee = (Employee) session.createQuery( loginHql )
+        ...
+        .uniqueResult();
@@ -0,0 +1,4 @@
+String loginHql = "select e.accessLevel from Employee e where e.userid = :userid and e.password = :password";
+int accessLevel = (Integer) session.createQuery( loginHql )
+        ...
+        .uniqueResult();
@@ -0,0 +1,10 @@
+@Entity
+public class Project {
+    @Id
+    private Long id;
+
+    @ManyToMany
+    private Set<Employee> employees;
+
+    ...
+}
@@ -0,0 +1,10 @@
+String userid = ...;
+CriteriaBuilder cb = entityManager.getCriteriaBuilder();
+CriteriaQuery<Employee> criteria = cb.createQuery( Employee.class );
+Root<Employee> root = criteria.from( Employee.class );
+root.fetch( Employee_.projects );
+criteria.select( root );
+criteria.where(
+        cb.equal( root.get( Employee_.userid ), cb.literal( userid ) )
+);
+Employee e = entityManager.createQuery( criteria ).getSingleResult();
@@ -0,0 +1,4 @@
+String userid = ...;
+session.enableFetchProfile( "employee.projects" );
+Employee e = (Employee) session.bySimpleNaturalId( Employee.class )
+        .load( userid );
@@ -0,0 +1,5 @@
+String userid = ...;
+String hql = "select e from Employee e join fetch e.projects where e.userid = :userid";
+Employee e = (Employee) session.createQuery( hql )
+        .setParameter( "userid", userid )
+        .uniqueResult();
@@ -72,7 +72,7 @@ public class BulkOperationCleanupAction implements Executable, Serializable {
 	 * @param session The session to which this request is tied.
 	 * @param affectedQueryables The affected entity persisters.
 	 */
-	public BulkOperationCleanupAction(SessionImplementor session, Queryable[] affectedQueryables) {
+	public BulkOperationCleanupAction(SessionImplementor session, Queryable... affectedQueryables) {
 		SessionFactoryImplementor factory = session.getFactory();
 		LinkedHashSet<String> spacesList = new LinkedHashSet<String>();
 		for ( Queryable persister : affectedQueryables ) {
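
A minimal sketch of what the varargs change above buys at call sites (the persister variables are hypothetical):

    // before: callers had to build an explicit array
    new BulkOperationCleanupAction( session, new Queryable[] { persister1, persister2 } );
    // after: a bare argument list works, and existing array-passing callers still compile
    new BulkOperationCleanupAction( session, persister1, persister2 );
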
@@ -73,7 +73,8 @@ public final class CollectionUpdateAction extends CollectionAction {
 			if (affectedByFilters) {
 				throw new HibernateException(
 						"cannot recreate collection while filter is enabled: " +
-								MessageHelper.collectionInfoString( persister, id, persister.getFactory() )
+								MessageHelper.collectionInfoString( persister, collection, id, session )
 				);
 			}
 			if ( !emptySnapshot ) persister.remove( id, session );
@@ -36,9 +36,13 @@ import static java.lang.annotation.RetentionPolicy.RUNTIME;
  * Prefer the standard {@link javax.persistence.Access} annotation
  *
  * @author Emmanuel Bernard
+ *
+ * @deprecated Use {@link AttributeAccessor} instead; renamed to avoid confusion with the JPA
+ * {@link javax.persistence.AccessType} enum.
  */
 @Target({ TYPE, METHOD, FIELD })
 @Retention(RUNTIME)
+@Deprecated
 public @interface AccessType {
 	String value();
 }
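
A minimal migration sketch for the deprecation above (field access chosen arbitrarily):

    // deprecated Hibernate-specific spelling
    @AccessType( "field" )
    // replacement introduced alongside this commit
    @AttributeAccessor( "field" )
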
@@ -31,11 +31,32 @@ import static java.lang.annotation.ElementType.METHOD;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Define a ToOne association pointing to several entity types.
- * Matching the according entity type is doe through a metadata discriminator column
- * This kind of mapping should be only marginal.
+ * Defines a ToOne-style association pointing to one of several entity types depending on a local discriminator,
+ * as opposed to discriminated inheritance where the discriminator is kept as part of the entity hierarchy.
+ *
+ * For example, if you consider an Order entity containing Payment information where Payment might be of type
+ * CashPayment or CreditCardPayment the @Any approach would be to keep that discriminator and matching value on the
+ * Order itself. Thought of another way, the "foreign key" really is made up of the value and discriminator
+ * (there is no physical foreign key here as databases do not support this):
+ * <blockquote><pre>
+ *     @Entity
+ *     class Order {
+ *         ...
+ *         @Any( metaColumn = @Column( name="payment_type" ) )
+ *         @AnyMetaDef(
+ *             idType = "long",
+ *             metaValues = {
+ *                 @MetaValue( value="C", targetEntity=CashPayment.class ),
+ *                 @MetaValue( value="CC", targetEntity=CreditCardPayment.class )
+ *             }
+ *         )
+ *         public Payment getPayment() { ... }
+ *     }
+ * </pre></blockquote>
  *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 @java.lang.annotation.Target({METHOD, FIELD})
 @Retention(RUNTIME)

@@ -48,10 +69,10 @@ public @interface Any {
 	String metaDef() default "";
 
 	/**
-	 * Metadata discriminator column description, This column will hold the meta value corresponding to the
-	 * targeted entity.
+	 * Identifies the discriminator column. This column will hold the value that identifies the targeted entity.
 	 */
 	Column metaColumn();
 
 	/**
 	 * Defines whether the value of the field or property should be lazily loaded or must be
 	 * eagerly fetched. The EAGER strategy is a requirement on the persistence provider runtime
@@ -31,9 +31,12 @@ import static java.lang.annotation.ElementType.TYPE;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Defines @Any and @manyToAny metadata
+ * Used to provide metadata about an {@link Any} or {@link ManyToAny} mapping.
+ *
+ * @see AnyMetaDefs
  *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 @java.lang.annotation.Target( { PACKAGE, TYPE, METHOD, FIELD } )
 @Retention( RUNTIME )

@@ -45,18 +48,18 @@ public @interface AnyMetaDef {
 	String name() default "";
 
 	/**
-	 * meta discriminator Hibernate type
+	 * Names the discriminator Hibernate Type for this Any/ManyToAny mapping. The default is to use
+	 * {@link org.hibernate.type.StringType}
 	 */
 	String metaType();
 
 	/**
-	 * Hibernate type of the id column
-	 * @return Hibernate type of the id column
+	 * Names the identifier Hibernate Type for the entity associated through this Any/ManyToAny mapping.
 	 */
 	String idType();
 
 	/**
-	 * Matching discriminator values with their respective entity
+	 * Maps discriminator values to the corresponding entity types.
 	 */
 	MetaValue[] metaValues();
 }
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.annotations;
+
 import java.lang.annotation.Retention;
 
 import static java.lang.annotation.ElementType.PACKAGE;

@@ -29,10 +30,10 @@ import static java.lang.annotation.ElementType.TYPE;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Defines @Any and @ManyToAny set of metadata.
- * Can be defined at the entity level or the package level
+ * Used to group together {@link AnyMetaDef} annotations. Can be defined at the entity or package level
  *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 @java.lang.annotation.Target( { PACKAGE, TYPE } )
 @Retention( RUNTIME )
@@ -0,0 +1,61 @@
+/*
+ * Hibernate, Relational Persistence for Idiomatic Java
+ *
+ * Copyright (c) 2012, Red Hat Inc. or third-party contributors as
+ * indicated by the @author tags or express copyright attribution
+ * statements applied by the authors. All third-party contributions are
+ * distributed under license by Red Hat Inc.
+ *
+ * This copyrighted material is made available to anyone wishing to use, modify,
+ * copy, or redistribute it subject to the terms and conditions of the GNU
+ * Lesser General Public License, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this distribution; if not, write to:
+ * Free Software Foundation, Inc.
+ * 51 Franklin Street, Fifth Floor
+ * Boston, MA 02110-1301 USA
+ */
+package org.hibernate.annotations;
+
+import java.lang.annotation.Retention;
+
+import static java.lang.annotation.ElementType.FIELD;
+import static java.lang.annotation.ElementType.METHOD;
+import static java.lang.annotation.ElementType.TYPE;
+import static java.lang.annotation.RetentionPolicy.RUNTIME;
+
+/**
+ * Names a {@link org.hibernate.property.PropertyAccessor} strategy to use.
+ *
+ * Can be specified at either:<ul>
+ *     <li>
+ *         <strong>TYPE</strong> level, which will act as naming the default accessor strategy for
+ *         all attributes on the class which do not explicitly name an accessor strategy
+ *     </li>
+ *     <li>
+ *         <strong>METHOD/FIELD</strong> level, which will be in effect for just that attribute.
+ *     </li>
+ * </ul>
+ *
+ * Should only be used to name a custom {@link org.hibernate.property.PropertyAccessor}. For {@code property/field}
+ * access, the JPA {@link javax.persistence.Access} annotation should be preferred, using the appropriate
+ * {@link javax.persistence.AccessType}. However, if this annotation is used with either {@code value="property"}
+ * or {@code value="field"}, it will act just as the corresponding usage of {@link javax.persistence.Access}.
+ *
+ * @author Steve Ebersole
+ * @author Emmanuel Bernard
+ */
+@java.lang.annotation.Target({ TYPE, METHOD, FIELD })
+@Retention(RUNTIME)
+public @interface AttributeAccessor {
+	/**
+	 * Names the {@link org.hibernate.property.PropertyAccessor} strategy
+	 */
+	String value();
+}
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.annotations;
+
 import java.lang.annotation.Retention;
 import java.lang.annotation.Target;

@@ -31,13 +32,31 @@ import static java.lang.annotation.ElementType.TYPE;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Batch size for SQL loading
+ * Defines size for batch loading of collections or lazy entities. For example...
+ * <blockquote><pre>
+ *     @Entity
+ *     @BatchSize(size=100)
+ *     class Product {
+ *         ...
+ *     }
+ * </pre></blockquote>
+ * will initialize up to 100 lazy Product entity proxies at a time.
+ *
+ * <blockquote><pre>
+ *     @OneToMany
+ *     @BatchSize(size = 5)
+ *     Set<Product> getProducts() { ... };
+ * </pre></blockquote>
+ * will initialize up to 5 lazy collections of products at a time.
  *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 @Target({TYPE, METHOD, FIELD})
 @Retention(RUNTIME)
 public @interface BatchSize {
-	/** Strictly positive integer */
+	/**
+	 * Strictly positive integer
+	 */
 	int size();
 }
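
A minimal sketch of the batching effect described in the new javadoc (the Department/getEmployees names are hypothetical):

    // with @BatchSize(size = 5) on the collection, touching one lazy collection
    // initializes up to 5 uninitialized collections of the same role via a single
    // "... where department_id in (?, ?, ?, ?, ?)" style select
    List<Department> departments = session.createQuery( "from Department" ).list();
    departments.get( 0 ).getEmployees().size();
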
@@ -30,7 +30,12 @@ import static java.lang.annotation.ElementType.METHOD;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Apply a cascade strategy on an association
+ * Apply a cascade strategy on an association. Used to apply Hibernate specific cascades. For JPA cascading, prefer
+ * using {@link javax.persistence.CascadeType} on {@link javax.persistence.OneToOne},
+ * {@link javax.persistence.OneToMany}, etc. Hibernate will merge together both sets of cascades.
+ *
+ * @author Emmanuel Bernard
+ * @author Steve Ebersole
 */
 @Target({METHOD, FIELD})
 @Retention(RUNTIME)
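
A minimal sketch of mixing the two cascade styles described above (the entity and field names are hypothetical):

    @OneToMany( mappedBy = "customer", cascade = javax.persistence.CascadeType.PERSIST )
    @Cascade( org.hibernate.annotations.CascadeType.SAVE_UPDATE )
    private Set<Order> orders;   // Hibernate merges the JPA and Hibernate cascades
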
@@ -25,7 +25,7 @@ package org.hibernate.annotations;
 
 /**
- * Cascade types (can override default EJB3 cascades
+ * Cascade types (can override default JPA cascades)
 */
 public enum CascadeType {
 	ALL,
@@ -32,8 +32,7 @@ import static java.lang.annotation.ElementType.TYPE;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Arbitrary SQL check constraints which can be defined at the class,
- * property or collection level
+ * Arbitrary SQL CHECK constraints which can be defined at the class, property or collection level
  *
  * @author Emmanuel Bernard
  */
@@ -30,7 +30,8 @@ import static java.lang.annotation.ElementType.METHOD;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Names a custom collection type for a persistent collection.
+ * Names a custom collection type for a persistent collection. The collection can also name a @Type, which defines
+ * the Hibernate Type of the collection elements.
  *
  * @see org.hibernate.type.CollectionType
  * @see org.hibernate.usertype.UserCollectionType
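
A minimal usage sketch for the annotation described above (the implementation class name is hypothetical):

    @OneToMany( mappedBy = "owner" )
    @CollectionType( type = "com.example.MyListType" )   // a custom UserCollectionType implementation
    private List<Item> items;
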
@@ -35,6 +35,8 @@ import static java.lang.annotation.RetentionPolicy.RUNTIME;
  *
  * For example: <code>read="decrypt(credit_card_num)" write="encrypt(?)"</code>
  *
+ * @see ColumnTransformers
+ *
  * @author Emmanuel Bernard
  */
 @java.lang.annotation.Target({FIELD,METHOD})
@@ -29,11 +29,15 @@ import static java.lang.annotation.ElementType.TYPE;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Discriminator formula
- * To be placed at the root entity.
+ * Used to apply a Hibernate formula (derived value) as the inheritance discriminator "column". Used in place of
+ * the JPA {@link javax.persistence.DiscriminatorColumn} when a formula is wanted.
+ *
+ * To be placed on the root entity.
+ *
+ * @see Formula
  *
  * @author Emmanuel Bernard
- * @see Formula
+ * @author Steve Ebersole
  */
 @Target({TYPE})
 @Retention(RUNTIME)
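
A minimal sketch of a formula-based discriminator as described above (the SQL fragment and entity are hypothetical):

    @Entity
    @Inheritance( strategy = InheritanceType.SINGLE_TABLE )
    @DiscriminatorFormula( "case when debit_account is not null then 'DEBIT' else 'CREDIT' end" )
    public abstract class Account { ... }
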
@@ -30,10 +30,34 @@ import static java.lang.annotation.ElementType.METHOD;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Formula. To be used as a replacement for @Column in most places
- * The formula has to be a valid SQL fragment
+ * Defines a formula (derived value) which is a SQL fragment that acts as a @Column alternative in most cases.
+ * Represents read-only state.
+ *
+ * In certain cases @ColumnTransformer might be a better option, especially as it leaves open the option of still
+ * being writable.
+ *
+ * <blockquote><pre>
+ *     // perform calculations
+ *     @Formula( "sub_total + (sub_total * tax)" )
+ *     long getTotalCost() { ... }
+ * </pre></blockquote>
+ *
+ * <blockquote><pre>
+ *     // call functions
+ *     @Formula( "upper( substring( middle_name, 1 ) )" )
+ *     Character getMiddleInitial() { ... }
+ * </pre></blockquote>
+ *
+ * <blockquote><pre>
+ *     // this might be better handled through @ColumnTransformer
+ *     @Formula( "decrypt(credit_card_num)" )
+ *     String getCreditCardNumber() { ... }
+ * </pre></blockquote>
+ *
+ * @see ColumnTransformer
  *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 @Target({METHOD, FIELD})
 @Retention(RUNTIME)
@@ -22,20 +22,23 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.annotations;
-import java.lang.annotation.Retention;
 import javax.persistence.Column;
 import javax.persistence.FetchType;
+import java.lang.annotation.Retention;
 
 import static java.lang.annotation.ElementType.FIELD;
 import static java.lang.annotation.ElementType.METHOD;
 import static java.lang.annotation.RetentionPolicy.RUNTIME;
 
 /**
- * Defined a ToMany association pointing to different entity types.
- * Matching the according entity type is doe through a metadata discriminator column
- * This kind of mapping should be only marginal.
+ * This is the collection-valued form of @Any definitions. Defines a ToMany-style association pointing
+ * to one of several entity types depending on a local discriminator. See {@link Any} for further information.
+ *
+ * @see Any
  *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 @java.lang.annotation.Target({METHOD, FIELD})
 @Retention(RUNTIME)
@@ -23,10 +23,13 @@
  */
 package org.hibernate.annotations;
 
 
 /**
- * Represent a discriminator value associated to a given entity type
+ * Maps a given discriminator value to the corresponding entity type. See {@link Any} for more information.
+ *
+ * @see Any
+ *
  * @author Emmanuel Bernard
+ * @author Steve Ebersole
  */
 public @interface MetaValue {
 	/**
@@ -92,6 +92,10 @@ public class StandardServiceRegistryBuilder {
 		return initiators;
 	}
 
+	public BootstrapServiceRegistry getBootstrapServiceRegistry() {
+		return bootstrapServiceRegistry;
+	}
+
 	/**
 	 * Read settings from a {@link Properties} file. Differs from {@link #configure()} and {@link #configure(String)}
 	 * in that here we read a {@link Properties} file while for {@link #configure} we read the XML variant.

@@ -224,6 +228,15 @@ public class StandardServiceRegistryBuilder {
 		}
 	}
 
+	/**
+	 * Temporarily exposed since Configuration is still around and much code still uses Configuration. This allows
+	 * code to configure the builder and access that to configure the Configuration object (used from HEM atm).
+	 */
+	@Deprecated
+	public Map getSettings() {
+		return settings;
+	}
+
 	/**
 	 * Destroy a service registry. Applications should only destroy registries they have explicitly created.
 	 *
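
A minimal sketch of the temporary accessor added above (the `applySetting` call is part of the existing builder API; the bridge usage shown is inferred, not spelled out in this commit):

    StandardServiceRegistryBuilder builder = new StandardServiceRegistryBuilder();
    builder.applySetting( "hibernate.show_sql", "true" );
    Map settings = builder.getSettings();   // deprecated bridge for Configuration-based code
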
@@ -94,6 +94,9 @@ import org.hibernate.engine.transaction.jta.platform.internal.WebSphereJtaPlatform;
 import org.hibernate.engine.transaction.jta.platform.internal.WeblogicJtaPlatform;
 import org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform;
 import org.hibernate.engine.transaction.spi.TransactionFactory;
+import org.hibernate.hql.spi.MultiTableBulkIdStrategy;
+import org.hibernate.hql.spi.PersistentTableBulkIdStrategy;
+import org.hibernate.hql.spi.TemporaryTableBulkIdStrategy;
 
 /**
  * @author Steve Ebersole

@@ -131,6 +134,7 @@ public class StrategySelectorBuilder {
 		addDialects( strategySelector );
 		addJtaPlatforms( strategySelector );
 		addTransactionFactories( strategySelector );
+		addMultiTableBulkIdStrategies( strategySelector );
 
 		// apply auto-discovered registrations
 		for ( AvailabilityAnnouncer announcer : classLoaderService.loadJavaServices( AvailabilityAnnouncer.class ) ) {

@@ -327,4 +331,17 @@ public class StrategySelectorBuilder {
 		strategySelector.registerStrategyImplementor( TransactionFactory.class, CMTTransactionFactory.SHORT_NAME, CMTTransactionFactory.class );
 		strategySelector.registerStrategyImplementor( TransactionFactory.class, "org.hibernate.transaction.CMTTransactionFactory", CMTTransactionFactory.class );
 	}
+
+	private void addMultiTableBulkIdStrategies(StrategySelectorImpl strategySelector) {
+		strategySelector.registerStrategyImplementor(
+				MultiTableBulkIdStrategy.class,
+				PersistentTableBulkIdStrategy.SHORT_NAME,
+				PersistentTableBulkIdStrategy.class
+		);
+		strategySelector.registerStrategyImplementor(
+				MultiTableBulkIdStrategy.class,
+				TemporaryTableBulkIdStrategy.SHORT_NAME,
+				TemporaryTableBulkIdStrategy.class
+		);
+	}
 }
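
A hedged sketch of how these registrations are consumed; the setting key and the short-name value are assumptions based on the SHORT_NAME constants referenced above, not spelled out in this commit:

    // select a MultiTableBulkIdStrategy by its registered short name
    settings.put( "hibernate.hql.bulk_id_strategy", "persistent" );
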
@@ -25,13 +25,9 @@ package org.hibernate.bytecode.buildtime.internal;
 
 import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
-import java.io.File;
-import java.io.FileInputStream;
 import java.io.IOException;
 import java.util.Set;
 
-import javassist.ClassClassPath;
-import javassist.ClassPool;
 import javassist.bytecode.ClassFile;
 
 import org.hibernate.bytecode.buildtime.spi.AbstractInstrumenter;

@@ -48,7 +44,6 @@ import org.hibernate.bytecode.spi.ClassTransformer;
  *
  * @author Steve Ebersole
  * @author Muga Nishizawa
- * @author Dustin Schultz
  */
 public class JavassistInstrumenter extends AbstractInstrumenter {

@@ -76,20 +71,6 @@ public class JavassistInstrumenter extends AbstractInstrumenter {
 		}
 	}
 
-	@Override
-	public void execute(Set<File> files) {
-		ClassPool cp = ClassPool.getDefault();
-		cp.insertClassPath(new ClassClassPath(this.getClass()));
-		try {
-			for (File file : files) {
-				cp.makeClass(new FileInputStream(file));
-			}
-		} catch (IOException e) {
-			throw new RuntimeException(e.getMessage(), e);
-		}
-		super.execute(files);
-	}
-
 	private static class CustomClassDescriptor implements ClassDescriptor {
 		private final byte[] bytes;
 		private final ClassFile classFile;
@@ -32,7 +32,6 @@ import java.util.Iterator;
 import java.util.List;
 
 import javassist.CannotCompileException;
-import javassist.ClassPool;
 import javassist.bytecode.AccessFlag;
 import javassist.bytecode.BadBytecode;
 import javassist.bytecode.Bytecode;

@@ -44,8 +43,6 @@ import javassist.bytecode.Descriptor;
 import javassist.bytecode.FieldInfo;
 import javassist.bytecode.MethodInfo;
 import javassist.bytecode.Opcode;
-import javassist.bytecode.StackMapTable;
-import javassist.bytecode.stackmap.MapMaker;
 
 /**
  * The thing that handles actual class enhancement in regards to

@@ -53,7 +50,6 @@ import javassist.bytecode.stackmap.MapMaker;
  *
  * @author Muga Nishizawa
  * @author Steve Ebersole
- * @author Dustin Schultz
  */
 public class FieldTransformer {

@@ -134,7 +130,7 @@ public class FieldTransformer {
 	}
 
 	private void addGetFieldHandlerMethod(ClassFile classfile)
-			throws CannotCompileException, BadBytecode {
+			throws CannotCompileException {
 		ConstPool cp = classfile.getConstPool();
 		int this_class_index = cp.getThisClassInfo();
 		MethodInfo minfo = new MethodInfo(cp, GETFIELDHANDLER_METHOD_NAME,

@@ -152,13 +148,11 @@ public class FieldTransformer {
 		code.addOpcode(Opcode.ARETURN);
 		minfo.setCodeAttribute(code.toCodeAttribute());
 		minfo.setAccessFlags(AccessFlag.PUBLIC);
-		StackMapTable smt = MapMaker.make(ClassPool.getDefault(), minfo);
-		minfo.getCodeAttribute().setAttribute(smt);
 		classfile.addMethod(minfo);
 	}
 
 	private void addSetFieldHandlerMethod(ClassFile classfile)
-			throws CannotCompileException, BadBytecode {
+			throws CannotCompileException {
 		ConstPool cp = classfile.getConstPool();
 		int this_class_index = cp.getThisClassInfo();
 		MethodInfo minfo = new MethodInfo(cp, SETFIELDHANDLER_METHOD_NAME,

@@ -178,8 +172,6 @@ public class FieldTransformer {
 		code.addOpcode(Opcode.RETURN);
 		minfo.setCodeAttribute(code.toCodeAttribute());
 		minfo.setAccessFlags(AccessFlag.PUBLIC);
-		StackMapTable smt = MapMaker.make(ClassPool.getDefault(), minfo);
-		minfo.getCodeAttribute().setAttribute(smt);
 		classfile.addMethod(minfo);
 	}
 
@@ -193,7 +185,7 @@ public class FieldTransformer {
 	}
 
 	private void addReadWriteMethods(ClassFile classfile)
-			throws CannotCompileException, BadBytecode {
+			throws CannotCompileException {
 		List fields = classfile.getFields();
 		for (Iterator field_iter = fields.iterator(); field_iter.hasNext();) {
 			FieldInfo finfo = (FieldInfo) field_iter.next();

@@ -213,7 +205,7 @@ public class FieldTransformer {
 	}
 
 	private void addReadMethod(ClassFile classfile, FieldInfo finfo)
-			throws CannotCompileException, BadBytecode {
+			throws CannotCompileException {
 		ConstPool cp = classfile.getConstPool();
 		int this_class_index = cp.getThisClassInfo();
 		String desc = "()" + finfo.getDescriptor();

@@ -262,13 +254,11 @@ public class FieldTransformer {
 
 		minfo.setCodeAttribute(code.toCodeAttribute());
 		minfo.setAccessFlags(AccessFlag.PUBLIC);
-		StackMapTable smt = MapMaker.make(ClassPool.getDefault(), minfo);
-		minfo.getCodeAttribute().setAttribute(smt);
 		classfile.addMethod(minfo);
 	}
 
 	private void addWriteMethod(ClassFile classfile, FieldInfo finfo)
-			throws CannotCompileException, BadBytecode {
+			throws CannotCompileException {
 		ConstPool cp = classfile.getConstPool();
 		int this_class_index = cp.getThisClassInfo();
 		String desc = "(" + finfo.getDescriptor() + ")V";

@@ -330,13 +320,11 @@ public class FieldTransformer {
 
 		minfo.setCodeAttribute(code.toCodeAttribute());
 		minfo.setAccessFlags(AccessFlag.PUBLIC);
-		StackMapTable smt = MapMaker.make(ClassPool.getDefault(), minfo);
-		minfo.getCodeAttribute().setAttribute(smt);
 		classfile.addMethod(minfo);
 	}
 
 	private void transformInvokevirtualsIntoPutAndGetfields(ClassFile classfile)
-			throws CannotCompileException, BadBytecode {
+			throws CannotCompileException {
 		List methods = classfile.getMethods();
 		for (Iterator method_iter = methods.iterator(); method_iter.hasNext();) {
 			MethodInfo minfo = (MethodInfo) method_iter.next();

@@ -353,13 +341,15 @@ public class FieldTransformer {
 			}
 			CodeIterator iter = codeAttr.iterator();
 			while (iter.hasNext()) {
+				try {
 					int pos = iter.next();
 					pos = transformInvokevirtualsIntoGetfields(classfile, iter, pos);
 					pos = transformInvokevirtualsIntoPutfields(classfile, iter, pos);
+				} catch ( BadBytecode e ){
+					throw new CannotCompileException( e );
+				}
 			}
-
-			StackMapTable smt = MapMaker.make(ClassPool.getDefault(), minfo);
-			minfo.getCodeAttribute().setAttribute(smt);
 		}
 	}
@@ -64,6 +64,8 @@ public class StandardQueryCache implements QueryCache {
 			StandardQueryCache.class.getName()
 	);
 
+	private static final boolean tracing = LOG.isTraceEnabled();
+
 	private QueryResultsRegion cacheRegion;
 	private UpdateTimestampsCache updateTimestampsCache;

@@ -246,7 +248,7 @@ public class StandardQueryCache implements QueryCache {
 	}
 
 	private static void logCachedResultRowDetails(Type[] returnTypes, Object[] tuple) {
-		if ( !LOG.isTraceEnabled() ) {
+		if ( !tracing ) {
 			return;
 		}
 		if ( tuple == null ) {
@@ -26,7 +26,6 @@ package org.hibernate.cache.spi;
 import java.io.Serializable;
 import java.util.Properties;
 import java.util.Set;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.jboss.logging.Logger;
 

@@ -51,74 +50,72 @@ public class UpdateTimestampsCache {
     public static final String REGION_NAME = UpdateTimestampsCache.class.getName();
     private static final CoreMessageLogger LOG = Logger.getMessageLogger( CoreMessageLogger.class, UpdateTimestampsCache.class.getName() );
 
-    private ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
-    private final TimestampsRegion region;
     private final SessionFactoryImplementor factory;
+    private final TimestampsRegion region;
 
     public UpdateTimestampsCache(Settings settings, Properties props, final SessionFactoryImplementor factory) throws HibernateException {
         this.factory = factory;
-        String prefix = settings.getCacheRegionPrefix();
+        final String prefix = settings.getCacheRegionPrefix();
-        String regionName = prefix == null ? REGION_NAME : prefix + '.' + REGION_NAME;
+        final String regionName = prefix == null ? REGION_NAME : prefix + '.' + REGION_NAME;
 
         LOG.startingUpdateTimestampsCache( regionName );
         this.region = factory.getServiceRegistry().getService( RegionFactory.class ).buildTimestampsRegion( regionName, props );
     }
 
     @SuppressWarnings({"UnusedDeclaration"})
-    public UpdateTimestampsCache(Settings settings, Properties props)
-            throws HibernateException {
+    public UpdateTimestampsCache(Settings settings, Properties props) throws HibernateException {
         this( settings, props, null );
    }
 
     @SuppressWarnings({"UnnecessaryBoxing"})
     public void preinvalidate(Serializable[] spaces) throws CacheException {
-        readWriteLock.writeLock().lock();
+        final boolean debug = LOG.isDebugEnabled();
+        final boolean stats = factory != null && factory.getStatistics().isStatisticsEnabled();
+        final Long ts = region.nextTimestamp() + region.getTimeout();
 
-        try {
-            Long ts = region.nextTimestamp() + region.getTimeout();
         for ( Serializable space : spaces ) {
+            if ( debug ) {
                 LOG.debugf( "Pre-invalidating space [%s], timestamp: %s", space, ts );
+            }
             //put() has nowait semantics, is this really appropriate?
             //note that it needs to be async replication, never local or sync
             region.put( space, ts );
-            if ( factory != null && factory.getStatistics().isStatisticsEnabled() ) {
+            if ( stats ) {
                 factory.getStatisticsImplementor().updateTimestampsCachePut();
             }
         }
-        }
-        finally {
-            readWriteLock.writeLock().unlock();
-        }
     }
 
     @SuppressWarnings({"UnnecessaryBoxing"})
     public void invalidate(Serializable[] spaces) throws CacheException {
-        readWriteLock.writeLock().lock();
+        final boolean debug = LOG.isDebugEnabled();
+        final boolean stats = factory != null && factory.getStatistics().isStatisticsEnabled();
+        final Long ts = region.nextTimestamp();
 
-        try {
-            Long ts = region.nextTimestamp();
         for (Serializable space : spaces) {
+            if ( debug ) {
                 LOG.debugf( "Invalidating space [%s], timestamp: %s", space, ts );
+            }
             //put() has nowait semantics, is this really appropriate?
             //note that it needs to be async replication, never local or sync
             region.put( space, ts );
-            if ( factory != null && factory.getStatistics().isStatisticsEnabled() ) {
+            if ( stats ) {
                 factory.getStatisticsImplementor().updateTimestampsCachePut();
             }
         }
-        }
-        finally {
-            readWriteLock.writeLock().unlock();
-        }
     }
 
     @SuppressWarnings({"unchecked", "UnnecessaryUnboxing"})
     public boolean isUpToDate(Set spaces, Long timestamp) throws HibernateException {
-        readWriteLock.readLock().lock();
+        final boolean debug = LOG.isDebugEnabled();
+        final boolean stats = factory != null && factory.getStatistics().isStatisticsEnabled();
 
-        try {
         for ( Serializable space : (Set<Serializable>) spaces ) {
             Long lastUpdate = (Long) region.get( space );
             if ( lastUpdate == null ) {
-                if ( factory != null && factory.getStatistics().isStatisticsEnabled() ) {
+                if ( stats ) {
                     factory.getStatisticsImplementor().updateTimestampsCacheMiss();
                 }
                 //the last update timestamp was lost from the cache
@@ -127,25 +124,23 @@ public class UpdateTimestampsCache {
                 //result = false; // safer
             }
             else {
-                if ( LOG.isDebugEnabled() ) {
+                if ( debug ) {
                     LOG.debugf(
                             "[%s] last update timestamp: %s",
                             space,
                             lastUpdate + ", result set timestamp: " + timestamp
                     );
                 }
-                if ( factory != null && factory.getStatistics().isStatisticsEnabled() ) {
+                if ( stats ) {
                     factory.getStatisticsImplementor().updateTimestampsCacheHit();
                 }
-                if ( lastUpdate >= timestamp ) return false;
+                if ( lastUpdate >= timestamp ) {
+                    return false;
+                }
             }
         }
         return true;
     }
-        finally {
-            readWriteLock.readLock().unlock();
-        }
-    }
 
     public void clear() throws CacheException {
         region.evictAll();
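Reviewer note: beyond deleting the ReentrantReadWriteLock, these hunks hoist the LOG.isDebugEnabled() and statistics checks into final locals, so each guard is evaluated once per call rather than once per space, and concurrency is delegated to the underlying TimestampsRegion. A minimal standalone sketch of the hoisting idiom, with java.util.logging standing in for JBoss Logging (that substitution is this note's assumption, not part of the patch):

import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardHoisting {
    private static final Logger LOG = Logger.getLogger( GuardHoisting.class.getName() );

    public static void main(String[] args) {
        String[] spaces = { "orders", "customers", "items" };

        // Evaluate the guard once, outside the loop, mirroring the
        // 'debug' and 'stats' locals introduced in the patch.
        final boolean debug = LOG.isLoggable( Level.FINE );

        for ( String space : spaces ) {
            if ( debug ) {
                LOG.fine( "Pre-invalidating space [" + space + "]" );
            }
            // per-space cache work would go here
        }
    }
}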
@@ -269,7 +269,7 @@ public interface AvailableSettings {
     public static final String CURRENT_SESSION_CONTEXT_CLASS = "hibernate.current_session_context_class";
 
     /**
-     * Names the implementation of {@link org.hibernate.engine.transaction.spi.TransactionContext} to use for
+     * Names the implementation of {@link org.hibernate.engine.transaction.spi.TransactionFactory} to use for
      * creating {@link org.hibernate.Transaction} instances
      */
     public static final String TRANSACTION_STRATEGY = "hibernate.transaction.factory_class";

@@ -643,4 +643,13 @@ public interface AvailableSettings {
     // todo : add to Environment
     String SCHEMA_NAME_RESOLVER = "hibernate.schema_name_resolver";
     public static final String ENABLE_LAZY_LOAD_NO_TRANS = "hibernate.enable_lazy_load_no_trans";
 
+    public static final String HQL_BULK_ID_STRATEGY = "hibernate.hql.bulk_id_strategy";
+
+    /**
+     * Names the {@link org.hibernate.loader.BatchFetchStyle} to use.  Can specify either the
+     * {@link org.hibernate.loader.BatchFetchStyle} name (insensitively), or a
+     * {@link org.hibernate.loader.BatchFetchStyle} instance.
+     */
+    public static final String BATCH_FETCH_STYLE = "hibernate.batch_fetch_style";
 }
@@ -2418,7 +2418,9 @@ public class Configuration implements Serializable {
     }
 
     public void addSqlFunction(String functionName, SQLFunction function) {
-        sqlFunctions.put( functionName, function );
+        // HHH-7721: SQLFunctionRegistry expects all lowercase.  Enforce,
+        // just in case a user's customer dialect uses mixed cases.
+        sqlFunctions.put( functionName.toLowerCase(), function );
     }
 
     public TypeResolver getTypeResolver() {
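Reviewer note: the HHH-7721 fix above works because the registry's lookup side already lowercases the requested name, so registering under a mixed-case key would make the function unfindable. A standalone sketch of the invariant (the class here is illustrative, not Hibernate's SQLFunctionRegistry):

import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class CaseInsensitiveRegistry {
    private final Map<String, String> functions = new HashMap<>();

    // Normalize on write so that lookups, which also normalize, always match.
    public void register(String name, String definition) {
        functions.put( name.toLowerCase( Locale.ROOT ), definition );
    }

    public String find(String name) {
        return functions.get( name.toLowerCase( Locale.ROOT ) );
    }

    public static void main(String[] args) {
        CaseInsensitiveRegistry registry = new CaseInsensitiveRegistry();
        registry.register( "GroupConcat", "group_concat(?1)" );
        // Both lookups succeed regardless of the case used at registration.
        System.out.println( registry.find( "groupconcat" ) );
        System.out.println( registry.find( "GROUPCONCAT" ) );
    }
}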
@@ -29,8 +29,10 @@ import org.hibernate.ConnectionReleaseMode;
 import org.hibernate.EntityMode;
 import org.hibernate.MultiTenancyStrategy;
 import org.hibernate.cache.spi.QueryCacheFactory;
-import org.hibernate.hql.spi.QueryTranslatorFactory;
 import org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform;
+import org.hibernate.hql.spi.MultiTableBulkIdStrategy;
+import org.hibernate.hql.spi.QueryTranslatorFactory;
+import org.hibernate.loader.BatchFetchStyle;
 import org.hibernate.tuple.entity.EntityTuplizerFactory;
 
 /**

@@ -77,6 +79,7 @@ public final class Settings {
     private boolean namedQueryStartupCheckingEnabled;
     private EntityTuplizerFactory entityTuplizerFactory;
     private boolean checkNullability;
+    private boolean initializeLazyStateOutsideTransactions;
     // private ComponentTuplizerFactory componentTuplizerFactory; todo : HHH-3517 and HHH-1907
     // private BytecodeProvider bytecodeProvider;
     private String importFiles;

@@ -84,6 +87,10 @@ public final class Settings {
 
     private JtaPlatform jtaPlatform;
 
+    private MultiTableBulkIdStrategy multiTableBulkIdStrategy;
+    private BatchFetchStyle batchFetchStyle;
+
     /**
      * Package protected constructor
      */

@@ -411,4 +418,28 @@ public final class Settings {
     void setMultiTenancyStrategy(MultiTenancyStrategy multiTenancyStrategy) {
         this.multiTenancyStrategy = multiTenancyStrategy;
     }
 
+    public boolean isInitializeLazyStateOutsideTransactionsEnabled() {
+        return initializeLazyStateOutsideTransactions;
+    }
+
+    void setInitializeLazyStateOutsideTransactions(boolean initializeLazyStateOutsideTransactions) {
+        this.initializeLazyStateOutsideTransactions = initializeLazyStateOutsideTransactions;
+    }
+
+    public MultiTableBulkIdStrategy getMultiTableBulkIdStrategy() {
+        return multiTableBulkIdStrategy;
+    }
+
+    void setMultiTableBulkIdStrategy(MultiTableBulkIdStrategy multiTableBulkIdStrategy) {
+        this.multiTableBulkIdStrategy = multiTableBulkIdStrategy;
+    }
+
+    public BatchFetchStyle getBatchFetchStyle() {
+        return batchFetchStyle;
+    }
+
+    void setBatchFetchStyle(BatchFetchStyle batchFetchStyle) {
+        this.batchFetchStyle = batchFetchStyle;
+    }
 }
@@ -27,30 +27,34 @@ import java.io.Serializable;
 import java.util.Map;
 import java.util.Properties;
 
-import org.jboss.logging.Logger;
-
 import org.hibernate.ConnectionReleaseMode;
 import org.hibernate.EntityMode;
 import org.hibernate.HibernateException;
 import org.hibernate.MultiTenancyStrategy;
+import org.hibernate.boot.registry.classloading.spi.ClassLoaderService;
+import org.hibernate.boot.registry.selector.spi.StrategySelector;
 import org.hibernate.cache.internal.NoCachingRegionFactory;
 import org.hibernate.cache.internal.RegionFactoryInitiator;
 import org.hibernate.cache.internal.StandardQueryCacheFactory;
 import org.hibernate.cache.spi.QueryCacheFactory;
 import org.hibernate.cache.spi.RegionFactory;
+import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;
+import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;
+import org.hibernate.engine.jdbc.env.spi.ExtractedDatabaseMetaData;
 import org.hibernate.engine.jdbc.spi.JdbcServices;
+import org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform;
 import org.hibernate.engine.transaction.spi.TransactionFactory;
+import org.hibernate.hql.spi.MultiTableBulkIdStrategy;
+import org.hibernate.hql.spi.PersistentTableBulkIdStrategy;
 import org.hibernate.hql.spi.QueryTranslatorFactory;
+import org.hibernate.hql.spi.TemporaryTableBulkIdStrategy;
 import org.hibernate.internal.CoreMessageLogger;
 import org.hibernate.internal.util.StringHelper;
 import org.hibernate.internal.util.config.ConfigurationHelper;
+import org.hibernate.loader.BatchFetchStyle;
 import org.hibernate.service.ServiceRegistry;
-import org.hibernate.boot.registry.classloading.spi.ClassLoaderService;
-import org.hibernate.engine.jdbc.connections.spi.ConnectionProvider;
-import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;
-import org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform;
-import org.hibernate.engine.jdbc.env.spi.ExtractedDatabaseMetaData;
 import org.hibernate.tuple.entity.EntityTuplizerFactory;
+import org.jboss.logging.Logger;
 
 /**
  * Reads configuration properties and builds a {@link Settings} instance.
@@ -75,7 +79,7 @@ public class SettingsFactory implements Serializable {
 
         //SessionFactory name:
 
-        String sessionFactoryName = props.getProperty( Environment.SESSION_FACTORY_NAME );
+        String sessionFactoryName = props.getProperty( AvailableSettings.SESSION_FACTORY_NAME );
         settings.setSessionFactoryName( sessionFactoryName );
         settings.setSessionFactoryNameAlsoJndiName(
                 ConfigurationHelper.getBoolean( AvailableSettings.SESSION_FACTORY_NAME_IS_JNDI, props, true )

@@ -97,13 +101,25 @@ public class SettingsFactory implements Serializable {
         // Transaction settings:
         settings.setJtaPlatform( serviceRegistry.getService( JtaPlatform.class ) );
 
-        boolean flushBeforeCompletion = ConfigurationHelper.getBoolean(Environment.FLUSH_BEFORE_COMPLETION, properties);
+        MultiTableBulkIdStrategy multiTableBulkIdStrategy = serviceRegistry.getService( StrategySelector.class )
+                .resolveStrategy(
+                        MultiTableBulkIdStrategy.class,
+                        properties.getProperty( AvailableSettings.HQL_BULK_ID_STRATEGY )
+                );
+        if ( multiTableBulkIdStrategy == null ) {
+            multiTableBulkIdStrategy = jdbcServices.getDialect().supportsTemporaryTables()
+                    ? TemporaryTableBulkIdStrategy.INSTANCE
+                    : new PersistentTableBulkIdStrategy();
+        }
+        settings.setMultiTableBulkIdStrategy( multiTableBulkIdStrategy );
+
+        boolean flushBeforeCompletion = ConfigurationHelper.getBoolean(AvailableSettings.FLUSH_BEFORE_COMPLETION, properties);
         if ( debugEnabled ) {
             LOG.debugf( "Automatic flush during beforeCompletion(): %s", enabledDisabled(flushBeforeCompletion) );
         }
         settings.setFlushBeforeCompletionEnabled(flushBeforeCompletion);
 
-        boolean autoCloseSession = ConfigurationHelper.getBoolean(Environment.AUTO_CLOSE_SESSION, properties);
+        boolean autoCloseSession = ConfigurationHelper.getBoolean(AvailableSettings.AUTO_CLOSE_SESSION, properties);
         if ( debugEnabled ) {
             LOG.debugf( "Automatic session close at end of transaction: %s", enabledDisabled(autoCloseSession) );
         }
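Reviewer note: the new block resolves a user-named strategy first and only then falls back to a dialect-driven default. A self-contained sketch of that resolve-or-default shape (the types and registry here are illustrative stand-ins, not Hibernate's StrategySelector API):

import java.util.Map;

public class StrategyFallback {

    interface BulkIdStrategy { String describe(); }

    static final BulkIdStrategy TEMP_TABLE = () -> "temporary tables";
    static final BulkIdStrategy PERSISTENT_TABLE = () -> "persistent helper tables";

    // Stand-in for resolveStrategy(): returns null when nothing was configured.
    static BulkIdStrategy resolve(Map<String, BulkIdStrategy> registry, String configuredName) {
        return configuredName == null ? null : registry.get( configuredName );
    }

    public static void main(String[] args) {
        boolean dialectSupportsTempTables = true;
        BulkIdStrategy strategy = resolve( Map.of( "custom", PERSISTENT_TABLE ), null );
        if ( strategy == null ) {
            // No explicit setting: pick the default the dialect can actually support.
            strategy = dialectSupportsTempTables ? TEMP_TABLE : PERSISTENT_TABLE;
        }
        System.out.println( "Using bulk-id strategy: " + strategy.describe() );
    }
}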
@@ -111,7 +127,7 @@ public class SettingsFactory implements Serializable {
 
         //JDBC and connection settings:
 
-        int batchSize = ConfigurationHelper.getInt(Environment.STATEMENT_BATCH_SIZE, properties, 0);
+        int batchSize = ConfigurationHelper.getInt(AvailableSettings.STATEMENT_BATCH_SIZE, properties, 0);
         if ( !meta.supportsBatchUpdates() ) {
             batchSize = 0;
         }

@@ -120,14 +136,14 @@ public class SettingsFactory implements Serializable {
         }
         settings.setJdbcBatchSize(batchSize);
 
-        boolean jdbcBatchVersionedData = ConfigurationHelper.getBoolean(Environment.BATCH_VERSIONED_DATA, properties, false);
+        boolean jdbcBatchVersionedData = ConfigurationHelper.getBoolean(AvailableSettings.BATCH_VERSIONED_DATA, properties, false);
         if ( batchSize > 0 && debugEnabled ) {
             LOG.debugf( "JDBC batch updates for versioned data: %s", enabledDisabled(jdbcBatchVersionedData) );
         }
         settings.setJdbcBatchVersionedData(jdbcBatchVersionedData);
 
         boolean useScrollableResultSets = ConfigurationHelper.getBoolean(
-                Environment.USE_SCROLLABLE_RESULTSET,
+                AvailableSettings.USE_SCROLLABLE_RESULTSET,
                 properties,
                 meta.supportsScrollableResults()
         );

@@ -136,19 +152,19 @@ public class SettingsFactory implements Serializable {
         }
         settings.setScrollableResultSetsEnabled(useScrollableResultSets);
 
-        boolean wrapResultSets = ConfigurationHelper.getBoolean(Environment.WRAP_RESULT_SETS, properties, false);
+        boolean wrapResultSets = ConfigurationHelper.getBoolean(AvailableSettings.WRAP_RESULT_SETS, properties, false);
         if ( debugEnabled ) {
             LOG.debugf( "Wrap result sets: %s", enabledDisabled(wrapResultSets) );
         }
         settings.setWrapResultSetsEnabled(wrapResultSets);
 
-        boolean useGetGeneratedKeys = ConfigurationHelper.getBoolean(Environment.USE_GET_GENERATED_KEYS, properties, meta.supportsGetGeneratedKeys());
+        boolean useGetGeneratedKeys = ConfigurationHelper.getBoolean(AvailableSettings.USE_GET_GENERATED_KEYS, properties, meta.supportsGetGeneratedKeys());
         if ( debugEnabled ) {
             LOG.debugf( "JDBC3 getGeneratedKeys(): %s", enabledDisabled(useGetGeneratedKeys) );
         }
         settings.setGetGeneratedKeysEnabled(useGetGeneratedKeys);
 
-        Integer statementFetchSize = ConfigurationHelper.getInteger(Environment.STATEMENT_FETCH_SIZE, properties);
+        Integer statementFetchSize = ConfigurationHelper.getInteger(AvailableSettings.STATEMENT_FETCH_SIZE, properties);
         if ( statementFetchSize != null && debugEnabled ) {
             LOG.debugf( "JDBC result set fetch size: %s", statementFetchSize );
         }

@@ -160,7 +176,7 @@ public class SettingsFactory implements Serializable {
         }
         settings.setMultiTenancyStrategy( multiTenancyStrategy );
 
-        String releaseModeName = ConfigurationHelper.getString( Environment.RELEASE_CONNECTIONS, properties, "auto" );
+        String releaseModeName = ConfigurationHelper.getString( AvailableSettings.RELEASE_CONNECTIONS, properties, "auto" );
         if ( debugEnabled ) {
             LOG.debugf( "Connection release mode: %s", releaseModeName );
         }
@@ -183,10 +199,15 @@ public class SettingsFactory implements Serializable {
         }
         settings.setConnectionReleaseMode( releaseMode );
 
+        final BatchFetchStyle batchFetchStyle = BatchFetchStyle.interpret( properties.get( AvailableSettings.BATCH_FETCH_STYLE ) );
+        LOG.debugf( "Using BatchFetchStyle : " + batchFetchStyle.name() );
+        settings.setBatchFetchStyle( batchFetchStyle );
+
+
         //SQL Generation settings:
 
-        String defaultSchema = properties.getProperty( Environment.DEFAULT_SCHEMA );
-        String defaultCatalog = properties.getProperty( Environment.DEFAULT_CATALOG );
+        String defaultSchema = properties.getProperty( AvailableSettings.DEFAULT_SCHEMA );
+        String defaultCatalog = properties.getProperty( AvailableSettings.DEFAULT_CATALOG );
         if ( defaultSchema != null && debugEnabled ) {
             LOG.debugf( "Default schema: %s", defaultSchema );
         }
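Reviewer note: per the new javadoc on BATCH_FETCH_STYLE, interpret(...) accepts either an enum instance or a name. A standalone sketch of that name-or-instance contract (the Style enum and its constants are illustrative stand-ins for BatchFetchStyle, not Hibernate's class):

import java.util.Locale;

public class InterpretSketch {
    enum Style { LEGACY, PADDED, DYNAMIC }

    // Accept an enum instance directly, else resolve a name case-insensitively,
    // else fall back to a default -- mirroring the documented contract.
    static Style interpret(Object setting) {
        if ( setting == null ) {
            return Style.LEGACY;
        }
        if ( setting instanceof Style ) {
            return (Style) setting;
        }
        return Style.valueOf( setting.toString().trim().toUpperCase( Locale.ROOT ) );
    }

    public static void main(String[] args) {
        System.out.println( interpret( null ) );          // LEGACY (default)
        System.out.println( interpret( "padded" ) );      // case-insensitive name
        System.out.println( interpret( Style.DYNAMIC ) ); // instance passes through
    }
}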
@@ -196,31 +217,31 @@ public class SettingsFactory implements Serializable {
         settings.setDefaultSchemaName( defaultSchema );
         settings.setDefaultCatalogName( defaultCatalog );
 
-        Integer maxFetchDepth = ConfigurationHelper.getInteger( Environment.MAX_FETCH_DEPTH, properties );
+        Integer maxFetchDepth = ConfigurationHelper.getInteger( AvailableSettings.MAX_FETCH_DEPTH, properties );
         if ( maxFetchDepth != null ) {
             LOG.debugf( "Maximum outer join fetch depth: %s", maxFetchDepth );
         }
         settings.setMaximumFetchDepth( maxFetchDepth );
 
-        int batchFetchSize = ConfigurationHelper.getInt(Environment.DEFAULT_BATCH_FETCH_SIZE, properties, 1);
+        int batchFetchSize = ConfigurationHelper.getInt(AvailableSettings.DEFAULT_BATCH_FETCH_SIZE, properties, 1);
         if ( debugEnabled ) {
             LOG.debugf( "Default batch fetch size: %s", batchFetchSize );
         }
         settings.setDefaultBatchFetchSize( batchFetchSize );
 
-        boolean comments = ConfigurationHelper.getBoolean( Environment.USE_SQL_COMMENTS, properties );
+        boolean comments = ConfigurationHelper.getBoolean( AvailableSettings.USE_SQL_COMMENTS, properties );
         if ( debugEnabled ) {
             LOG.debugf( "Generate SQL with comments: %s", enabledDisabled(comments) );
         }
         settings.setCommentsEnabled( comments );
 
-        boolean orderUpdates = ConfigurationHelper.getBoolean( Environment.ORDER_UPDATES, properties );
+        boolean orderUpdates = ConfigurationHelper.getBoolean( AvailableSettings.ORDER_UPDATES, properties );
         if ( debugEnabled ) {
             LOG.debugf( "Order SQL updates by primary key: %s", enabledDisabled(orderUpdates) );
         }
         settings.setOrderUpdatesEnabled( orderUpdates );
 
-        boolean orderInserts = ConfigurationHelper.getBoolean(Environment.ORDER_INSERTS, properties);
+        boolean orderInserts = ConfigurationHelper.getBoolean(AvailableSettings.ORDER_INSERTS, properties);
         if ( debugEnabled ) {
             LOG.debugf( "Order SQL inserts for batching: %s", enabledDisabled(orderInserts) );
         }

@@ -230,13 +251,13 @@ public class SettingsFactory implements Serializable {
 
         settings.setQueryTranslatorFactory( createQueryTranslatorFactory( properties, serviceRegistry ) );
 
-        Map querySubstitutions = ConfigurationHelper.toMap( Environment.QUERY_SUBSTITUTIONS, " ,=;:\n\t\r\f", properties );
+        Map querySubstitutions = ConfigurationHelper.toMap( AvailableSettings.QUERY_SUBSTITUTIONS, " ,=;:\n\t\r\f", properties );
         if ( debugEnabled ) {
             LOG.debugf( "Query language substitutions: %s", querySubstitutions );
         }
         settings.setQuerySubstitutions( querySubstitutions );
 
-        boolean jpaqlCompliance = ConfigurationHelper.getBoolean( Environment.JPAQL_STRICT_COMPLIANCE, properties, false );
+        boolean jpaqlCompliance = ConfigurationHelper.getBoolean( AvailableSettings.JPAQL_STRICT_COMPLIANCE, properties, false );
         if ( debugEnabled ) {
             LOG.debugf( "JPA-QL strict compliance: %s", enabledDisabled(jpaqlCompliance) );
         }

@@ -244,13 +265,13 @@ public class SettingsFactory implements Serializable {
 
         // Second-level / query cache:
 
-        boolean useSecondLevelCache = ConfigurationHelper.getBoolean( Environment.USE_SECOND_LEVEL_CACHE, properties, true );
+        boolean useSecondLevelCache = ConfigurationHelper.getBoolean( AvailableSettings.USE_SECOND_LEVEL_CACHE, properties, true );
         if ( debugEnabled ) {
             LOG.debugf( "Second-level cache: %s", enabledDisabled(useSecondLevelCache) );
         }
         settings.setSecondLevelCacheEnabled( useSecondLevelCache );
 
-        boolean useQueryCache = ConfigurationHelper.getBoolean(Environment.USE_QUERY_CACHE, properties);
+        boolean useQueryCache = ConfigurationHelper.getBoolean(AvailableSettings.USE_QUERY_CACHE, properties);
         if ( debugEnabled ) {
             LOG.debugf( "Query cache: %s", enabledDisabled(useQueryCache) );
         }

@@ -268,13 +289,13 @@ public class SettingsFactory implements Serializable {
         }
         settings.setCacheRegionPrefix( prefix );
 
-        boolean useStructuredCacheEntries = ConfigurationHelper.getBoolean( Environment.USE_STRUCTURED_CACHE, properties, false );
+        boolean useStructuredCacheEntries = ConfigurationHelper.getBoolean( AvailableSettings.USE_STRUCTURED_CACHE, properties, false );
         if ( debugEnabled ) {
             LOG.debugf( "Structured second-level cache entries: %s", enabledDisabled(useStructuredCacheEntries) );
         }
         settings.setStructuredCacheEntriesEnabled( useStructuredCacheEntries );
 
-        boolean useIdentifierRollback = ConfigurationHelper.getBoolean( Environment.USE_IDENTIFIER_ROLLBACK, properties );
+        boolean useIdentifierRollback = ConfigurationHelper.getBoolean( AvailableSettings.USE_IDENTIFIER_ROLLBACK, properties );
         if ( debugEnabled ) {
             LOG.debugf( "Deleted entity synthetic identifier rollback: %s", enabledDisabled(useIdentifierRollback) );
         }

@@ -282,7 +303,7 @@ public class SettingsFactory implements Serializable {
 
         //Schema export:
 
-        String autoSchemaExport = properties.getProperty( Environment.HBM2DDL_AUTO );
+        String autoSchemaExport = properties.getProperty( AvailableSettings.HBM2DDL_AUTO );
         if ( "validate".equals(autoSchemaExport) ) {
             settings.setAutoValidateSchema( true );
         }

@@ -296,21 +317,21 @@ public class SettingsFactory implements Serializable {
             settings.setAutoCreateSchema( true );
             settings.setAutoDropSchema( true );
         }
-        settings.setImportFiles( properties.getProperty( Environment.HBM2DDL_IMPORT_FILES ) );
+        settings.setImportFiles( properties.getProperty( AvailableSettings.HBM2DDL_IMPORT_FILES ) );
 
-        EntityMode defaultEntityMode = EntityMode.parse( properties.getProperty( Environment.DEFAULT_ENTITY_MODE ) );
+        EntityMode defaultEntityMode = EntityMode.parse( properties.getProperty( AvailableSettings.DEFAULT_ENTITY_MODE ) );
         if ( debugEnabled ) {
             LOG.debugf( "Default entity-mode: %s", defaultEntityMode );
         }
         settings.setDefaultEntityMode( defaultEntityMode );
 
-        boolean namedQueryChecking = ConfigurationHelper.getBoolean( Environment.QUERY_STARTUP_CHECKING, properties, true );
+        boolean namedQueryChecking = ConfigurationHelper.getBoolean( AvailableSettings.QUERY_STARTUP_CHECKING, properties, true );
         if ( debugEnabled ) {
             LOG.debugf( "Named query checking : %s", enabledDisabled(namedQueryChecking) );
         }
         settings.setNamedQueryStartupCheckingEnabled( namedQueryChecking );
 
-        boolean checkNullability = ConfigurationHelper.getBoolean(Environment.CHECK_NULLABILITY, properties, true);
+        boolean checkNullability = ConfigurationHelper.getBoolean(AvailableSettings.CHECK_NULLABILITY, properties, true);
         if ( debugEnabled ) {
             LOG.debugf( "Check Nullability in Core (should be disabled when Bean Validation is on): %s", enabledDisabled(checkNullability) );
         }
@@ -319,11 +340,21 @@ public class SettingsFactory implements Serializable {
         // TODO: Does EntityTuplizerFactory really need to be configurable? revisit for HHH-6383
         settings.setEntityTuplizerFactory( new EntityTuplizerFactory() );
 
-        // String provider = properties.getProperty( Environment.BYTECODE_PROVIDER );
+        // String provider = properties.getProperty( AvailableSettings.BYTECODE_PROVIDER );
         // log.info( "Bytecode provider name : " + provider );
         // BytecodeProvider bytecodeProvider = buildBytecodeProvider( provider );
         // settings.setBytecodeProvider( bytecodeProvider );
 
+        boolean initializeLazyStateOutsideTransactionsEnabled = ConfigurationHelper.getBoolean(
+                AvailableSettings.ENABLE_LAZY_LOAD_NO_TRANS,
+                properties,
+                false
+        );
+        if ( debugEnabled ) {
+            LOG.debugf( "Allow initialization of lazy state outside session : : %s", enabledDisabled( initializeLazyStateOutsideTransactionsEnabled ) );
+        }
+        settings.setInitializeLazyStateOutsideTransactions( initializeLazyStateOutsideTransactionsEnabled );
+
         return settings;
 
     }

@@ -344,7 +375,7 @@ public class SettingsFactory implements Serializable {
 
     protected QueryCacheFactory createQueryCacheFactory(Properties properties, ServiceRegistry serviceRegistry) {
         String queryCacheFactoryClassName = ConfigurationHelper.getString(
-                Environment.QUERY_CACHE_FACTORY, properties, StandardQueryCacheFactory.class.getName()
+                AvailableSettings.QUERY_CACHE_FACTORY, properties, StandardQueryCacheFactory.class.getName()
         );
         LOG.debugf( "Query cache factory: %s", queryCacheFactoryClassName );
         try {

@@ -362,7 +393,7 @@ public class SettingsFactory implements Serializable {
         // todo : REMOVE! THIS IS TOTALLY A TEMPORARY HACK FOR org.hibernate.cfg.AnnotationBinder which will be going away
         String regionFactoryClassName = RegionFactoryInitiator.mapLegacyNames(
                 ConfigurationHelper.getString(
-                        Environment.CACHE_REGION_FACTORY, properties, null
+                        AvailableSettings.CACHE_REGION_FACTORY, properties, null
                 )
         );
         if ( regionFactoryClassName == null ) {

@@ -392,7 +423,7 @@ public class SettingsFactory implements Serializable {
 
     protected QueryTranslatorFactory createQueryTranslatorFactory(Properties properties, ServiceRegistry serviceRegistry) {
         String className = ConfigurationHelper.getString(
-                Environment.QUERY_TRANSLATOR, properties, "org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory"
+                AvailableSettings.QUERY_TRANSLATOR, properties, "org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory"
         );
         LOG.debugf( "Query translator: %s", className );
         try {
@@ -795,7 +795,7 @@ public abstract class CollectionBinder {
         String entityName = oneToMany.getReferencedEntityName();
         PersistentClass referenced = mappings.getClass( entityName );
         Backref prop = new Backref();
-        prop.setName( '_' + fkJoinColumns[0].getPropertyName() + "Backref" );
+        prop.setName( '_' + fkJoinColumns[0].getPropertyName() + '_' + fkJoinColumns[0].getLogicalColumnName() + "Backref" );
         prop.setUpdateable( false );
         prop.setSelectable( false );
         prop.setCollectionRole( collection.getRole() );
@@ -24,10 +24,10 @@
 package org.hibernate.cfg.annotations;
 
 import java.util.Map;
 
 import javax.persistence.EmbeddedId;
 import javax.persistence.Id;
-import org.jboss.logging.Logger;
+import javax.persistence.Lob;
 
 import org.hibernate.AnnotationException;
 import org.hibernate.annotations.Generated;

@@ -57,6 +57,7 @@ import org.hibernate.mapping.RootClass;
 import org.hibernate.mapping.SimpleValue;
 import org.hibernate.mapping.ToOne;
 import org.hibernate.mapping.Value;
+import org.jboss.logging.Logger;
 
 /**
  * @author Emmanuel Bernard
@@ -264,6 +265,7 @@ public class PropertyBinder {
         prop.setLazy( lazy );
         prop.setCascade( cascade );
         prop.setPropertyAccessorName( accessType.getType() );
 
         Generated ann = property != null ?
                 property.getAnnotation( Generated.class ) :
                 null;

@@ -286,6 +288,7 @@ public class PropertyBinder {
             prop.setGeneration( PropertyGeneration.parse( generated.toString().toLowerCase() ) );
             }
         }
 
         NaturalId naturalId = property != null ? property.getAnnotation( NaturalId.class ) : null;
         if ( naturalId != null ) {
             if ( ! entityBinder.isRootEntity() ) {

@@ -296,6 +299,11 @@ public class PropertyBinder {
             }
             prop.setNaturalIdentifier( true );
         }
 
+        // HHH-4635 -- needed for dialect-specific property ordering
+        Lob lob = property != null ? property.getAnnotation( Lob.class ) : null;
+        prop.setLob( lob != null );
+
         prop.setInsertable( insertable );
         prop.setUpdateable( updatable );
@@ -28,6 +28,7 @@ import java.lang.reflect.TypeVariable;
 import java.util.Calendar;
 import java.util.Date;
 import java.util.Properties;
 
 import javax.persistence.AttributeConverter;
 import javax.persistence.Convert;
 import javax.persistence.Converts;

@@ -227,7 +228,6 @@ public class SimpleValueBinder {
                 .toXClass( Serializable.class )
                 .isAssignableFrom( returnedClassOrElement ) ) {
             type = SerializableToBlobType.class.getName();
-            //typeParameters = new Properties();
             typeParameters.setProperty(
                     SerializableToBlobType.CLASS_NAME,
                     returnedClassOrElement.getName()

@@ -618,6 +618,7 @@ public class SimpleValueBinder {
         parameters.put( DynamicParameterizedType.IS_PRIMARY_KEY, Boolean.toString( key ) );
 
         parameters.put( DynamicParameterizedType.ENTITY, persistentClassName );
+        parameters.put( DynamicParameterizedType.XPROPERTY, xproperty );
         parameters.put( DynamicParameterizedType.PROPERTY, xproperty.getName() );
         parameters.put( DynamicParameterizedType.ACCESS_TYPE, accessType.getType() );
         simpleValue.setTypeParameters( parameters );
@@ -33,13 +33,12 @@ import java.util.List;
 import java.util.ListIterator;
 import javax.naming.NamingException;
 
-import org.jboss.logging.Logger;
 
 import org.hibernate.AssertionFailure;
 import org.hibernate.HibernateException;
 import org.hibernate.LazyInitializationException;
 import org.hibernate.Session;
-import org.hibernate.cfg.AvailableSettings;
 import org.hibernate.collection.spi.PersistentCollection;
 import org.hibernate.engine.internal.ForeignKeys;
 import org.hibernate.engine.spi.CollectionEntry;

@@ -56,6 +55,7 @@ import org.hibernate.persister.collection.CollectionPersister;
 import org.hibernate.persister.entity.EntityPersister;
 import org.hibernate.pretty.MessageHelper;
 import org.hibernate.type.Type;
+import org.jboss.logging.Logger;
 
 /**
  * Base class implementing {@link org.hibernate.collection.spi.PersistentCollection}
@@ -140,6 +140,8 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
             @Override
             public Boolean doWork() {
                 CollectionEntry entry = session.getPersistenceContext().getCollectionEntry( AbstractPersistentCollection.this );
 
+                if ( entry != null ) {
                 CollectionPersister persister = entry.getLoadedPersister();
                 if ( persister.isExtraLazy() ) {
                     if ( hasQueuedOperations() ) {

@@ -151,6 +153,10 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
                     else {
                         read();
                     }
+                }
+                else{
+                    throwLazyInitializationExceptionIfNotConnected();
+                }
                 return false;
             }
         }
@@ -170,6 +176,7 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
     private <T> T withTemporarySessionIfNeeded(LazyInitializationWork<T> lazyInitializationWork) {
         SessionImplementor originalSession = null;
         boolean isTempSession = false;
+        boolean isJTA = false;
 
         if ( session == null ) {
             if ( specjLazyLoad ) {

@@ -202,6 +209,22 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
         }
 
         if ( isTempSession ) {
+            // TODO: On the next major release, add an
+            // 'isJTA' or 'getTransactionFactory' method to Session.
+            isJTA = session.getTransactionCoordinator()
+                    .getTransactionContext().getTransactionEnvironment()
+                    .getTransactionFactory()
+                    .compatibleWithJtaSynchronization();
+
+            if ( !isJTA ) {
+                // Explicitly handle the transactions only if we're not in
+                // a JTA environment.  A lazy loading temporary session can
+                // be created even if a current session and transaction are
+                // open (ex: session.clear() was used).  We must prevent
+                // multiple transactions.
+                ( ( Session) session ).beginTransaction();
+            }
+
             session.getPersistenceContext().addUninitializedDetachedCollection(
                     session.getFactory().getCollectionPersister( getRole() ),
                     this
@@ -215,6 +238,9 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
             if ( isTempSession ) {
                 // make sure the just opened temp session gets closed!
                 try {
+                    if ( !isJTA ) {
+                        ( ( Session) session ).getTransaction().commit();
+                    }
                     ( (Session) session ).close();
                 }
                 catch (Exception e) {
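Reviewer note: the guard added in this pair of hunks -- begin a local transaction only when the environment is not JTA-managed, and commit under the same flag before closing -- avoids nesting a resource-local transaction inside one a JTA coordinator already owns. A standalone sketch of the shape, with a made-up TempSession type standing in for Hibernate's internals:

public class JtaGuardSketch {

    interface TempSession extends AutoCloseable {
        void beginTransaction();
        void commit();
        @Override void close();
    }

    static void withTempSession(TempSession session, boolean environmentIsJta, Runnable work) {
        // Only drive the transaction ourselves when no JTA coordinator owns it.
        if ( !environmentIsJta ) {
            session.beginTransaction();
        }
        try {
            work.run();
        }
        finally {
            if ( !environmentIsJta ) {
                session.commit();
            }
            session.close(); // the temp session is always closed
        }
    }

    public static void main(String[] args) {
        TempSession session = new TempSession() {
            public void beginTransaction() { System.out.println( "begin" ); }
            public void commit() { System.out.println( "commit" ); }
            public void close() { System.out.println( "close" ); }
        };
        withTempSession( session, false, () -> System.out.println( "lazy init work" ) );
    }
}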
@@ -580,11 +606,7 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
 
     protected void prepareForPossibleSpecialSpecjInitialization() {
         if ( session != null ) {
-            specjLazyLoad = Boolean.parseBoolean(
-                    session.getFactory()
-                            .getProperties()
-                            .getProperty( AvailableSettings.ENABLE_LAZY_LOAD_NO_TRANS )
-            );
+            specjLazyLoad = session.getFactory().getSettings().isInitializeLazyStateOutsideTransactionsEnabled();
 
             if ( specjLazyLoad && sessionFactoryUuid == null ) {
                 try {

@@ -622,9 +644,8 @@ public abstract class AbstractPersistentCollection implements Serializable, PersistentCollection {
             throw new HibernateException(
                     "Illegal attempt to associate a collection with two open sessions: " +
                             MessageHelper.collectionInfoString(
-                                    ce.getLoadedPersister(),
-                                    ce.getLoadedKey(),
-                                    session.getFactory()
+                                    ce.getLoadedPersister(), this,
+                                    ce.getLoadedKey(), session
                             )
             );
         }
@@ -296,6 +296,7 @@ public class PersistentMap extends AbstractPersistentCollection implements Map {
             for ( Object[] entry : loadingEntries ) {
                 map.put( entry[0], entry[1] );
             }
+            loadingEntries = null;
         }
         return super.endRead();
     }
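Reviewer note: nulling loadingEntries once the staged pairs are flushed into the map matters beyond hygiene -- the same collection instance can be read again later, and a stale, non-null buffer would replay old entries. A minimal sketch of the load-then-clear pattern (illustrative names, not Hibernate's):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StagedMapLoad {
    private List<Object[]> loadingEntries;
    private final Map<Object, Object> map = new HashMap<>();

    void stage(Object key, Object value) {
        if ( loadingEntries == null ) {
            loadingEntries = new ArrayList<>();
        }
        loadingEntries.add( new Object[] { key, value } );
    }

    boolean endRead() {
        if ( loadingEntries != null ) {
            for ( Object[] entry : loadingEntries ) {
                map.put( entry[0], entry[1] );
            }
            // Drop the staging buffer so a later re-read starts clean.
            loadingEntries = null;
        }
        return true;
    }

    public static void main(String[] args) {
        StagedMapLoad m = new StagedMapLoad();
        m.stage( "k", "v" );
        m.endRead();
        System.out.println( m.map );
    }
}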
@@ -31,6 +31,9 @@ import org.hibernate.cfg.Environment;
 import org.hibernate.dialect.function.NoArgSQLFunction;
 import org.hibernate.dialect.function.StandardSQLFunction;
 import org.hibernate.dialect.function.VarArgsSQLFunction;
+import org.hibernate.dialect.pagination.LimitHandler;
+import org.hibernate.dialect.pagination.CUBRIDLimitHandler;
+import org.hibernate.engine.spi.RowSelection;
 import org.hibernate.type.StandardBasicTypes;
 
 /**

@@ -39,94 +42,220 @@ import org.hibernate.type.StandardBasicTypes;
  * @author Seok Jeong Il
  */
 public class CUBRIDDialect extends Dialect {
-    @Override
-    protected String getIdentityColumnString() throws MappingException {
-        return "auto_increment"; //starts with 1, implicitly
-    }
-
-    @Override
-    public String getIdentitySelectString(String table, String column, int type)
-            throws MappingException {
-        // CUBRID 8.4.0 support last_insert_id()
-        // return "select last_insert_id()";
-        return "select current_val from db_serial where name = '" + ( table + "_ai_" + column ).toLowerCase() + "'";
-    }
-
     public CUBRIDDialect() {
         super();
 
+        registerColumnType( Types.BIGINT, "bigint" );
         registerColumnType( Types.BIT, "bit(8)" );
-        registerColumnType( Types.BIGINT, "numeric(19,0)" );
+        registerColumnType( Types.BLOB, "bit varying(65535)" );
-        registerColumnType( Types.SMALLINT, "short" );
+        registerColumnType( Types.BOOLEAN, "bit(8)");
-        registerColumnType( Types.TINYINT, "short" );
-        registerColumnType( Types.INTEGER, "integer" );
         registerColumnType( Types.CHAR, "char(1)" );
-        registerColumnType( Types.VARCHAR, 4000, "varchar($l)" );
+        registerColumnType( Types.CLOB, "string" );
-        registerColumnType( Types.FLOAT, "float" );
-        registerColumnType( Types.DOUBLE, "double" );
         registerColumnType( Types.DATE, "date" );
+        registerColumnType( Types.DECIMAL, "decimal" );
+        registerColumnType( Types.DOUBLE, "double" );
+        registerColumnType( Types.FLOAT, "float" );
+        registerColumnType( Types.INTEGER, "int" );
+        registerColumnType( Types.NUMERIC, "numeric($p,$s)" );
+        registerColumnType( Types.REAL, "double" );
+        registerColumnType( Types.SMALLINT, "short" );
         registerColumnType( Types.TIME, "time" );
         registerColumnType( Types.TIMESTAMP, "timestamp" );
+        registerColumnType( Types.TINYINT, "short" );
         registerColumnType( Types.VARBINARY, 2000, "bit varying($l)" );
-        registerColumnType( Types.NUMERIC, "numeric($p,$s)" );
+        registerColumnType( Types.VARCHAR, "string" );
-        registerColumnType( Types.BLOB, "blob" );
+        registerColumnType( Types.VARCHAR, 2000, "varchar($l)" );
-        registerColumnType( Types.CLOB, "string" );
+        registerColumnType( Types.VARCHAR, 255, "varchar($l)" );
 
         getDefaultProperties().setProperty(Environment.USE_STREAMS_FOR_BINARY, "true");
         getDefaultProperties().setProperty(Environment.STATEMENT_BATCH_SIZE, DEFAULT_BATCH_SIZE);
 
-        registerFunction( "substring", new StandardSQLFunction( "substr", StandardBasicTypes.STRING ) );
+        registerFunction("ascii", new StandardSQLFunction("ascii", StandardBasicTypes.INTEGER) );
+        registerFunction("bin", new StandardSQLFunction("bin", StandardBasicTypes.STRING) );
+        registerFunction("char_length", new StandardSQLFunction("char_length", StandardBasicTypes.LONG) );
+        registerFunction("character_length", new StandardSQLFunction("character_length", StandardBasicTypes.LONG) );
+        registerFunction("lengthb", new StandardSQLFunction("lengthb", StandardBasicTypes.LONG) );
+        registerFunction("lengthh", new StandardSQLFunction("lengthh", StandardBasicTypes.LONG) );
+        registerFunction("lcase", new StandardSQLFunction("lcase") );
+        registerFunction("lower", new StandardSQLFunction("lower") );
+        registerFunction("ltrim", new StandardSQLFunction("ltrim") );
+        registerFunction("reverse", new StandardSQLFunction("reverse") );
+        registerFunction("rtrim", new StandardSQLFunction("rtrim") );
         registerFunction("trim", new StandardSQLFunction("trim") );
+        registerFunction("space", new StandardSQLFunction("space", StandardBasicTypes.STRING) );
+        registerFunction("ucase", new StandardSQLFunction("ucase") );
+        registerFunction("upper", new StandardSQLFunction("upper") );
+
+        registerFunction("abs", new StandardSQLFunction("abs") );
+        registerFunction("sign", new StandardSQLFunction("sign", StandardBasicTypes.INTEGER) );
+
+        registerFunction("acos", new StandardSQLFunction("acos", StandardBasicTypes.DOUBLE) );
+        registerFunction("asin", new StandardSQLFunction("asin", StandardBasicTypes.DOUBLE) );
+        registerFunction("atan", new StandardSQLFunction("atan", StandardBasicTypes.DOUBLE) );
+        registerFunction("cos", new StandardSQLFunction("cos", StandardBasicTypes.DOUBLE) );
+        registerFunction("cot", new StandardSQLFunction("cot", StandardBasicTypes.DOUBLE) );
+        registerFunction("exp", new StandardSQLFunction("exp", StandardBasicTypes.DOUBLE) );
+        registerFunction("ln", new StandardSQLFunction("ln", StandardBasicTypes.DOUBLE) );
+        registerFunction("log2", new StandardSQLFunction("log2", StandardBasicTypes.DOUBLE) );
+        registerFunction("log10", new StandardSQLFunction("log10", StandardBasicTypes.DOUBLE) );
+        registerFunction("pi", new NoArgSQLFunction("pi", StandardBasicTypes.DOUBLE) );
+        registerFunction("rand", new NoArgSQLFunction("rand", StandardBasicTypes.DOUBLE) );
+        registerFunction("random", new NoArgSQLFunction("random", StandardBasicTypes.DOUBLE) );
+        registerFunction("sin", new StandardSQLFunction("sin", StandardBasicTypes.DOUBLE) );
+        registerFunction("sqrt", new StandardSQLFunction("sqrt", StandardBasicTypes.DOUBLE) );
+        registerFunction("tan", new StandardSQLFunction("tan", StandardBasicTypes.DOUBLE) );
+
+        registerFunction("radians", new StandardSQLFunction("radians", StandardBasicTypes.DOUBLE) );
+        registerFunction("degrees", new StandardSQLFunction("degrees", StandardBasicTypes.DOUBLE) );
+
+        registerFunction("ceil", new StandardSQLFunction("ceil", StandardBasicTypes.INTEGER) );
+        registerFunction("floor", new StandardSQLFunction("floor", StandardBasicTypes.INTEGER) );
+        registerFunction("round", new StandardSQLFunction("round") );
+
+        registerFunction("datediff", new StandardSQLFunction("datediff", StandardBasicTypes.INTEGER) );
+        registerFunction("timediff", new StandardSQLFunction("timediff", StandardBasicTypes.TIME) );
+
+        registerFunction("date", new StandardSQLFunction("date", StandardBasicTypes.DATE) );
+        registerFunction("curdate", new NoArgSQLFunction("curdate", StandardBasicTypes.DATE) );
+        registerFunction("current_date", new NoArgSQLFunction("current_date", StandardBasicTypes.DATE, false) );
+        registerFunction("sys_date", new NoArgSQLFunction("sys_date", StandardBasicTypes.DATE, false) );
+        registerFunction("sysdate", new NoArgSQLFunction("sysdate", StandardBasicTypes.DATE, false) );
+
+        registerFunction("time", new StandardSQLFunction("time", StandardBasicTypes.TIME) );
+        registerFunction("curtime", new NoArgSQLFunction("curtime", StandardBasicTypes.TIME) );
+        registerFunction("current_time", new NoArgSQLFunction("current_time", StandardBasicTypes.TIME, false) );
+        registerFunction("sys_time", new NoArgSQLFunction("sys_time", StandardBasicTypes.TIME, false) );
+        registerFunction("systime", new NoArgSQLFunction("systime", StandardBasicTypes.TIME, false) );
+
+        registerFunction("timestamp", new StandardSQLFunction("timestamp", StandardBasicTypes.TIMESTAMP) );
+        registerFunction("current_timestamp", new NoArgSQLFunction("current_timestamp", StandardBasicTypes.TIMESTAMP, false) );
+        registerFunction("sys_timestamp", new NoArgSQLFunction("sys_timestamp", StandardBasicTypes.TIMESTAMP, false) );
+        registerFunction("systimestamp", new NoArgSQLFunction("systimestamp", StandardBasicTypes.TIMESTAMP, false) );
+        registerFunction("localtime", new NoArgSQLFunction("localtime", StandardBasicTypes.TIMESTAMP, false) );
+        registerFunction("localtimestamp", new NoArgSQLFunction("localtimestamp", StandardBasicTypes.TIMESTAMP, false) );
+
+        registerFunction("day", new StandardSQLFunction("day", StandardBasicTypes.INTEGER) );
+        registerFunction("dayofmonth", new StandardSQLFunction("dayofmonth", StandardBasicTypes.INTEGER) );
+        registerFunction("dayofweek", new StandardSQLFunction("dayofweek", StandardBasicTypes.INTEGER) );
+        registerFunction("dayofyear", new StandardSQLFunction("dayofyear", StandardBasicTypes.INTEGER) );
+        registerFunction("from_days", new StandardSQLFunction("from_days", StandardBasicTypes.DATE) );
|
||||||
|
registerFunction("from_unixtime", new StandardSQLFunction("from_unixtime", StandardBasicTypes.TIMESTAMP) );
|
||||||
|
registerFunction("last_day", new StandardSQLFunction("last_day", StandardBasicTypes.DATE) );
|
||||||
|
registerFunction("minute", new StandardSQLFunction("minute", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("month", new StandardSQLFunction("month", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("months_between", new StandardSQLFunction("months_between", StandardBasicTypes.DOUBLE) );
|
||||||
|
registerFunction("now", new NoArgSQLFunction("now", StandardBasicTypes.TIMESTAMP) );
|
||||||
|
registerFunction("quarter", new StandardSQLFunction("quarter", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("second", new StandardSQLFunction("second", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("sec_to_time", new StandardSQLFunction("sec_to_time", StandardBasicTypes.TIME) );
|
||||||
|
registerFunction("time_to_sec", new StandardSQLFunction("time_to_sec", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("to_days", new StandardSQLFunction("to_days", StandardBasicTypes.LONG) );
|
||||||
|
registerFunction("unix_timestamp", new StandardSQLFunction("unix_timestamp", StandardBasicTypes.LONG) );
|
||||||
|
registerFunction("utc_date", new NoArgSQLFunction("utc_date", StandardBasicTypes.STRING) );
|
||||||
|
registerFunction("utc_time", new NoArgSQLFunction("utc_time", StandardBasicTypes.STRING) );
|
||||||
|
registerFunction("week", new StandardSQLFunction("week", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("weekday", new StandardSQLFunction("weekday", StandardBasicTypes.INTEGER) );
|
||||||
|
registerFunction("year", new StandardSQLFunction("year", StandardBasicTypes.INTEGER) );
|
||||||
|
|
||||||
|
registerFunction("hex", new StandardSQLFunction("hex", StandardBasicTypes.STRING) );
|
||||||
|
|
||||||
|
registerFunction("octet_length", new StandardSQLFunction("octet_length", StandardBasicTypes.LONG) );
|
||||||
|
registerFunction("bit_length", new StandardSQLFunction("bit_length", StandardBasicTypes.LONG) );
|
||||||
|
|
||||||
|
registerFunction("bit_count", new StandardSQLFunction("bit_count", StandardBasicTypes.LONG) );
|
||||||
|
registerFunction("md5", new StandardSQLFunction("md5", StandardBasicTypes.STRING) );
|
||||||
|
|
||||||
|
registerFunction( "concat", new StandardSQLFunction( "concat", StandardBasicTypes.STRING ) );
|
||||||
|
|
||||||
|
registerFunction("substring", new StandardSQLFunction("substring", StandardBasicTypes.STRING) );
|
||||||
|
registerFunction("substr", new StandardSQLFunction("substr", StandardBasicTypes.STRING) );
|
||||||
|
|
||||||
registerFunction("length", new StandardSQLFunction("length", StandardBasicTypes.INTEGER) );
|
registerFunction("length", new StandardSQLFunction("length", StandardBasicTypes.INTEGER) );
|
||||||
registerFunction("bit_length",new StandardSQLFunction("bit_length", StandardBasicTypes.INTEGER) );
|
registerFunction("bit_length",new StandardSQLFunction("bit_length", StandardBasicTypes.INTEGER) );
|
||||||
registerFunction("coalesce", new StandardSQLFunction("coalesce") );
|
registerFunction("coalesce", new StandardSQLFunction("coalesce") );
|
||||||
registerFunction("nullif", new StandardSQLFunction("nullif") );
|
registerFunction("nullif", new StandardSQLFunction("nullif") );
|
||||||
registerFunction( "abs", new StandardSQLFunction( "abs" ) );
|
|
||||||
registerFunction("mod", new StandardSQLFunction("mod") );
|
registerFunction("mod", new StandardSQLFunction("mod") );
|
||||||
registerFunction( "upper", new StandardSQLFunction( "upper" ) );
|
|
||||||
registerFunction( "lower", new StandardSQLFunction( "lower" ) );
|
|
||||||
|
|
||||||
registerFunction("power", new StandardSQLFunction("power") );
|
registerFunction("power", new StandardSQLFunction("power") );
|
||||||
registerFunction("stddev", new StandardSQLFunction("stddev") );
|
registerFunction("stddev", new StandardSQLFunction("stddev") );
|
||||||
registerFunction("variance", new StandardSQLFunction("variance") );
|
registerFunction("variance", new StandardSQLFunction("variance") );
|
||||||
registerFunction( "round", new StandardSQLFunction( "round" ) );
|
|
||||||
registerFunction("trunc", new StandardSQLFunction("trunc") );
|
registerFunction("trunc", new StandardSQLFunction("trunc") );
|
||||||
registerFunction( "ceil", new StandardSQLFunction( "ceil" ) );
|
|
||||||
registerFunction( "floor", new StandardSQLFunction( "floor" ) );
|
|
||||||
registerFunction( "ltrim", new StandardSQLFunction( "ltrim" ) );
|
|
||||||
registerFunction( "rtrim", new StandardSQLFunction( "rtrim" ) );
|
|
||||||
registerFunction("nvl", new StandardSQLFunction("nvl") );
|
registerFunction("nvl", new StandardSQLFunction("nvl") );
|
||||||
registerFunction("nvl2", new StandardSQLFunction("nvl2") );
|
registerFunction("nvl2", new StandardSQLFunction("nvl2") );
|
||||||
registerFunction( "sign", new StandardSQLFunction( "sign", StandardBasicTypes.INTEGER ) );
|
|
||||||
registerFunction("chr", new StandardSQLFunction("chr", StandardBasicTypes.CHARACTER));
|
registerFunction("chr", new StandardSQLFunction("chr", StandardBasicTypes.CHARACTER));
|
||||||
registerFunction("to_char", new StandardSQLFunction("to_char", StandardBasicTypes.STRING) );
|
registerFunction("to_char", new StandardSQLFunction("to_char", StandardBasicTypes.STRING) );
|
||||||
registerFunction("to_date", new StandardSQLFunction("to_date", StandardBasicTypes.TIMESTAMP));
|
registerFunction("to_date", new StandardSQLFunction("to_date", StandardBasicTypes.TIMESTAMP));
|
||||||
registerFunction( "last_day", new StandardSQLFunction( "last_day", StandardBasicTypes.DATE ) );
|
|
||||||
registerFunction("instr", new StandardSQLFunction("instr", StandardBasicTypes.INTEGER) );
|
registerFunction("instr", new StandardSQLFunction("instr", StandardBasicTypes.INTEGER) );
|
||||||
registerFunction("instrb", new StandardSQLFunction("instrb", StandardBasicTypes.INTEGER) );
|
registerFunction("instrb", new StandardSQLFunction("instrb", StandardBasicTypes.INTEGER) );
|
||||||
registerFunction("lpad", new StandardSQLFunction("lpad", StandardBasicTypes.STRING) );
|
registerFunction("lpad", new StandardSQLFunction("lpad", StandardBasicTypes.STRING) );
|
||||||
registerFunction("replace", new StandardSQLFunction("replace", StandardBasicTypes.STRING) );
|
registerFunction("replace", new StandardSQLFunction("replace", StandardBasicTypes.STRING) );
|
||||||
registerFunction("rpad", new StandardSQLFunction("rpad", StandardBasicTypes.STRING) );
|
registerFunction("rpad", new StandardSQLFunction("rpad", StandardBasicTypes.STRING) );
|
||||||
registerFunction( "substr", new StandardSQLFunction( "substr", StandardBasicTypes.STRING ) );
|
|
||||||
registerFunction( "substrb", new StandardSQLFunction( "substrb", StandardBasicTypes.STRING ) );
|
|
||||||
registerFunction("translate", new StandardSQLFunction("translate", StandardBasicTypes.STRING) );
|
registerFunction("translate", new StandardSQLFunction("translate", StandardBasicTypes.STRING) );
|
||||||
registerFunction( "add_months", new StandardSQLFunction( "add_months", StandardBasicTypes.DATE ) );
|
|
||||||
registerFunction( "months_between", new StandardSQLFunction( "months_between", StandardBasicTypes.FLOAT ) );
|
|
||||||
|
|
||||||
registerFunction( "current_date", new NoArgSQLFunction( "current_date", StandardBasicTypes.DATE, false ) );
|
registerFunction("add_months", new StandardSQLFunction("add_months", StandardBasicTypes.DATE) );
|
||||||
registerFunction( "current_time", new NoArgSQLFunction( "current_time", StandardBasicTypes.TIME, false ) );
|
|
||||||
registerFunction(
|
|
||||||
"current_timestamp",
|
|
||||||
new NoArgSQLFunction( "current_timestamp", StandardBasicTypes.TIMESTAMP, false )
|
|
||||||
);
|
|
||||||
registerFunction( "sysdate", new NoArgSQLFunction( "sysdate", StandardBasicTypes.DATE, false ) );
|
|
||||||
registerFunction( "systime", new NoArgSQLFunction( "systime", StandardBasicTypes.TIME, false ) );
|
|
||||||
registerFunction( "systimestamp", new NoArgSQLFunction( "systimestamp", StandardBasicTypes.TIMESTAMP, false ) );
|
|
||||||
registerFunction("user", new NoArgSQLFunction("user", StandardBasicTypes.STRING, false) );
|
registerFunction("user", new NoArgSQLFunction("user", StandardBasicTypes.STRING, false) );
|
||||||
registerFunction("rownum", new NoArgSQLFunction("rownum", StandardBasicTypes.LONG, false) );
|
registerFunction("rownum", new NoArgSQLFunction("rownum", StandardBasicTypes.LONG, false) );
|
||||||
registerFunction("concat", new VarArgsSQLFunction(StandardBasicTypes.STRING, "", "||", ""));
|
registerFunction("concat", new VarArgsSQLFunction(StandardBasicTypes.STRING, "", "||", ""));
|
||||||
|
|
||||||
|
registerKeyword( "TYPE" );
|
||||||
|
registerKeyword( "YEAR" );
|
||||||
|
registerKeyword( "MONTH" );
|
||||||
|
registerKeyword( "ALIAS" );
|
||||||
|
registerKeyword( "VALUE" );
|
||||||
|
registerKeyword( "FIRST" );
|
||||||
|
registerKeyword( "ROLE" );
|
||||||
|
registerKeyword( "CLASS" );
|
||||||
|
registerKeyword( "BIT" );
|
||||||
|
registerKeyword( "TIME" );
|
||||||
|
registerKeyword( "QUERY" );
|
||||||
|
registerKeyword( "DATE" );
|
||||||
|
registerKeyword( "USER" );
|
||||||
|
registerKeyword( "ACTION" );
|
||||||
|
registerKeyword( "SYS_USER" );
|
||||||
|
registerKeyword( "ZONE" );
|
||||||
|
registerKeyword( "LANGUAGE" );
|
||||||
|
registerKeyword( "DICTIONARY" );
|
||||||
|
registerKeyword( "DATA" );
|
||||||
|
registerKeyword( "TEST" );
|
||||||
|
registerKeyword( "SUPERCLASS" );
|
||||||
|
registerKeyword( "SECTION" );
|
||||||
|
registerKeyword( "LOWER" );
|
||||||
|
registerKeyword( "LIST" );
|
||||||
|
registerKeyword( "OID" );
|
||||||
|
registerKeyword( "DAY" );
|
||||||
|
registerKeyword( "IF" );
|
||||||
|
registerKeyword( "ATTRIBUTE" );
|
||||||
|
registerKeyword( "STRING" );
|
||||||
|
registerKeyword( "SEARCH" );
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public boolean supportsIdentityColumns() {
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
public String getIdentityInsertString() {
|
||||||
|
return "NULL";
|
||||||
|
}
|
||||||
|
|
||||||
|
public boolean supportsColumnCheck() {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
public boolean supportsPooledSequences() {
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
public String getIdentitySelectString() {
|
||||||
|
return "select last_insert_id()";
|
||||||
|
}
|
||||||
|
|
||||||
|
protected String getIdentityColumnString() {
|
||||||
|
return "not null auto_increment"; //starts with 1, implicitly
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* CUBRID supports "ADD [COLUMN | ATTRIBUTE]"
|
||||||
|
*/
|
||||||
public String getAddColumnString() {
|
public String getAddColumnString() {
|
||||||
return "add";
|
return "add";
|
||||||
}
|
}
|
||||||
|
@ -143,50 +272,39 @@ public class CUBRIDDialect extends Dialect {
|
||||||
return "drop serial " + sequenceName;
|
return "drop serial " + sequenceName;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public String getDropForeignKeyString() {
|
||||||
|
return " drop foreign key ";
|
||||||
|
}
|
||||||
|
|
||||||
|
public boolean qualifyIndexName() {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
public boolean supportsSequences() {
|
public boolean supportsSequences() {
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public boolean supportsExistsInSelect() {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
public String getQuerySequencesString() {
|
public String getQuerySequencesString() {
|
||||||
return "select name from db_serial";
|
return "select name from db_serial";
|
||||||
}
|
}
|
||||||
|
|
||||||
public boolean dropConstraints() {
|
/**
|
||||||
return false;
|
* The character specific to this dialect used to close a quoted identifier.
|
||||||
}
|
* CUBRID supports square brackets (MSSQL style), backticks (MySQL style),
|
||||||
|
* as well as double quotes (Oracle style).
|
||||||
public boolean supportsLimit() {
|
*
|
||||||
return true;
|
* @return The dialect's specific open quote character.
|
||||||
}
|
*/
|
||||||
|
|
||||||
public String getLimitString(String sql, boolean hasOffset) {
|
|
||||||
// CUBRID 8.3.0 support limit
|
|
||||||
return new StringBuilder( sql.length() + 20 ).append( sql )
|
|
||||||
.append( hasOffset ? " limit ?, ?" : " limit ?" ).toString();
|
|
||||||
}
|
|
||||||
|
|
||||||
public boolean bindLimitParametersInReverseOrder() {
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
public boolean useMaxForLimit() {
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
public boolean forUpdateOfColumns() {
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
public char closeQuote() {
|
|
||||||
return ']';
|
|
||||||
}
|
|
||||||
|
|
||||||
public char openQuote() {
|
public char openQuote() {
|
||||||
return '[';
|
return '[';
|
||||||
}
|
}
|
||||||
|
|
||||||
public boolean hasAlterTable() {
|
public char closeQuote() {
|
||||||
return false;
|
return ']';
|
||||||
}
|
}
|
||||||
|
|
||||||
public String getForUpdateString() {
|
public String getForUpdateString() {
|
||||||
|
@ -197,23 +315,31 @@ public class CUBRIDDialect extends Dialect {
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
public boolean supportsCommentOn() {
|
|
||||||
return false;
|
|
||||||
}
|
|
||||||
|
|
||||||
public boolean supportsTemporaryTables() {
|
|
||||||
return false;
|
|
||||||
}
|
|
||||||
|
|
||||||
public boolean supportsCurrentTimestampSelection() {
|
public boolean supportsCurrentTimestampSelection() {
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
public String getCurrentTimestampSelectString() {
|
public String getCurrentTimestampSelectString() {
|
||||||
return "select systimestamp from table({1}) as T(X)";
|
return "select now()";
|
||||||
}
|
}
|
||||||
|
|
||||||
public boolean isCurrentTimestampSelectStringCallable() {
|
public boolean isCurrentTimestampSelectStringCallable() {
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public boolean supportsEmptyInList() {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
public boolean supportsIfExistsBeforeTableName() {
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
public boolean supportsTupleDistinctCounts() {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
public LimitHandler buildLimitHandler(String sql, RowSelection selection) {
|
||||||
|
return new CUBRIDLimitHandler( this, sql, selection );
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
|
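The keyword list and the openQuote()/closeQuote() pair above work together: registering TYPE, DATE, and the rest tells Hibernate which identifiers need quoting, and the quote characters say how. A minimal sketch of the effect, assuming a CUBRIDDialect instance; quoteIdentifier is a hypothetical helper, not Hibernate API:

    // Hypothetical helper showing how the two callbacks compose.
    // CUBRIDDialect returns '[' and ']', so "date" becomes "[date]".
    static String quoteIdentifier(org.hibernate.dialect.Dialect dialect, String name) {
        return dialect.openQuote() + name + dialect.closeQuote();
    }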
@@ -159,7 +159,7 @@ public class DB2Dialect extends Dialect {
 
 		registerFunction( "substring", new StandardSQLFunction( "substr", StandardBasicTypes.STRING ) );
 		registerFunction( "bit_length", new SQLFunctionTemplate( StandardBasicTypes.INTEGER, "length(?1)*8" ) );
-		registerFunction( "trim", new AnsiTrimEmulationFunction() );
+		registerFunction( "trim", new SQLFunctionTemplate( StandardBasicTypes.STRING, "trim(?1 ?2 ?3 ?4)" ) );
 
 		registerFunction( "concat", new VarArgsSQLFunction( StandardBasicTypes.STRING, "", "||", "" ) );
 
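SQLFunctionTemplate renders a registered function by substituting positional placeholders (?1, ?2, ...) with the translated HQL arguments, so the template above maps the four parts of ANSI TRIM (the LEADING/TRAILING/BOTH specification, the trim character, FROM, and the source expression) straight onto DB2's native trim syntax. A declaration-level sketch, assuming the same imports as the dialect:

    // trim(BOTH 'x' FROM col) passes through positionally instead of being
    // rewritten by AnsiTrimEmulationFunction.
    SQLFunction trim = new SQLFunctionTemplate( StandardBasicTypes.STRING, "trim(?1 ?2 ?3 ?4)" );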
@@ -39,8 +39,6 @@ import java.util.Map;
 import java.util.Properties;
 import java.util.Set;
 
-import org.jboss.logging.Logger;
-
 import org.hibernate.HibernateException;
 import org.hibernate.LockMode;
 import org.hibernate.LockOptions;

@@ -83,6 +81,11 @@ import org.hibernate.metamodel.spi.relational.Sequence;
 import org.hibernate.metamodel.spi.relational.Table;
 import org.hibernate.metamodel.spi.relational.UniqueKey;
 import org.hibernate.persister.entity.Lockable;
+import org.hibernate.sql.ANSICaseFragment;
+import org.hibernate.sql.ANSIJoinFragment;
+import org.hibernate.sql.CaseFragment;
+import org.hibernate.sql.ForUpdateFragment;
+import org.hibernate.sql.JoinFragment;
 import org.hibernate.tool.schema.internal.StandardAuxiliaryDatabaseObjectExporter;
 import org.hibernate.tool.schema.internal.StandardForeignKeyExporter;
 import org.hibernate.tool.schema.internal.StandardIndexExporter;

@@ -90,15 +93,10 @@ import org.hibernate.tool.schema.internal.StandardSequenceExporter;
 import org.hibernate.tool.schema.internal.StandardTableExporter;
 import org.hibernate.tool.schema.internal.StandardUniqueKeyExporter;
 import org.hibernate.tool.schema.spi.Exporter;
-import org.hibernate.sql.ANSICaseFragment;
-import org.hibernate.sql.ANSIJoinFragment;
-import org.hibernate.sql.CaseFragment;
-import org.hibernate.sql.ForUpdateFragment;
-import org.hibernate.sql.JoinFragment;
 import org.hibernate.type.StandardBasicTypes;
-import org.hibernate.type.descriptor.sql.BlobTypeDescriptor;
 import org.hibernate.type.descriptor.sql.ClobTypeDescriptor;
 import org.hibernate.type.descriptor.sql.SqlTypeDescriptor;
+import org.jboss.logging.Logger;
 
 /**
  * Represents a dialect of SQL implemented by a particular RDBMS.

@@ -327,6 +325,23 @@ public abstract class Dialect implements ConversionContext {
 		return getTypeName( code, Column.DEFAULT_LENGTH, Column.DEFAULT_PRECISION, Column.DEFAULT_SCALE );
 	}
 
+	public String cast(String value, int jdbcTypeCode, int length, int precision, int scale) {
+		if ( jdbcTypeCode == Types.CHAR ) {
+			return "cast(" + value + " as char(" + length + "))";
+		}
+		else {
+			return "cast(" + value + "as " + getTypeName( jdbcTypeCode, length, precision, scale ) + ")";
+		}
+	}
+
+	public String cast(String value, int jdbcTypeCode, int length) {
+		return cast( value, jdbcTypeCode, length, Column.DEFAULT_PRECISION, Column.DEFAULT_SCALE );
+	}
+
+	public String cast(String value, int jdbcTypeCode, int precision, int scale) {
+		return cast( value, jdbcTypeCode, Column.DEFAULT_LENGTH, precision, scale );
+	}
+
 /**
  * Subclasses register a type name for the given type code and maximum
  * column length. <tt>$l</tt> in the type name with be replaced by the

@@ -391,10 +406,6 @@ public abstract class Dialect implements ConversionContext {
 	protected SqlTypeDescriptor getSqlTypeDescriptorOverride(int sqlCode) {
 		SqlTypeDescriptor descriptor;
 		switch ( sqlCode ) {
-			case Types.BLOB: {
-				descriptor = useInputStreamToInsertBlob() ? BlobTypeDescriptor.STREAM_BINDING : null;
-				break;
-			}
 			case Types.CLOB: {
 				descriptor = useInputStreamToInsertBlob() ? ClobTypeDescriptor.STREAM_BINDING : null;
 				break;

@@ -617,7 +628,9 @@ public abstract class Dialect implements ConversionContext {
 	// function support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 	protected void registerFunction(String name, SQLFunction function) {
-		sqlFunctions.put( name, function );
+		// HHH-7721: SQLFunctionRegistry expects all lowercase. Enforce,
+		// just in case a user's customer dialect uses mixed cases.
+		sqlFunctions.put( name.toLowerCase(), function );
 	}
 
 	/**

@@ -2419,4 +2432,15 @@ public abstract class Dialect implements ConversionContext {
 	public int getInExpressionCountLimit() {
 		return 0;
 	}
+
+	/**
+	 * HHH-4635
+	 * Oracle expects all Lob values to be last in inserts and updates.
+	 *
+	 * @return boolean True of Lob values should be last, false if it
+	 * does not matter.
+	 */
+	public boolean forceLobAsLastValue() {
+		return false;
+	}
 }
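The cast(...) overloads let query translation build a SQL CAST fragment from a JDBC type code, resolving the target type name through the dialect's own registrations via getTypeName(...). A sketch of the CHAR branch's output, assuming any concrete Dialect instance:

    // Types.CHAR takes the explicit length form:
    // dialect.cast( "?", java.sql.Types.CHAR, 10 ) -> "cast(? as char(10))"
    String fragment = dialect.cast( "?", java.sql.Types.CHAR, 10 );

The lowercase normalization in registerFunction(...) is the other half of HHH-7721: SQLFunctionRegistry (further below) looks functions up with toLowerCase(), so a dialect that registered a mixed-case name would otherwise never be found.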
@@ -27,8 +27,6 @@ import java.io.Serializable;
 import java.sql.SQLException;
 import java.sql.Types;
 
-import org.jboss.logging.Logger;
-
 import org.hibernate.JDBCException;
 import org.hibernate.LockMode;
 import org.hibernate.StaleObjectStateException;

@@ -53,6 +51,7 @@ import org.hibernate.internal.util.JdbcExceptionHelper;
 import org.hibernate.internal.util.ReflectHelper;
 import org.hibernate.persister.entity.Lockable;
 import org.hibernate.type.StandardBasicTypes;
+import org.jboss.logging.Logger;
 
 /**
  * An SQL dialect compatible with HSQLDB (HyperSQL).

@@ -123,8 +122,8 @@ public class HSQLDialect extends Dialect {
 			registerColumnType( Types.CLOB, "longvarchar" );
 		}
 		else {
-			registerColumnType( Types.BLOB, "blob" );
+			registerColumnType( Types.BLOB, "blob($l)" );
-			registerColumnType( Types.CLOB, "clob" );
+			registerColumnType( Types.CLOB, "clob($l)" );
 		}
 
 		// aggregate functions

@@ -244,8 +243,13 @@ public class HSQLDialect extends Dialect {
 	}
 
 	public String getForUpdateString() {
+		if ( hsqldbVersion >= 20 ) {
+			return " for update";
+		}
+		else {
 			return "";
 		}
+	}
 
 	public boolean supportsUnique() {
 		return false;
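The new guard keys off the hsqldbVersion field the dialect derives from the driver: HSQLDB 2.x understands FOR UPDATE, while on 1.8 the dialect still has to return the empty string. A sketch of the resulting SQL, assuming a 2.x dialect instance and an illustrative query:

    // "select ..." + dialect.getForUpdateString() -> "select ... for update"
    String sql = "select id, name from Item where id = ?" + dialect.getForUpdateString();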
@@ -123,6 +123,7 @@ public class Oracle8iDialect extends Dialect {
 		registerFunction( "acos", new StandardSQLFunction("acos", StandardBasicTypes.DOUBLE) );
 		registerFunction( "asin", new StandardSQLFunction("asin", StandardBasicTypes.DOUBLE) );
 		registerFunction( "atan", new StandardSQLFunction("atan", StandardBasicTypes.DOUBLE) );
+		registerFunction( "bitand", new StandardSQLFunction("bitand") );
 		registerFunction( "cos", new StandardSQLFunction("cos", StandardBasicTypes.DOUBLE) );
 		registerFunction( "cosh", new StandardSQLFunction("cosh", StandardBasicTypes.DOUBLE) );
 		registerFunction( "exp", new StandardSQLFunction("exp", StandardBasicTypes.DOUBLE) );

@@ -571,4 +572,9 @@ public class Oracle8iDialect extends Dialect {
 		return PARAM_LIST_SIZE_LIMIT;
 	}
 
+	@Override
+	public boolean forceLobAsLastValue() {
+		return true;
+	}
+
 }
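forceLobAsLastValue() gives persisters a hook to reorder generated INSERT/UPDATE column lists so LOB columns come last, the workaround HHH-4635 calls for on Oracle. A conceptual sketch, with hypothetical names and assuming java.util imports, of the kind of reordering pass a caller could apply; this is not the persister's actual code:

    // Hypothetical helper: LOB columns sink to the end of the column list.
    static List<String> lobsLast(List<String> columns, Set<String> lobColumns) {
        List<String> result = new ArrayList<String>();
        for ( String c : columns ) if ( !lobColumns.contains( c ) ) result.add( c );
        for ( String c : columns ) if ( lobColumns.contains( c ) ) result.add( c );
        return result;
    }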
@@ -163,6 +163,10 @@ public class PostgreSQL81Dialect extends Dialect {
 		SqlTypeDescriptor descriptor;
 		switch ( sqlCode ) {
 			case Types.BLOB: {
+				// Force BLOB binding. Otherwise, byte[] fields annotated
+				// with @Lob will attempt to use
+				// BlobTypeDescriptor.PRIMITIVE_ARRAY_BINDING. Since the
+				// dialect uses oid for Blobs, byte arrays cannot be used.
 				descriptor = BlobTypeDescriptor.BLOB_BINDING;
 				break;
 			}

@@ -462,4 +466,14 @@ public class PostgreSQL81Dialect extends Dialect {
 	public boolean supportsRowValueConstructorSyntax() {
 		return true;
 	}
+
+	@Override
+	public String getForUpdateNowaitString() {
+		return getForUpdateString() + " nowait ";
+	}
+
+	@Override
+	public String getForUpdateNowaitString(String aliases) {
+		return getForUpdateString(aliases) + " nowait ";
+	}
 }
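The nowait variants only decorate the locking clause; whether they are used is decided by the LockOptions on the query. A usage sketch, assuming an open Session and a mapped Document entity (illustrative names):

    // LockOptions.UPGRADE_NOWAIT maps to "for update nowait" on this dialect:
    Document doc = (Document) session.get( Document.class, id, LockOptions.UPGRADE_NOWAIT );
    // emitted SQL ends with: ... for update nowait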
@@ -23,6 +23,11 @@
  */
 package org.hibernate.dialect;
 
+import java.sql.Types;
+
+import org.hibernate.type.descriptor.sql.BlobTypeDescriptor;
+import org.hibernate.type.descriptor.sql.SqlTypeDescriptor;
+
 
 /**
  * All Sybase dialects share an IN list size limit.

@@ -40,4 +45,9 @@ public class SybaseDialect extends AbstractTransactSQLDialect {
 	public int getInExpressionCountLimit() {
 		return PARAM_LIST_SIZE_LIMIT;
 	}
+
+	@Override
+	protected SqlTypeDescriptor getSqlTypeDescriptorOverride(int sqlCode) {
+		return sqlCode == Types.BLOB ? BlobTypeDescriptor.PRIMITIVE_ARRAY_BINDING : super.getSqlTypeDescriptorOverride( sqlCode );
+	}
 }
@@ -38,7 +38,6 @@ public class SQLFunctionRegistry {
 	}
 
 	public SQLFunction findSQLFunction(String functionName) {
-		// TODO: lower casing done here. Was done "at random" before; maybe not needed at all ?
 		String name = functionName.toLowerCase();
 		SQLFunction userFunction = userFunctions.get( name );
 		return userFunction != null

@@ -47,7 +46,6 @@ public class SQLFunctionRegistry {
 	}
 
 	public boolean hasFunction(String functionName) {
-		// TODO: toLowerCase was not done before. Only used in Template.
 		String name = functionName.toLowerCase();
 		return userFunctions.containsKey( name ) || dialect.getFunctions().containsKey( name );
 	}
@@ -0,0 +1,37 @@
+package org.hibernate.dialect.pagination;
+
+import org.hibernate.dialect.Dialect;
+import org.hibernate.engine.spi.RowSelection;
+
+/**
+ * Limit handler that delegates all operations to the underlying dialect.
+ *
+ * @author Esen Sagynov (kadishmal at gmail dot com)
+ */
+public class CUBRIDLimitHandler extends AbstractLimitHandler {
+	private final Dialect dialect;
+
+	public CUBRIDLimitHandler(Dialect dialect, String sql, RowSelection selection) {
+		super( sql, selection );
+		this.dialect = dialect;
+	}
+
+	public boolean supportsLimit() {
+		return true;
+	}
+
+	public String getProcessedSql() {
+		if (LimitHelper.useLimit(this, selection)) {
+			// useLimitOffset: whether "offset" is set or not;
+			// if set, use "LIMIT offset, row_count" syntax;
+			// if not, use "LIMIT row_count"
+			boolean useLimitOffset = LimitHelper.hasFirstRow(selection);
+
+			return new StringBuilder(sql.length() + 20).append(sql)
+					.append(useLimitOffset ? " limit ?, ?" : " limit ?").toString();
+		}
+		else {
+			return sql; // or return unaltered SQL
+		}
+	}
+}
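The handler appends CUBRID's comma form of LIMIT: "limit ?, ?" when a first row is requested, plain "limit ?" otherwise, and leaves the SQL untouched when no row selection applies. A usage sketch, assuming an open Session and a mapped Item entity (illustrative names); setFirstResult is what makes LimitHelper.hasFirstRow(selection) true:

    List items = session.createQuery( "from Item order by id" )
            .setFirstResult( 40 )  // offset, bound to the first "?" of "limit ?, ?"
            .setMaxResults( 20 )   // row count, bound to the second "?"
            .list();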
@@ -252,9 +252,12 @@ public final class Cascade {
 			loadedValue = null;
 		}
 		if ( loadedValue != null ) {
-			final String entityName = entry.getPersister().getEntityName();
+			final EntityEntry valueEntry = eventSource
+					.getPersistenceContext().getEntry(
+							loadedValue );
+			final String entityName = valueEntry.getPersister().getEntityName();
 			if ( LOG.isTraceEnabled() ) {
-				final Serializable id = entry.getPersister().getIdentifier( loadedValue, eventSource );
+				final Serializable id = valueEntry.getPersister().getIdentifier( loadedValue, eventSource );
 				final String description = MessageHelper.infoString( entityName, id );
 				LOG.tracev( "Deleting orphaned entity instance: {0}", description );
 			}
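Resolving the loaded value's own EntityEntry from the persistence context matters when the association's declared type and the orphan's actual entity differ: the delete must be described by the orphan's persister, not the parent's. A sketch of the lookup the fix relies on, with illustrative variable names:

    // The child's entry, not the parent's, describes the orphan to delete:
    EntityEntry valueEntry = eventSource.getPersistenceContext().getEntry( loadedChild );
    String entityName = valueEntry.getPersister().getEntityName();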
@@ -25,8 +25,6 @@ package org.hibernate.engine.internal;
 
 import java.io.Serializable;
 
-import org.jboss.logging.Logger;
-
 import org.hibernate.AssertionFailure;
 import org.hibernate.HibernateException;
 import org.hibernate.collection.spi.PersistentCollection;

@@ -41,6 +39,7 @@ import org.hibernate.internal.CoreMessageLogger;
 import org.hibernate.persister.collection.CollectionPersister;
 import org.hibernate.pretty.MessageHelper;
 import org.hibernate.type.CollectionType;
+import org.jboss.logging.Logger;
 
 /**
  * Implements book-keeping for the collection persistence by reachability algorithm

@@ -76,10 +75,8 @@ public final class Collections {
 		if ( LOG.isDebugEnabled() && loadedPersister != null ) {
 			LOG.debugf(
 					"Collection dereferenced: %s",
-					MessageHelper.collectionInfoString(
+					MessageHelper.collectionInfoString( loadedPersister,
-							loadedPersister,
+							coll, entry.getLoadedKey(), session
-							entry.getLoadedKey(),
-							session.getFactory()
 					)
 			);
 		}

@@ -135,7 +132,9 @@ public final class Collections {
 
 		if ( LOG.isDebugEnabled() ) {
 			LOG.debugf( "Found collection with unloaded owner: %s",
-					MessageHelper.collectionInfoString( entry.getLoadedPersister(), entry.getLoadedKey(), session.getFactory() ) );
+					MessageHelper.collectionInfoString(
+							entry.getLoadedPersister(), coll,
+							entry.getLoadedKey(), session ) );
 		}
 
 		entry.setCurrentPersister( entry.getLoadedPersister() );

@@ -189,13 +188,13 @@ public final class Collections {
 
 		if (LOG.isDebugEnabled()) {
 			if (collection.wasInitialized()) LOG.debugf("Collection found: %s, was: %s (initialized)",
-					MessageHelper.collectionInfoString(persister, ce.getCurrentKey(), factory),
+					MessageHelper.collectionInfoString(persister, collection, ce.getCurrentKey(), session),
-					MessageHelper.collectionInfoString(ce.getLoadedPersister(),
+					MessageHelper.collectionInfoString(ce.getLoadedPersister(), collection,
							ce.getLoadedKey(),
-							factory));
+							session));
 			else LOG.debugf("Collection found: %s, was: %s (uninitialized)",
-					MessageHelper.collectionInfoString(persister, ce.getCurrentKey(), factory),
+					MessageHelper.collectionInfoString(persister, collection, ce.getCurrentKey(), session),
-					MessageHelper.collectionInfoString(ce.getLoadedPersister(), ce.getLoadedKey(), factory));
+					MessageHelper.collectionInfoString(ce.getLoadedPersister(), collection, ce.getLoadedKey(), session));
 		}
 
 		prepareCollectionForUpdate( collection, ce, factory );
@@ -71,31 +71,46 @@ public final class JoinHelper {
 	 * be used in the join
 	 */
 	public static String[] getAliasedLHSColumnNames(
-			AssociationType type,
+			AssociationType associationType,
-			String alias,
+			String columnQualifier,
-			int property,
+			int propertyIndex,
 			int begin,
 			OuterJoinLoadable lhsPersister,
-			Mapping mapping
+			Mapping mapping) {
-	) {
+		if ( associationType.useLHSPrimaryKey() ) {
-		if ( type.useLHSPrimaryKey() ) {
+			return StringHelper.qualify( columnQualifier, lhsPersister.getIdentifierColumnNames() );
-			return StringHelper.qualify( alias, lhsPersister.getIdentifierColumnNames() );
 		}
 		else {
-			String propertyName = type.getLHSPropertyName();
+			String propertyName = associationType.getLHSPropertyName();
 			if ( propertyName == null ) {
 				return ArrayHelper.slice(
-						lhsPersister.toColumns(alias, property),
+						toColumns( lhsPersister, columnQualifier, propertyIndex ),
 						begin,
-						type.getColumnSpan(mapping)
+						associationType.getColumnSpan( mapping )
 				);
 			}
 			else {
-				return ( (PropertyMapping) lhsPersister ).toColumns(alias, propertyName); //bad cast
+				return ( (PropertyMapping) lhsPersister ).toColumns(columnQualifier, propertyName); //bad cast
 			}
 		}
 	}
 
+	private static String[] toColumns(OuterJoinLoadable persister, String columnQualifier, int propertyIndex) {
+		if ( propertyIndex >= 0 ) {
+			return persister.toColumns( columnQualifier, propertyIndex );
+		}
+		else {
+			final String[] cols = persister.getIdentifierColumnNames();
+			final String[] result = new String[cols.length];
+
+			for ( int j = 0; j < cols.length; j++ ) {
+				result[j] = StringHelper.qualify( columnQualifier, cols[j] );
+			}
+
+			return result;
+		}
+	}
 
 	/**
 	 * Get the columns of the owning entity which are to
 	 * be used in the join

@@ -117,7 +132,9 @@ public final class JoinHelper {
 			//slice, to get the columns for this component
 			//property
 			return ArrayHelper.slice(
-					lhsPersister.getSubclassPropertyColumnNames(property),
+					property < 0
+							? lhsPersister.getIdentifierColumnNames()
+							: lhsPersister.getSubclassPropertyColumnNames(property),
 					begin,
 					type.getColumnSpan(mapping)
 			);

@@ -132,10 +149,9 @@ public final class JoinHelper {
 
 	public static String getLHSTableName(
 			AssociationType type,
-			int property,
+			int propertyIndex,
-			OuterJoinLoadable lhsPersister
+			OuterJoinLoadable lhsPersister) {
-	) {
+		if ( type.useLHSPrimaryKey() || propertyIndex < 0 ) {
-		if ( type.useLHSPrimaryKey() ) {
 			return lhsPersister.getTableName();
 		}
 		else {

@@ -144,7 +160,7 @@ public final class JoinHelper {
 				//if there is no property-ref, assume the join
 				//is to the subclass table (ie. the table of the
 				//subclass that the association belongs to)
-				return lhsPersister.getSubclassPropertyTableName(property);
+				return lhsPersister.getSubclassPropertyTableName(propertyIndex);
 			}
 			else {

@@ -157,7 +173,7 @@ public final class JoinHelper {
 				//assumes that the property-ref refers to a property of the subclass
 				//table that the association belongs to (a reasonable guess)
 				//TODO: fix this, add: OuterJoinLoadable.getSubclassPropertyTableName(String propertyName)
-				propertyRefTable = lhsPersister.getSubclassPropertyTableName(property);
+				propertyRefTable = lhsPersister.getSubclassPropertyTableName(propertyIndex);
 			}
 			return propertyRefTable;
 	}
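Throughout JoinHelper a negative propertyIndex now stands for the identifier itself: toColumns(...) falls back to the qualified identifier column names, and getLHSTableName(...) short-circuits to the primary table. The fallback leans on StringHelper.qualify; a sketch of that step with illustrative values:

    // qualify prefixes each column with the qualifier:
    // qualify( "this_", { "id" } ) -> { "this_.id" }
    String[] qualified = StringHelper.qualify( "this_", new String[] { "id" } );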
@@ -93,6 +93,8 @@ public class StatefulPersistenceContext implements PersistenceContext {
 
 	private static final CoreMessageLogger LOG = Logger.getMessageLogger( CoreMessageLogger.class, StatefulPersistenceContext.class.getName() );
 
+	private static final boolean tracing = LOG.isTraceEnabled();
+
 	public static final Object NO_ROW = new MarkerObject( "NO_ROW" );
 
 	private static final int INIT_COLL_SIZE = 8;

@@ -893,6 +895,9 @@ public class StatefulPersistenceContext implements PersistenceContext {
 	public void addUninitializedCollection(CollectionPersister persister, PersistentCollection collection, Serializable id) {
 		CollectionEntry ce = new CollectionEntry(collection, persister, id, flushing);
 		addCollection(collection, ce, id);
+		if ( persister.getBatchSize() > 1 ) {
+			getBatchFetchQueue().addBatchLoadableCollection( collection, ce );
+		}
 	}
 
 	/**

@@ -902,6 +907,9 @@ public class StatefulPersistenceContext implements PersistenceContext {
 	public void addUninitializedDetachedCollection(CollectionPersister persister, PersistentCollection collection) {
 		CollectionEntry ce = new CollectionEntry( persister, collection.getKey() );
 		addCollection( collection, ce, collection.getKey() );
+		if ( persister.getBatchSize() > 1 ) {
+			getBatchFetchQueue().addBatchLoadableCollection( collection, ce );
+		}
 	}
 
 	/**

@@ -1003,7 +1011,9 @@ public class StatefulPersistenceContext implements PersistenceContext {
 	@Override
 	public void initializeNonLazyCollections() throws HibernateException {
 		if ( loadCounter == 0 ) {
-			LOG.debug( "Initializing non-lazy collections" );
+			if (tracing)
+				LOG.trace( "Initializing non-lazy collections" );
+
 			//do this work only at the very highest level of the load
 			loadCounter++; //don't let this method be called recursively
 			try {

@@ -1861,14 +1871,14 @@ public class StatefulPersistenceContext implements PersistenceContext {
 			CachedNaturalIdValueSource source) {
 		final NaturalIdRegionAccessStrategy naturalIdCacheAccessStrategy = persister.getNaturalIdCacheAccessStrategy();
 		final NaturalIdCacheKey naturalIdCacheKey = new NaturalIdCacheKey( naturalIdValues, persister, session );
-		if (naturalIdCacheAccessStrategy.get(naturalIdCacheKey, session.getTimestamp()) != null) {
-			return; // prevent identical re-cachings
-		}
 
 		final SessionFactoryImplementor factory = session.getFactory();
 
 		switch ( source ) {
 			case LOAD: {
+				if (naturalIdCacheAccessStrategy.get(naturalIdCacheKey, session.getTimestamp()) != null) {
+					return; // prevent identical re-cachings
+				}
 				final boolean put = naturalIdCacheAccessStrategy.putFromLoad(
 						naturalIdCacheKey,
 						id,

@@ -1915,6 +1925,9 @@ public class StatefulPersistenceContext implements PersistenceContext {
 			}
 			case UPDATE: {
 				final NaturalIdCacheKey previousCacheKey = new NaturalIdCacheKey( previousNaturalIdValues, persister, session );
+				if (naturalIdCacheKey.equals(previousCacheKey)) {
+					return; // prevent identical re-caching, solves HHH-7309
+				}
 				final SoftLock removalLock = naturalIdCacheAccessStrategy.lockItem( previousCacheKey, null );
 				naturalIdCacheAccessStrategy.remove( previousCacheKey );
 

@@ -2078,6 +2091,15 @@ public class StatefulPersistenceContext implements PersistenceContext {
 		public void cleanupFromSynchronizations() {
 			naturalIdXrefDelegate.unStashInvalidNaturalIdReferences();
 		}
+
+		@Override
+		public void handleEviction(Object object, EntityPersister persister, Serializable identifier) {
+			naturalIdXrefDelegate.removeNaturalIdCrossReference(
+					persister,
+					identifier,
+					findCachedNaturalId( persister, identifier )
+			);
+		}
 	};
 
 	@Override
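Registering each collection with the BatchFetchQueue as soon as it enters the persistence context is what lets a mapped batch size initialize several pending collections with one query. A mapping sketch under the assumption of a one-to-many association; the entity names are illustrative:

    // With batch-size 16, initializing one order's items also lets Hibernate
    // load up to 15 other registered, still-uninitialized items collections
    // in the same select.
    @OneToMany(mappedBy = "order")
    @org.hibernate.annotations.BatchSize(size = 16)
    private java.util.Set<Item> items;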
@@ -281,19 +281,6 @@ public final class TwoPhaseLoad {
 				session
 		);
 
-		if ( session.isEventSource() ) {
-			postLoadEvent.setEntity( entity ).setId( id ).setPersister( persister );
-
-			final EventListenerGroup<PostLoadEventListener> listenerGroup = session
-					.getFactory()
-					.getServiceRegistry()
-					.getService( EventListenerRegistry.class )
-					.getEventListenerGroup( EventType.POST_LOAD );
-			for ( PostLoadEventListener listener : listenerGroup.listeners() ) {
-				listener.onPostLoad( postLoadEvent );
-			}
-		}
-
 		if ( LOG.isDebugEnabled() ) {
 			LOG.debugf(
 					"Done materializing entity %s",

@@ -306,6 +293,45 @@ public final class TwoPhaseLoad {
 		}
 	}
 
+	/**
+	 * PostLoad cannot occur during initializeEntity, as that call occurs *before*
+	 * the Set collections are added to the persistence context by Loader.
+	 * Without the split, LazyInitializationExceptions can occur in the Entity's
+	 * postLoad if it acts upon the collection.
+	 *
+	 *
+	 * HHH-6043
+	 *
+	 * @param entity
+	 * @param session
+	 * @param postLoadEvent
+	 */
+	public static void postLoad(
+			final Object entity,
+			final SessionImplementor session,
+			final PostLoadEvent postLoadEvent) {
+
+		if ( session.isEventSource() ) {
+			final PersistenceContext persistenceContext
+					= session.getPersistenceContext();
+			final EntityEntry entityEntry = persistenceContext.getEntry(entity);
+			final Serializable id = entityEntry.getId();
+
+			postLoadEvent.setEntity( entity ).setId( entityEntry.getId() )
+					.setPersister( entityEntry.getPersister() );
+
+			final EventListenerGroup<PostLoadEventListener> listenerGroup
+					= session
+							.getFactory()
+							.getServiceRegistry()
+							.getService( EventListenerRegistry.class )
+							.getEventListenerGroup( EventType.POST_LOAD );
+			for ( PostLoadEventListener listener : listenerGroup.listeners() ) {
+				listener.onPostLoad( postLoadEvent );
+			}
+		}
+	}
+
 	private static boolean useMinimalPuts(SessionImplementor session, EntityEntry entityEntry) {
 		return ( session.getFactory().getServiceRegistry().getService( RegionFactory.class ).isMinimalPutsEnabled() &&
 				session.getCacheMode()!=CacheMode.REFRESH ) ||
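Because Loader now calls TwoPhaseLoad.postLoad(...) as a separate pass after collections are registered, a POST_LOAD listener can safely touch a collection. A listener sketch relying on that ordering; Order and its items set are illustrative:

    public class AuditListener implements PostLoadEventListener {
        @Override
        public void onPostLoad(PostLoadEvent event) {
            Object entity = event.getEntity();
            if ( entity instanceof Order ) {
                // Safe after the split: the collection is already in the
                // persistence context, so this cannot throw a
                // LazyInitializationException mid-load.
                ( (Order) entity ).getItems().size();
            }
        }
    }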
@@ -31,16 +31,12 @@ import java.sql.Clob;
  * @author Steve Ebersole
  */
 public abstract class AbstractLobCreator implements LobCreator {
-	/**
+	@Override
-	 * {@inheritDoc}
-	 */
 	public Blob wrap(Blob blob) {
 		return SerializableBlobProxy.generateProxy( blob );
 	}
 
-	/**
+	@Override
-	 * {@inheritDoc}
-	 */
 	public Clob wrap(Clob clob) {
 		if ( SerializableNClobProxy.isNClob( clob ) ) {
 			return SerializableNClobProxy.generateProxy( clob );
@@ -21,7 +21,8 @@
  * 51 Franklin Street, Fifth Floor
  * Boston, MA 02110-1301 USA
  */
-package org.hibernate.type.descriptor;
+package org.hibernate.engine.jdbc;
 
 import java.io.InputStream;
 
 /**

@@ -49,5 +50,10 @@ public interface BinaryStream {
 	 *
 	 * @return The input stream length
 	 */
-	public int getLength();
+	public long getLength();
+
+	/**
+	 * Release any underlying resources.
+	 */
+	public void release();
 }
@@ -23,11 +23,16 @@
  */
 package org.hibernate.engine.jdbc;
 
-
 /**
 * Marker interface for non-contextually created {@link java.sql.Blob} instances..
 *
 * @author Steve Ebersole
 */
 public interface BlobImplementer {
+	/**
+	 * Gets access to the data underlying this BLOB.
+	 *
+	 * @return Access to the underlying data.
+	 */
+	public BinaryStream getUnderlyingStream();
 }
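getUnderlyingStream() gives binders a way back to the original bytes or stream without going through the JDBC Blob API; BlobProxy (below) answers the call straight from its BinaryStream. A consumer-side sketch of the contract, hedged as an illustration rather than any specific binder's code:

    // If the Blob is one of Hibernate's proxies, reuse its backing stream:
    if ( blob instanceof BlobImplementer ) {
        BinaryStream data = ( (BlobImplementer) blob ).getUnderlyingStream();
        long length = data.getLength();          // long now, per BinaryStream above
        java.io.InputStream in = data.getInputStream();
        // ... bind to the statement, then free any resources:
        data.release();
    }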
@ -22,6 +22,7 @@
|
||||||
* Boston, MA 02110-1301 USA
|
* Boston, MA 02110-1301 USA
|
||||||
*/
|
*/
|
||||||
package org.hibernate.engine.jdbc;
|
package org.hibernate.engine.jdbc;
|
||||||
|
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
import java.io.InputStream;
|
import java.io.InputStream;
|
||||||
import java.lang.reflect.InvocationHandler;
|
import java.lang.reflect.InvocationHandler;
|
||||||
|
@@ -30,12 +31,12 @@ import java.lang.reflect.Proxy;
 import java.sql.Blob;
 import java.sql.SQLException;
 
-import org.hibernate.type.descriptor.java.BinaryStreamImpl;
+import org.hibernate.engine.jdbc.internal.BinaryStreamImpl;
 import org.hibernate.type.descriptor.java.DataHelper;
 
 /**
- * Manages aspects of proxying {@link Blob Blobs} for non-contextual creation, including proxy creation and
- * handling proxy invocations.
+ * Manages aspects of proxying {@link Blob} references for non-contextual creation, including proxy creation and
+ * handling proxy invocations.  We use proxies here solely to avoid JDBC version incompatibilities.
  *
  * @author Gavin King
  * @author Steve Ebersole
@@ -44,8 +45,7 @@ import org.hibernate.type.descriptor.java.DataHelper;
 public class BlobProxy implements InvocationHandler {
     private static final Class[] PROXY_INTERFACES = new Class[] { Blob.class, BlobImplementer.class };
 
-    private InputStream stream;
-    private long length;
+    private BinaryStream binaryStream;
     private boolean needsReset = false;
 
     /**
@@ -55,8 +55,7 @@ public class BlobProxy implements InvocationHandler {
      * @see #generateProxy(byte[])
      */
     private BlobProxy(byte[] bytes) {
-        this.stream = new BinaryStreamImpl( bytes );
-        this.length = bytes.length;
+        binaryStream = new BinaryStreamImpl( bytes );
     }
 
     /**
|
@ -67,15 +66,15 @@ public class BlobProxy implements InvocationHandler {
|
||||||
* @see #generateProxy(java.io.InputStream, long)
|
* @see #generateProxy(java.io.InputStream, long)
|
||||||
*/
|
*/
|
||||||
private BlobProxy(InputStream stream, long length) {
|
private BlobProxy(InputStream stream, long length) {
|
||||||
this.stream = stream;
|
this.binaryStream = new StreamBackedBinaryStream( stream, length );
|
||||||
this.length = length;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
private long getLength() {
|
private long getLength() {
|
||||||
return length;
|
return binaryStream.getLength();
|
||||||
}
|
}
|
||||||
|
|
||||||
private InputStream getStream() throws SQLException {
|
private InputStream getStream() throws SQLException {
|
||||||
|
InputStream stream = binaryStream.getInputStream();
|
||||||
try {
|
try {
|
||||||
if ( needsReset ) {
|
if ( needsReset ) {
|
||||||
stream.reset();
|
stream.reset();
|
||||||
|
@ -94,6 +93,7 @@ public class BlobProxy implements InvocationHandler {
|
||||||
* @throws UnsupportedOperationException if any methods other than {@link Blob#length()}
|
* @throws UnsupportedOperationException if any methods other than {@link Blob#length()}
|
||||||
* or {@link Blob#getBinaryStream} are invoked.
|
* or {@link Blob#getBinaryStream} are invoked.
|
||||||
*/
|
*/
|
||||||
|
@Override
|
||||||
@SuppressWarnings({ "UnnecessaryBoxing" })
|
@SuppressWarnings({ "UnnecessaryBoxing" })
|
||||||
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
|
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
|
||||||
final String methodName = method.getName();
|
final String methodName = method.getName();
|
||||||
|
@ -102,6 +102,9 @@ public class BlobProxy implements InvocationHandler {
|
||||||
if ( "length".equals( methodName ) && argCount == 0 ) {
|
if ( "length".equals( methodName ) && argCount == 0 ) {
|
||||||
return Long.valueOf( getLength() );
|
return Long.valueOf( getLength() );
|
||||||
}
|
}
|
||||||
|
if ( "getUnderlyingStream".equals( methodName ) ) {
|
||||||
|
return binaryStream;
|
||||||
|
}
|
||||||
if ( "getBinaryStream".equals( methodName ) ) {
|
if ( "getBinaryStream".equals( methodName ) ) {
|
||||||
if ( argCount == 0 ) {
|
if ( argCount == 0 ) {
|
||||||
return getStream();
|
return getStream();
|
||||||
|
@ -137,7 +140,7 @@ public class BlobProxy implements InvocationHandler {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if ( "free".equals( methodName ) && argCount == 0 ) {
|
if ( "free".equals( methodName ) && argCount == 0 ) {
|
||||||
stream.close();
|
binaryStream.release();
|
||||||
return null;
|
return null;
|
||||||
}
|
}
|
||||||
if ( "toString".equals( methodName ) && argCount == 0 ) {
|
if ( "toString".equals( methodName ) && argCount == 0 ) {
|
||||||
|
@ -197,4 +200,43 @@ public class BlobProxy implements InvocationHandler {
|
||||||
}
|
}
|
||||||
return cl;
|
return cl;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
private static class StreamBackedBinaryStream implements BinaryStream {
|
||||||
|
private final InputStream stream;
|
||||||
|
private final long length;
|
||||||
|
|
||||||
|
private byte[] bytes;
|
||||||
|
|
||||||
|
private StreamBackedBinaryStream(InputStream stream, long length) {
|
||||||
|
this.stream = stream;
|
||||||
|
this.length = length;
|
||||||
|
}
|
||||||
|
|
||||||
|
@Override
|
||||||
|
public InputStream getInputStream() {
|
||||||
|
return stream;
|
||||||
|
}
|
||||||
|
|
||||||
|
@Override
|
||||||
|
public byte[] getBytes() {
|
||||||
|
if ( bytes == null ) {
|
||||||
|
bytes = DataHelper.extractBytes( stream );
|
||||||
|
}
|
||||||
|
return bytes;
|
||||||
|
}
|
||||||
|
|
||||||
|
@Override
|
||||||
|
public long getLength() {
|
||||||
|
return (int) length;
|
||||||
|
}
|
||||||
|
|
||||||
|
@Override
|
||||||
|
public void release() {
|
||||||
|
try {
|
||||||
|
stream.close();
|
||||||
|
}
|
||||||
|
catch (IOException ignore) {
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
|
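
The getUnderlyingStream branch added above lets Hibernate-internal callers reach the backing BinaryStream without going through the JDBC Blob view. A minimal sketch of the resulting behavior (illustrative only, not part of the commit; the byte values are made up):

    import java.sql.Blob;

    import org.hibernate.engine.jdbc.BinaryStream;
    import org.hibernate.engine.jdbc.BlobImplementer;
    import org.hibernate.engine.jdbc.BlobProxy;

    public class BlobProxySketch {
        public static void main(String[] args) throws Exception {
            Blob blob = BlobProxy.generateProxy( new byte[] { 1, 2, 3 } );

            // length() and getBinaryStream() are routed through the InvocationHandler...
            System.out.println( blob.length() ); // 3

            // ...while internal code can bypass the JDBC surface and pull the backing
            // BinaryStream directly, since the proxy is also typed as BlobImplementer:
            BinaryStream stream = ( (BlobImplementer) blob ).getUnderlyingStream();
            System.out.println( stream.getLength() ); // 3
        }
    }
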
@@ -21,7 +21,9 @@
  * 51 Franklin Street, Fifth Floor
  * Boston, MA 02110-1301 USA
  */
-package org.hibernate.type.descriptor;
+package org.hibernate.engine.jdbc;
 
+import java.io.InputStream;
 import java.io.Reader;
 
 /**
@@ -32,17 +34,28 @@ import java.io.Reader;
  */
 public interface CharacterStream {
     /**
-     * Retrieve the reader.
+     * Provides access to the underlying data as a Reader.
      *
      * @return The reader.
      */
-    public Reader getReader();
+    public Reader asReader();
 
     /**
-     * Retrieve the number of characters. JDBC 3 and earlier defined the length in terms of int type rather than
-     * long type :(
+     * Provides access to the underlying data as a String.
+     *
+     * @return The underlying String data
+     */
+    public String asString();
+
+    /**
+     * Retrieve the number of characters.
      *
      * @return The number of characters.
      */
-    public int getLength();
+    public long getLength();
+
+    /**
+     * Release any underlying resources.
+     */
+    public void release();
 }
@@ -23,11 +23,16 @@
  */
 package org.hibernate.engine.jdbc;
 
 
 /**
  * Marker interface for non-contextually created {@link java.sql.Clob} instances..
  *
  * @author Steve Ebersole
  */
 public interface ClobImplementer {
+    /**
+     * Gets access to the data underlying this CLOB.
+     *
+     * @return Access to the underlying data.
+     */
+    public CharacterStream getUnderlyingStream();
 }
@@ -33,11 +33,12 @@ import java.lang.reflect.Proxy;
 import java.sql.Clob;
 import java.sql.SQLException;
 
+import org.hibernate.engine.jdbc.internal.CharacterStreamImpl;
 import org.hibernate.type.descriptor.java.DataHelper;
 
 /**
  * Manages aspects of proxying {@link Clob Clobs} for non-contextual creation, including proxy creation and
- * handling proxy invocations.
+ * handling proxy invocations.  We use proxies here solely to avoid JDBC version incompatibilities.
  *
  * @author Gavin King
  * @author Steve Ebersole
@@ -46,12 +47,9 @@ import org.hibernate.type.descriptor.java.DataHelper;
 public class ClobProxy implements InvocationHandler {
     private static final Class[] PROXY_INTERFACES = new Class[] { Clob.class, ClobImplementer.class };
 
-    private String string;
-    private Reader reader;
-    private long length;
+    private final CharacterStream characterStream;
     private boolean needsReset = false;
 
-
     /**
      * Constructor used to build {@link Clob} from string data.
      *
|
@ -59,9 +57,7 @@ public class ClobProxy implements InvocationHandler {
|
||||||
* @see #generateProxy(String)
|
* @see #generateProxy(String)
|
||||||
*/
|
*/
|
||||||
protected ClobProxy(String string) {
|
protected ClobProxy(String string) {
|
||||||
this.string = string;
|
this.characterStream = new CharacterStreamImpl( string );
|
||||||
reader = new StringReader(string);
|
|
||||||
length = string.length();
|
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
@ -72,28 +68,25 @@ public class ClobProxy implements InvocationHandler {
|
||||||
* @see #generateProxy(java.io.Reader, long)
|
* @see #generateProxy(java.io.Reader, long)
|
||||||
*/
|
*/
|
||||||
protected ClobProxy(Reader reader, long length) {
|
protected ClobProxy(Reader reader, long length) {
|
||||||
this.reader = reader;
|
this.characterStream = new CharacterStreamImpl( reader, length );
|
||||||
this.length = length;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
protected long getLength() {
|
protected long getLength() {
|
||||||
return length;
|
return characterStream.getLength();
|
||||||
}
|
}
|
||||||
|
|
||||||
protected InputStream getAsciiStream() throws SQLException {
|
protected InputStream getAsciiStream() throws SQLException {
|
||||||
resetIfNeeded();
|
resetIfNeeded();
|
||||||
return new ReaderInputStream( reader );
|
return new ReaderInputStream( characterStream.asReader() );
|
||||||
}
|
}
|
||||||
|
|
||||||
protected Reader getCharacterStream() throws SQLException {
|
protected Reader getCharacterStream() throws SQLException {
|
||||||
resetIfNeeded();
|
resetIfNeeded();
|
||||||
return reader;
|
return characterStream.asReader();
|
||||||
}
|
}
|
||||||
|
|
||||||
protected String getSubString(long start, int length) {
|
protected String getSubString(long start, int length) {
|
||||||
if ( string == null ) {
|
final String string = characterStream.asString();
|
||||||
throw new UnsupportedOperationException( "Clob was not created from string; cannot substring" );
|
|
||||||
}
|
|
||||||
// semi-naive implementation
|
// semi-naive implementation
|
||||||
int endIndex = Math.min( ((int)start)+length, string.length() );
|
int endIndex = Math.min( ((int)start)+length, string.length() );
|
||||||
return string.substring( (int)start, endIndex );
|
return string.substring( (int)start, endIndex );
|
||||||
|
@ -105,6 +98,7 @@ public class ClobProxy implements InvocationHandler {
|
||||||
* @throws UnsupportedOperationException if any methods other than {@link Clob#length()},
|
* @throws UnsupportedOperationException if any methods other than {@link Clob#length()},
|
||||||
* {@link Clob#getAsciiStream()}, or {@link Clob#getCharacterStream()} are invoked.
|
* {@link Clob#getAsciiStream()}, or {@link Clob#getCharacterStream()} are invoked.
|
||||||
*/
|
*/
|
||||||
|
@Override
|
||||||
@SuppressWarnings({ "UnnecessaryBoxing" })
|
@SuppressWarnings({ "UnnecessaryBoxing" })
|
||||||
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
|
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
|
||||||
final String methodName = method.getName();
|
final String methodName = method.getName();
|
||||||
|
@ -113,6 +107,9 @@ public class ClobProxy implements InvocationHandler {
|
||||||
if ( "length".equals( methodName ) && argCount == 0 ) {
|
if ( "length".equals( methodName ) && argCount == 0 ) {
|
||||||
return Long.valueOf( getLength() );
|
return Long.valueOf( getLength() );
|
||||||
}
|
}
|
||||||
|
if ( "getUnderlyingStream".equals( methodName ) ) {
|
||||||
|
return characterStream;
|
||||||
|
}
|
||||||
if ( "getAsciiStream".equals( methodName ) && argCount == 0 ) {
|
if ( "getAsciiStream".equals( methodName ) && argCount == 0 ) {
|
||||||
return getAsciiStream();
|
return getAsciiStream();
|
||||||
}
|
}
|
||||||
|
@ -152,7 +149,7 @@ public class ClobProxy implements InvocationHandler {
|
||||||
return getSubString( start-1, length );
|
return getSubString( start-1, length );
|
||||||
}
|
}
|
||||||
if ( "free".equals( methodName ) && argCount == 0 ) {
|
if ( "free".equals( methodName ) && argCount == 0 ) {
|
||||||
reader.close();
|
characterStream.release();
|
||||||
return null;
|
return null;
|
||||||
}
|
}
|
||||||
if ( "toString".equals( methodName ) && argCount == 0 ) {
|
if ( "toString".equals( methodName ) && argCount == 0 ) {
|
||||||
|
@@ -171,7 +168,7 @@ public class ClobProxy implements InvocationHandler {
     protected void resetIfNeeded() throws SQLException {
         try {
             if ( needsReset ) {
-                reader.reset();
+                characterStream.asReader().reset();
             }
         }
         catch ( IOException ioe ) {
@@ -59,9 +59,7 @@ public class ContextualLobCreator extends AbstractLobCreator implements LobCreat
         return lobCreationContext.execute( CREATE_BLOB_CALLBACK );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Blob createBlob(byte[] bytes) {
         try {
             Blob blob = createBlob();
@@ -73,25 +71,11 @@ public class ContextualLobCreator extends AbstractLobCreator implements LobCreat
         }
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Blob createBlob(InputStream inputStream, long length) {
-        try {
-            Blob blob = createBlob();
-            OutputStream byteStream = blob.setBinaryStream( 1 );
-            StreamUtils.copy( inputStream, byteStream );
-            byteStream.flush();
-            byteStream.close();
-            // todo : validate length written versus length given?
-            return blob;
-        }
-        catch ( SQLException e ) {
-            throw new JDBCException( "Unable to prepare BLOB binary stream for writing",e );
-        }
-        catch ( IOException e ) {
-            throw new HibernateException( "Unable to write stream contents to BLOB", e );
-        }
+        // IMPL NOTE : it is inefficient to use JDBC LOB locator creation to create a LOB
+        // backed by a given stream.  So just wrap the stream (which is what the NonContextualLobCreator does).
+        return NonContextualLobCreator.INSTANCE.createBlob( inputStream, length );
     }
 
     /**
@@ -103,9 +87,7 @@ public class ContextualLobCreator extends AbstractLobCreator implements LobCreat
         return lobCreationContext.execute( CREATE_CLOB_CALLBACK );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Clob createClob(String string) {
         try {
             Clob clob = createClob();
@@ -117,24 +99,11 @@ public class ContextualLobCreator extends AbstractLobCreator implements LobCreat
         }
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Clob createClob(Reader reader, long length) {
-        try {
-            Clob clob = createClob();
-            Writer writer = clob.setCharacterStream( 1 );
-            StreamUtils.copy( reader, writer );
-            writer.flush();
-            writer.close();
-            return clob;
-        }
-        catch ( SQLException e ) {
-            throw new JDBCException( "Unable to prepare CLOB stream for writing", e );
-        }
-        catch ( IOException e ) {
-            throw new HibernateException( "Unable to write CLOB stream content", e );
-        }
+        // IMPL NOTE : it is inefficient to use JDBC LOB locator creation to create a LOB
+        // backed by a given stream.  So just wrap the stream (which is what the NonContextualLobCreator does).
+        return NonContextualLobCreator.INSTANCE.createClob( reader, length );
     }
 
     /**
|
@ -146,9 +115,7 @@ public class ContextualLobCreator extends AbstractLobCreator implements LobCreat
|
||||||
return lobCreationContext.execute( CREATE_NCLOB_CALLBACK );
|
return lobCreationContext.execute( CREATE_NCLOB_CALLBACK );
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
@Override
|
||||||
* {@inheritDoc}
|
|
||||||
*/
|
|
||||||
public NClob createNClob(String string) {
|
public NClob createNClob(String string) {
|
||||||
try {
|
try {
|
||||||
NClob nclob = createNClob();
|
NClob nclob = createNClob();
|
||||||
|
@ -160,24 +127,11 @@ public class ContextualLobCreator extends AbstractLobCreator implements LobCreat
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
@Override
|
||||||
* {@inheritDoc}
|
|
||||||
*/
|
|
||||||
public NClob createNClob(Reader reader, long length) {
|
public NClob createNClob(Reader reader, long length) {
|
||||||
try {
|
// IMPL NOTE : it is inefficient to use JDBC LOB locator creation to create a LOB
|
||||||
NClob nclob = createNClob();
|
// backed by a given stream. So just wrap the stream (which is what the NonContextualLobCreator does).
|
||||||
Writer writer = nclob.setCharacterStream( 1 );
|
return NonContextualLobCreator.INSTANCE.createNClob( reader, length );
|
||||||
StreamUtils.copy( reader, writer );
|
|
||||||
writer.flush();
|
|
||||||
writer.close();
|
|
||||||
return nclob;
|
|
||||||
}
|
|
||||||
catch ( SQLException e ) {
|
|
||||||
throw new JDBCException( "Unable to prepare NCLOB stream for writing", e );
|
|
||||||
}
|
|
||||||
catch ( IOException e ) {
|
|
||||||
throw new HibernateException( "Unable to write NCLOB stream content", e );
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
public static final LobCreationContext.Callback<Blob> CREATE_BLOB_CALLBACK = new LobCreationContext.Callback<Blob>() {
|
public static final LobCreationContext.Callback<Blob> CREATE_BLOB_CALLBACK = new LobCreationContext.Callback<Blob>() {
|
||||||
|
|
|
@@ -31,8 +31,6 @@ import java.sql.NClob;
 /**
  * Contract for creating various LOB references.
  *
- * @todo LobCreator really needs to be an api since we expose it to users.
- *
  * @author Steve Ebersole
  * @author Gail Badner
 */
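
A sketch of the two LobCreator strategies this series of hunks touches (illustrative only; assumes an open Session and the Hibernate.getLobCreator helper available in this era of the codebase):

    import java.io.ByteArrayInputStream;
    import java.sql.Blob;

    import org.hibernate.Hibernate;
    import org.hibernate.Session;
    import org.hibernate.engine.jdbc.LobCreator;
    import org.hibernate.engine.jdbc.NonContextualLobCreator;

    public class LobCreatorSketch {
        public static Blob toBlob(Session session, byte[] data) {
            // Session-aware creator: contextual (JDBC4 Connection.createBlob()) when the
            // driver supports it, proxy-backed otherwise.
            LobCreator creator = Hibernate.getLobCreator( session );
            return creator.createBlob( data );
        }

        public static Blob toBlobDetached(byte[] data) {
            // Without a session, the non-contextual singleton always builds a proxy.
            return NonContextualLobCreator.INSTANCE.createBlob( new ByteArrayInputStream( data ), data.length );
        }
    }
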
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.io.Reader;
 import java.lang.reflect.Proxy;
 import java.sql.Clob;
@@ -29,10 +30,10 @@ import java.sql.NClob;
 
 /**
  * Manages aspects of proxying java.sql.NClobs for non-contextual creation, including proxy creation and
- * handling proxy invocations.
+ * handling proxy invocations.  We use proxies here solely to avoid JDBC version incompatibilities.
  * <p/>
- * Generated proxies are typed as {@link java.sql.Clob} (java.sql.NClob extends {@link java.sql.Clob}) and in JDK 1.6 environments, they
- * are also typed to java.sql.NClob
+ * Generated proxies are typed as {@link java.sql.Clob} (java.sql.NClob extends {@link java.sql.Clob})
+ * and in JDK 1.6+ environments, they are also typed to java.sql.NClob
  *
  * @author Steve Ebersole
  */
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.io.InputStream;
 import java.io.Reader;
 import java.sql.Blob;
@@ -41,44 +42,32 @@ public class NonContextualLobCreator extends AbstractLobCreator implements LobCr
     private NonContextualLobCreator() {
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Blob createBlob(byte[] bytes) {
         return BlobProxy.generateProxy( bytes );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Blob createBlob(InputStream stream, long length) {
         return BlobProxy.generateProxy( stream, length );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Clob createClob(String string) {
         return ClobProxy.generateProxy( string );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Clob createClob(Reader reader, long length) {
         return ClobProxy.generateProxy( reader, length );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public NClob createNClob(String string) {
         return NClobProxy.generateProxy( string );
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public NClob createNClob(Reader reader, long length) {
         return NClobProxy.generateProxy( reader, length );
     }
@@ -1,10 +1,10 @@
 /*
  * Hibernate, Relational Persistence for Idiomatic Java
  *
- * Copyright (c) 2008, Red Hat Middleware LLC or third-party contributors as
+ * Copyright (c) 2008, Red Hat Inc. or third-party contributors as
  * indicated by the @author tags or express copyright attribution
  * statements applied by the authors.  All third-party contributions are
- * distributed under license by Red Hat Middleware LLC.
+ * distributed under license by Red Hat Inc.
  *
  * This copyrighted material is made available to anyone wishing to use, modify,
  * copy, or redistribute it subject to the terms and conditions of the GNU
@@ -20,7 +20,6 @@
  * Free Software Foundation, Inc.
  * 51 Franklin Street, Fifth Floor
  * Boston, MA 02110-1301 USA
- *
  */
 package org.hibernate.engine.jdbc;
 import java.io.IOException;
@@ -42,5 +41,4 @@ public class ReaderInputStream extends InputStream {
     public int read() throws IOException {
         return reader.read();
     }
-
 }
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.io.Serializable;
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.InvocationTargetException;
@@ -62,9 +63,7 @@ public class SerializableBlobProxy implements InvocationHandler, Serializable {
         }
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
         if ( "getWrappedBlob".equals( method.getName() ) ) {
             return getWrappedBlob();
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.io.Serializable;
 import java.lang.reflect.InvocationHandler;
 import java.lang.reflect.InvocationTargetException;
@@ -62,9 +63,7 @@ public class SerializableClobProxy implements InvocationHandler, Serializable {
         }
     }
 
-    /**
-     * {@inheritDoc}
-     */
+    @Override
     public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
         if ( "getWrappedClob".equals( method.getName() ) ) {
             return getWrappedClob();
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.lang.reflect.Proxy;
 import java.sql.Clob;
 
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.sql.Blob;
 
 /**
@@ -22,6 +22,7 @@
  * Boston, MA 02110-1301 USA
  */
 package org.hibernate.engine.jdbc;
+
 import java.sql.Clob;
 
 /**
@@ -29,6 +29,7 @@ import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Map;
 import java.util.Properties;
+import java.util.concurrent.atomic.AtomicInteger;
 
 import org.jboss.logging.Logger;
 
@@ -68,7 +69,7 @@ public class DriverManagerConnectionProviderImpl
     private boolean autocommit;
 
     private final ArrayList<Connection> pool = new ArrayList<Connection>();
-    private int checkedOut = 0;
+    private final AtomicInteger checkedOut = new AtomicInteger();
 
     private boolean stopped;
 
@ -133,7 +134,8 @@ public class DriverManagerConnectionProviderImpl
|
||||||
LOG.autoCommitMode( autocommit );
|
LOG.autoCommitMode( autocommit );
|
||||||
|
|
||||||
isolation = ConfigurationHelper.getInteger( AvailableSettings.ISOLATION, configurationValues );
|
isolation = ConfigurationHelper.getInteger( AvailableSettings.ISOLATION, configurationValues );
|
||||||
if (isolation != null) LOG.jdbcIsolationLevel(Environment.isolationLevelToString(isolation.intValue()));
|
if ( isolation != null )
|
||||||
|
LOG.jdbcIsolationLevel( Environment.isolationLevelToString( isolation.intValue() ) );
|
||||||
|
|
||||||
url = (String) configurationValues.get( AvailableSettings.URL );
|
url = (String) configurationValues.get( AvailableSettings.URL );
|
||||||
if ( url == null ) {
|
if ( url == null ) {
|
||||||
|
@ -168,13 +170,14 @@ public class DriverManagerConnectionProviderImpl
|
||||||
}
|
}
|
||||||
|
|
||||||
public Connection getConnection() throws SQLException {
|
public Connection getConnection() throws SQLException {
|
||||||
LOG.tracev( "Total checked-out connections: {0}", checkedOut );
|
final boolean traceEnabled = LOG.isTraceEnabled();
|
||||||
|
if ( traceEnabled ) LOG.tracev( "Total checked-out connections: {0}", checkedOut.intValue() );
|
||||||
|
|
||||||
// essentially, if we have available connections in the pool, use one...
|
// essentially, if we have available connections in the pool, use one...
|
||||||
synchronized (pool) {
|
synchronized (pool) {
|
||||||
if ( !pool.isEmpty() ) {
|
if ( !pool.isEmpty() ) {
|
||||||
int last = pool.size() - 1;
|
int last = pool.size() - 1;
|
||||||
LOG.tracev( "Using pooled JDBC connection, pool size: {0}", last );
|
if ( traceEnabled ) LOG.tracev( "Using pooled JDBC connection, pool size: {0}", last );
|
||||||
Connection pooled = pool.remove( last );
|
Connection pooled = pool.remove( last );
|
||||||
if ( isolation != null ) {
|
if ( isolation != null ) {
|
||||||
pooled.setTransactionIsolation( isolation.intValue() );
|
pooled.setTransactionIsolation( isolation.intValue() );
|
||||||
|
@ -182,14 +185,16 @@ public class DriverManagerConnectionProviderImpl
|
||||||
if ( pooled.getAutoCommit() != autocommit ) {
|
if ( pooled.getAutoCommit() != autocommit ) {
|
||||||
pooled.setAutoCommit( autocommit );
|
pooled.setAutoCommit( autocommit );
|
||||||
}
|
}
|
||||||
checkedOut++;
|
checkedOut.incrementAndGet();
|
||||||
return pooled;
|
return pooled;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// otherwise we open a new connection...
|
// otherwise we open a new connection...
|
||||||
|
|
||||||
LOG.debug( "Opening new JDBC connection" );
|
final boolean debugEnabled = LOG.isDebugEnabled();
|
||||||
|
if ( debugEnabled ) LOG.debug( "Opening new JDBC connection" );
|
||||||
|
|
||||||
Connection conn = DriverManager.getConnection( url, connectionProps );
|
Connection conn = DriverManager.getConnection( url, connectionProps );
|
||||||
if ( isolation != null ) {
|
if ( isolation != null ) {
|
||||||
conn.setTransactionIsolation( isolation.intValue() );
|
conn.setTransactionIsolation( isolation.intValue() );
|
||||||
|
@ -198,22 +203,23 @@ public class DriverManagerConnectionProviderImpl
|
||||||
conn.setAutoCommit(autocommit);
|
conn.setAutoCommit(autocommit);
|
||||||
}
|
}
|
||||||
|
|
||||||
if ( LOG.isDebugEnabled() ) {
|
if ( debugEnabled ) {
|
||||||
LOG.debugf( "Created connection to: %s, Isolation Level: %s", url, conn.getTransactionIsolation() );
|
LOG.debugf( "Created connection to: %s, Isolation Level: %s", url, conn.getTransactionIsolation() );
|
||||||
}
|
}
|
||||||
|
|
||||||
checkedOut++;
|
checkedOut.incrementAndGet();
|
||||||
return conn;
|
return conn;
|
||||||
}
|
}
|
||||||
|
|
||||||
public void closeConnection(Connection conn) throws SQLException {
|
public void closeConnection(Connection conn) throws SQLException {
|
||||||
checkedOut--;
|
checkedOut.decrementAndGet();
|
||||||
|
|
||||||
|
final boolean traceEnabled = LOG.isTraceEnabled();
|
||||||
// add to the pool if the max size is not yet reached.
|
// add to the pool if the max size is not yet reached.
|
||||||
synchronized ( pool ) {
|
synchronized ( pool ) {
|
||||||
int currentSize = pool.size();
|
int currentSize = pool.size();
|
||||||
if ( currentSize < poolSize ) {
|
if ( currentSize < poolSize ) {
|
||||||
LOG.tracev( "Returning connection to pool, pool size: {0}", ( currentSize + 1 ) );
|
if ( traceEnabled ) LOG.tracev( "Returning connection to pool, pool size: {0}", ( currentSize + 1 ) );
|
||||||
pool.add( conn );
|
pool.add( conn );
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
|
|
|
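
Why the counter above moved to AtomicInteger: checkedOut is mutated both inside and outside the synchronized(pool) block, so a plain int++ (a read-modify-write) can lose updates under concurrent checkout/close. An illustrative comparison, not code from the commit:

    import java.util.concurrent.atomic.AtomicInteger;

    public class CounterRaceSketch {
        static int plain = 0;                                     // racy
        static final AtomicInteger atomic = new AtomicInteger();  // safe

        public static void main(String[] args) throws InterruptedException {
            Runnable task = new Runnable() {
                public void run() {
                    for ( int i = 0; i < 100000; i++ ) {
                        plain++;                    // may lose increments
                        atomic.incrementAndGet();   // never loses increments
                    }
                }
            };
            Thread t1 = new Thread( task ), t2 = new Thread( task );
            t1.start(); t2.start();
            t1.join(); t2.join();
            // plain is often < 200000; atomic is always exactly 200000
            System.out.println( "plain=" + plain + ", atomic=" + atomic.get() );
        }
    }
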
@@ -88,7 +88,7 @@ public class StandardDialectResolver extends AbstractDialectResolver {
 
         if ( "PostgreSQL".equals( databaseName ) ) {
             final int databaseMinorVersion = metaData.getDatabaseMinorVersion();
-            if (databaseMajorVersion >= 8 && databaseMinorVersion >= 2) {
+            if ( databaseMajorVersion > 8 || ( databaseMajorVersion == 8 && databaseMinorVersion >= 2 ) ) {
                 return new PostgreSQL82Dialect();
             }
             return new PostgreSQL81Dialect();
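
The old test conjoined major and minor versions independently, so PostgreSQL 9.0 (major 9, minor 0) failed the minor >= 2 check and fell back to the 8.1 dialect. A quick comparison of both predicates (illustrative, not part of the commit):

    public class PgVersionCheckSketch {
        static boolean oldCheck(int major, int minor) { return major >= 8 && minor >= 2; }
        static boolean newCheck(int major, int minor) { return major > 8 || ( major == 8 && minor >= 2 ); }

        public static void main(String[] args) {
            int[][] versions = { { 8, 1 }, { 8, 2 }, { 8, 4 }, { 9, 0 }, { 9, 1 } };
            for ( int[] v : versions ) {
                System.out.printf( "%d.%d old=%b new=%b%n", v[0], v[1], oldCheck( v[0], v[1] ), newCheck( v[0], v[1] ) );
            }
            // 9.0: old=false (wrongly resolved to the 8.1 dialect), new=true (8.2 dialect)
        }
    }
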
@@ -133,6 +133,7 @@ public class StandardDialectResolver extends AbstractDialectResolver {
             case 9:
                 return new SQLServer2005Dialect();
             case 10:
+            case 11:
                 return new SQLServer2008Dialect();
             default:
                 LOG.unknownSqlServerVersion(databaseMajorVersion);
@@ -21,12 +21,13 @@
 * 51 Franklin Street, Fifth Floor
 * Boston, MA 02110-1301 USA
 */
-package org.hibernate.type.descriptor.java;
+package org.hibernate.engine.jdbc.internal;
 
 import java.io.ByteArrayInputStream;
+import java.io.IOException;
 import java.io.InputStream;
 
-import org.hibernate.type.descriptor.BinaryStream;
+import org.hibernate.engine.jdbc.BinaryStream;
 
 /**
 * Implementation of {@link BinaryStream}
@@ -50,7 +51,16 @@ public class BinaryStreamImpl extends ByteArrayInputStream implements BinaryStre
         return buf;
     }
 
-    public int getLength() {
+    public long getLength() {
         return length;
     }
+
+    @Override
+    public void release() {
+        try {
+            super.close();
+        }
+        catch (IOException ignore) {
+        }
+    }
 }
@@ -0,0 +1,87 @@
+/*
+ * Hibernate, Relational Persistence for Idiomatic Java
+ *
+ * Copyright (c) 2010, Red Hat Inc. or third-party contributors as
+ * indicated by the @author tags or express copyright attribution
+ * statements applied by the authors.  All third-party contributions are
+ * distributed under license by Red Hat Inc.
+ *
+ * This copyrighted material is made available to anyone wishing to use, modify,
+ * copy, or redistribute it subject to the terms and conditions of the GNU
+ * Lesser General Public License, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this distribution; if not, write to:
+ * Free Software Foundation, Inc.
+ * 51 Franklin Street, Fifth Floor
+ * Boston, MA 02110-1301 USA
+ */
+package org.hibernate.engine.jdbc.internal;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.Reader;
+import java.io.StringReader;
+
+import org.hibernate.engine.jdbc.CharacterStream;
+import org.hibernate.type.descriptor.java.DataHelper;
+
+/**
+ * Implementation of {@link CharacterStream}
+ *
+ * @author Steve Ebersole
+ */
+public class CharacterStreamImpl implements CharacterStream {
+    private final long length;
+
+    private Reader reader;
+    private String string;
+
+    public CharacterStreamImpl(String chars) {
+        this.string = chars;
+        this.length = chars.length();
+    }
+
+    public CharacterStreamImpl(Reader reader, long length) {
+        this.reader = reader;
+        this.length = length;
+    }
+
+    @Override
+    public Reader asReader() {
+        if ( reader == null ) {
+            reader = new StringReader( string );
+        }
+        return reader;
+    }
+
+    @Override
+    public String asString() {
+        if ( string == null ) {
+            string = DataHelper.extractString( reader );
+        }
+        return string;
+    }
+
+    @Override
+    public long getLength() {
+        return length;
+    }
+
+    @Override
+    public void release() {
+        if ( reader == null ) {
+            return;
+        }
+        try {
+            reader.close();
+        }
+        catch (IOException ignore) {
+        }
+    }
+}
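
CharacterStreamImpl keeps whichever representation it was built from and derives the other lazily (a StringReader one way, DataHelper.extractString the other). A small usage sketch of the new file above (illustrative only):

    import java.io.Reader;

    import org.hibernate.engine.jdbc.CharacterStream;
    import org.hibernate.engine.jdbc.internal.CharacterStreamImpl;

    public class CharacterStreamSketch {
        public static void main(String[] args) throws Exception {
            CharacterStream stream = new CharacterStreamImpl( "hello" );
            System.out.println( stream.getLength() );  // 5, known up front from the String
            Reader reader = stream.asReader();         // StringReader created on demand
            System.out.println( (char) ); // 'h'
            stream.release();                          // closes the reader, swallowing IOException
        }
    }
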
@@ -31,8 +31,6 @@ import java.util.Iterator;
 import java.util.List;
 import java.util.Set;
 
-import org.jboss.logging.Logger;
-
 import org.hibernate.CacheMode;
 import org.hibernate.EntityMode;
 import org.hibernate.HibernateException;
@@ -48,6 +46,7 @@ import org.hibernate.engine.spi.Status;
 import org.hibernate.internal.CoreMessageLogger;
 import org.hibernate.persister.collection.CollectionPersister;
 import org.hibernate.pretty.MessageHelper;
+import org.jboss.logging.Logger;
 
 /**
 * Represents state associated with the processing of a given {@link ResultSet}
@@ -253,8 +252,13 @@ public class CollectionLoadContext {
             }
             else {
                 ce.postInitialize( lce.getCollection() );
+//                if (ce.getLoadedPersister().getBatchSize() > 1) { // not the best place for doing this, moved into ce.postInitialize
+//                    getLoadContext().getPersistenceContext().getBatchFetchQueue().removeBatchLoadableCollection(ce);
+//                }
             }
 
+
+
             boolean addToCache = hasNoQueuedAdds && // there were no queued additions
                     persister.hasCache() &&         // and the role has a cache
                     session.getCacheMode().isPutEnabled() &&
@ -266,7 +270,7 @@ public class CollectionLoadContext {
|
||||||
if ( LOG.isDebugEnabled() ) {
|
if ( LOG.isDebugEnabled() ) {
|
||||||
LOG.debugf(
|
LOG.debugf(
|
||||||
"Collection fully initialized: %s",
|
"Collection fully initialized: %s",
|
||||||
MessageHelper.collectionInfoString(persister, lce.getKey(), session.getFactory())
|
MessageHelper.collectionInfoString(persister, lce.getCollection(), lce.getKey(), session)
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
if ( session.getFactory().getStatistics().isStatisticsEnabled() ) {
|
if ( session.getFactory().getStatistics().isStatisticsEnabled() ) {
|
||||||
|
@ -285,7 +289,7 @@ public class CollectionLoadContext {
|
||||||
final SessionFactoryImplementor factory = session.getFactory();
|
final SessionFactoryImplementor factory = session.getFactory();
|
||||||
|
|
||||||
if ( LOG.isDebugEnabled() ) {
|
if ( LOG.isDebugEnabled() ) {
|
||||||
LOG.debugf( "Caching collection: %s", MessageHelper.collectionInfoString( persister, lce.getKey(), factory ) );
|
LOG.debugf( "Caching collection: %s", MessageHelper.collectionInfoString( persister, lce.getCollection(), lce.getKey(), session ) );
|
||||||
}
|
}
|
||||||
|
|
||||||
if ( !session.getEnabledFilters().isEmpty() && persister.isAffectedByEnabledFilters( session ) ) {
|
if ( !session.getEnabledFilters().isEmpty() && persister.isAffectedByEnabledFilters( session ) ) {
|
||||||
|
@ -318,7 +322,7 @@ public class CollectionLoadContext {
|
||||||
if ( collectionOwner == null ) {
|
if ( collectionOwner == null ) {
|
||||||
throw new HibernateException(
|
throw new HibernateException(
|
||||||
"Unable to resolve owner of loading collection [" +
|
"Unable to resolve owner of loading collection [" +
|
||||||
MessageHelper.collectionInfoString( persister, lce.getKey(), factory ) +
|
MessageHelper.collectionInfoString( persister, lce.getCollection(), lce.getKey(), session ) +
|
||||||
"] for second level caching"
|
"] for second level caching"
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
|
@ -201,11 +201,6 @@ public class LoadContexts {
|
||||||
}
|
}
|
||||||
return lce.getCollection();
|
return lce.getCollection();
|
||||||
}
|
}
|
||||||
// TODO : should really move this log statement to CollectionType, where this is used from...
|
|
||||||
if ( LOG.isTraceEnabled() ) {
|
|
||||||
LOG.tracef( "Creating collection wrapper: %s",
|
|
||||||
MessageHelper.collectionInfoString( persister, ownerKey, getSession().getFactory() ) );
|
|
||||||
}
|
|
||||||
return null;
|
return null;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@@ -25,15 +25,17 @@ package org.hibernate.engine.spi;
 
 import java.io.Serializable;
 import java.util.HashMap;
-import java.util.Iterator;
 import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
 import java.util.Map;
+import java.util.Map.Entry;
+
+import org.jboss.logging.Logger;
 
 import org.hibernate.EntityMode;
 import org.hibernate.cache.spi.CacheKey;
 import org.hibernate.collection.spi.PersistentCollection;
-import org.hibernate.internal.util.MarkerObject;
-import org.hibernate.internal.util.collections.IdentityMap;
+import org.hibernate.internal.CoreMessageLogger;
 import org.hibernate.persister.collection.CollectionPersister;
 import org.hibernate.persister.entity.EntityPersister;
 
@ -43,33 +45,35 @@ import org.hibernate.persister.entity.EntityPersister;
|
||||||
* can be re-used as a subquery for loading owned collections.
|
* can be re-used as a subquery for loading owned collections.
|
||||||
*
|
*
|
||||||
* @author Gavin King
|
* @author Gavin King
|
||||||
|
* @author Steve Ebersole
|
||||||
|
* @author Guenther Demetz
|
||||||
*/
|
*/
|
||||||
public class BatchFetchQueue {
|
public class BatchFetchQueue {
|
||||||
|
private static final CoreMessageLogger LOG = Logger.getMessageLogger( CoreMessageLogger.class, BatchFetchQueue.class.getName() );
|
||||||
|
|
||||||
public static final Object MARKER = new MarkerObject( "MARKER" );
|
private final PersistenceContext context;
|
||||||
|
|
||||||
/**
|
|
||||||
* Defines a sequence of {@link EntityKey} elements that are currently
|
|
||||||
* elegible for batch-fetching.
|
|
||||||
* <p/>
|
|
||||||
* Even though this is a map, we only use the keys. A map was chosen in
|
|
||||||
* order to utilize a {@link LinkedHashMap} to maintain sequencing
|
|
||||||
* as well as uniqueness.
|
|
||||||
* <p/>
|
|
||||||
* TODO : this would be better as a SequencedReferenceSet, but no such beast exists!
|
|
||||||
*/
|
|
||||||
private final Map batchLoadableEntityKeys = new LinkedHashMap(8);
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* A map of {@link SubselectFetch subselect-fetch descriptors} keyed by the
|
* A map of {@link SubselectFetch subselect-fetch descriptors} keyed by the
|
||||||
* {@link EntityKey) against which the descriptor is registered.
|
* {@link EntityKey) against which the descriptor is registered.
|
||||||
*/
|
*/
|
||||||
private final Map subselectsByEntityKey = new HashMap(8);
|
private final Map<EntityKey, SubselectFetch> subselectsByEntityKey = new HashMap<EntityKey, SubselectFetch>(8);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* The owning persistence context.
|
* Used to hold information about the entities that are currently eligible for batch-fetching. Ultimately
|
||||||
|
* used by {@link #getEntityBatch} to build entity load batches.
|
||||||
|
* <p/>
|
||||||
|
* A Map structure is used to segment the keys by entity type since loading can only be done for a particular entity
|
||||||
|
* type at a time.
|
||||||
*/
|
*/
|
||||||
private final PersistenceContext context;
|
private final Map <String,LinkedHashSet<EntityKey>> batchLoadableEntityKeys = new HashMap <String,LinkedHashSet<EntityKey>>(8);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Used to hold information about the collections that are currently eligible for batch-fetching. Ultimately
|
||||||
|
* used by {@link #getCollectionBatch} to build collection load batches.
|
||||||
|
*/
|
||||||
|
private final Map<String, LinkedHashMap<CollectionEntry, PersistentCollection>> batchLoadableCollections =
|
||||||
|
new HashMap<String, LinkedHashMap <CollectionEntry, PersistentCollection>>(8);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Constructs a queue for the given context.
|
* Constructs a queue for the given context.
|
||||||
|
@ -85,9 +89,13 @@ public class BatchFetchQueue {
|
||||||
*/
|
*/
|
||||||
public void clear() {
|
public void clear() {
|
||||||
batchLoadableEntityKeys.clear();
|
batchLoadableEntityKeys.clear();
|
||||||
|
batchLoadableCollections.clear();
|
||||||
subselectsByEntityKey.clear();
|
subselectsByEntityKey.clear();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
// sub-select support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Retrieve the fetch descriptor associated with the given entity key.
|
* Retrieve the fetch descriptor associated with the given entity key.
|
||||||
*
|
*
|
||||||
|
@ -96,7 +104,7 @@ public class BatchFetchQueue {
|
||||||
* this entity key.
|
* this entity key.
|
||||||
*/
|
*/
|
||||||
public SubselectFetch getSubselect(EntityKey key) {
|
public SubselectFetch getSubselect(EntityKey key) {
|
||||||
return (SubselectFetch) subselectsByEntityKey.get(key);
|
return subselectsByEntityKey.get( key );
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
@ -128,6 +136,9 @@ public class BatchFetchQueue {
|
||||||
subselectsByEntityKey.clear();
|
subselectsByEntityKey.clear();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
// entity batch support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* If an EntityKey represents a batch loadable entity, add
|
* If an EntityKey represents a batch loadable entity, add
|
||||||
* it to the queue.
|
* it to the queue.
|
||||||
|
@ -140,81 +151,30 @@ public class BatchFetchQueue {
|
||||||
*/
|
*/
|
||||||
public void addBatchLoadableEntityKey(EntityKey key) {
|
public void addBatchLoadableEntityKey(EntityKey key) {
|
||||||
if ( key.isBatchLoadable() ) {
|
if ( key.isBatchLoadable() ) {
|
||||||
batchLoadableEntityKeys.put( key, MARKER );
|
LinkedHashSet<EntityKey> set = batchLoadableEntityKeys.get( key.getEntityName());
|
||||||
|
if (set == null) {
|
||||||
|
set = new LinkedHashSet<EntityKey>(8);
|
||||||
|
batchLoadableEntityKeys.put( key.getEntityName(), set);
|
||||||
|
}
|
||||||
|
set.add(key);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* After evicting or deleting or loading an entity, we don't
|
* After evicting or deleting or loading an entity, we don't
|
||||||
* need to batch fetch it anymore, remove it from the queue
|
* need to batch fetch it anymore, remove it from the queue
|
||||||
* if necessary
|
* if necessary
|
||||||
*/
|
*/
|
||||||
public void removeBatchLoadableEntityKey(EntityKey key) {
|
public void removeBatchLoadableEntityKey(EntityKey key) {
|
||||||
if ( key.isBatchLoadable() ) batchLoadableEntityKeys.remove(key);
|
if ( key.isBatchLoadable() ) {
|
||||||
}
|
LinkedHashSet<EntityKey> set = batchLoadableEntityKeys.get( key.getEntityName());
|
||||||
|
if (set != null) {
|
||||||
/**
|
set.remove(key);
|
||||||
* Get a batch of uninitialized collection keys for a given role
|
|
||||||
*
|
|
||||||
* @param collectionPersister The persister for the collection role.
|
|
||||||
* @param id A key that must be included in the batch fetch
|
|
||||||
* @param batchSize the maximum number of keys to return
|
|
||||||
* @return an array of collection keys, of length batchSize (padded with nulls)
|
|
||||||
*/
|
|
||||||
public Serializable[] getCollectionBatch(
|
|
||||||
final CollectionPersister collectionPersister,
|
|
||||||
final Serializable id,
|
|
||||||
final int batchSize) {
|
|
||||||
Serializable[] keys = new Serializable[batchSize];
|
|
||||||
keys[0] = id;
|
|
||||||
int i = 1;
|
|
||||||
//int count = 0;
|
|
||||||
int end = -1;
|
|
||||||
boolean checkForEnd = false;
|
|
||||||
// this only works because collection entries are kept in a sequenced
|
|
||||||
// map by persistence context (maybe we should do like entities and
|
|
||||||
// keep a separate sequences set...)
|
|
||||||
|
|
||||||
for ( Map.Entry<PersistentCollection,CollectionEntry> me :
|
|
||||||
IdentityMap.concurrentEntries( (Map<PersistentCollection,CollectionEntry>) context.getCollectionEntries() )) {
|
|
||||||
|
|
||||||
CollectionEntry ce = me.getValue();
|
|
||||||
PersistentCollection collection = me.getKey();
|
|
||||||
if ( !collection.wasInitialized() && ce.getLoadedPersister() == collectionPersister ) {
|
|
||||||
|
|
||||||
if ( checkForEnd && i == end ) {
|
|
||||||
return keys; //the first key found after the given key
|
|
||||||
}
|
|
||||||
|
|
||||||
//if ( end == -1 && count > batchSize*10 ) return keys; //try out ten batches, max
|
|
||||||
|
|
||||||
final boolean isEqual = collectionPersister.getKeyType().isEqual(
|
|
||||||
id,
|
|
||||||
ce.getLoadedKey(),
|
|
||||||
collectionPersister.getFactory()
|
|
||||||
);
|
|
||||||
|
|
||||||
if ( isEqual ) {
|
|
||||||
end = i;
|
|
||||||
//checkForEnd = false;
|
|
||||||
}
|
|
||||||
else if ( !isCached( ce.getLoadedKey(), collectionPersister ) ) {
|
|
||||||
keys[i++] = ce.getLoadedKey();
|
|
||||||
//count++;
|
|
||||||
}
|
|
||||||
|
|
||||||
if ( i == batchSize ) {
|
|
||||||
i = 1; //end of array, start filling again from start
|
|
||||||
if ( end != -1 ) {
|
|
||||||
checkForEnd = true;
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
|
||||||
return keys; //we ran out of keys to try
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Get a batch of unloaded identifiers for this class, using a slightly
|
* Get a batch of unloaded identifiers for this class, using a slightly
|
||||||
* complex algorithm that tries to grab keys registered immediately after
|
* complex algorithm that tries to grab keys registered immediately after
|
||||||
|
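
Batch-loadable keys are now segmented by entity name, so building a batch for one entity type no longer scans every batch-loadable key in the persistence context. A toy model of the new structure, mirroring the get/put pattern in the hunk above (names and ids are illustrative, not Hibernate API):

    import java.util.HashMap;
    import java.util.LinkedHashSet;
    import java.util.Map;

    public class SegmentedKeysSketch {
        static void add(Map<String, LinkedHashSet<Long>> m, String entity, Long id) {
            LinkedHashSet<Long> set = m.get( entity );
            if ( set == null ) {
                set = new LinkedHashSet<Long>( 8 );
                m.put( entity, set );
            }
            set.add( id );
        }

        public static void main(String[] args) {
            Map<String, LinkedHashSet<Long>> keysByEntity = new HashMap<String, LinkedHashSet<Long>>();
            add( keysByEntity, "Order", 1L );
            add( keysByEntity, "Order", 2L );
            add( keysByEntity, "Customer", 7L );

            // Batch assembly for "Order" only touches Order ids; insertion order is
            // preserved, which is what LinkedHashSet buys over a plain HashSet.
            System.out.println( keysByEntity.get( "Order" ) );    // [1, 2]
            System.out.println( keysByEntity.get( "Customer" ) ); // [7]
        }
    }
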
@ -236,10 +196,11 @@ public class BatchFetchQueue {
|
||||||
int end = -1;
|
int end = -1;
|
||||||
boolean checkForEnd = false;
|
boolean checkForEnd = false;
|
||||||
|
|
||||||
Iterator iter = batchLoadableEntityKeys.keySet().iterator();
|
// TODO: this needn't exclude subclasses...
|
||||||
while ( iter.hasNext() ) {
|
|
||||||
EntityKey key = (EntityKey) iter.next();
|
LinkedHashSet<EntityKey> set = batchLoadableEntityKeys.get( persister.getEntityName() );
|
||||||
if ( key.getEntityName().equals( persister.getEntityName() ) ) { //TODO: this needn't exclude subclasses...
|
if ( set != null ) {
|
||||||
|
for ( EntityKey key : set ) {
|
||||||
if ( checkForEnd && i == end ) {
|
if ( checkForEnd && i == end ) {
|
||||||
//the first id found after the given id
|
//the first id found after the given id
|
||||||
return ids;
|
return ids;
|
||||||
|
@ -254,7 +215,9 @@ public class BatchFetchQueue {
|
||||||
}
|
}
|
||||||
if ( i == batchSize ) {
|
if ( i == batchSize ) {
|
||||||
i = 1; // end of array, start filling again from start
|
i = 1; // end of array, start filling again from start
|
||||||
if (end!=-1) checkForEnd = true;
|
if ( end != -1 ) {
|
||||||
|
checkForEnd = true;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -273,6 +236,98 @@ public class BatchFetchQueue {
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
// collection batch support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
|
/**
|
||||||
|
* If an CollectionEntry represents a batch loadable collection, add
|
||||||
|
* it to the queue.
|
||||||
|
*/
|
||||||
|
public void addBatchLoadableCollection(PersistentCollection collection, CollectionEntry ce) {
|
||||||
|
final CollectionPersister persister = ce.getLoadedPersister();
|
||||||
|
|
||||||
|
LinkedHashMap<CollectionEntry, PersistentCollection> map = batchLoadableCollections.get( persister.getRole() );
|
||||||
|
if ( map == null ) {
|
||||||
|
map = new LinkedHashMap<CollectionEntry, PersistentCollection>( 16 );
|
||||||
|
batchLoadableCollections.put( persister.getRole(), map );
|
||||||
|
}
|
||||||
|
map.put( ce, collection );
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* After a collection was initialized or evicted, we don't
|
||||||
|
* need to batch fetch it anymore, remove it from the queue
|
||||||
|
* if necessary
|
||||||
|
*/
|
||||||
|
public void removeBatchLoadableCollection(CollectionEntry ce) {
|
||||||
|
LinkedHashMap<CollectionEntry, PersistentCollection> map = batchLoadableCollections.get( ce.getLoadedPersister().getRole() );
|
||||||
|
if ( map != null ) {
|
||||||
|
map.remove( ce );
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get a batch of uninitialized collection keys for a given role
|
||||||
|
*
|
||||||
|
* @param collectionPersister The persister for the collection role.
|
||||||
|
* @param id A key that must be included in the batch fetch
|
||||||
|
* @param batchSize the maximum number of keys to return
|
||||||
|
* @return an array of collection keys, of length batchSize (padded with nulls)
|
||||||
|
*/
|
||||||
|
public Serializable[] getCollectionBatch(
|
||||||
|
final CollectionPersister collectionPersister,
|
||||||
|
final Serializable id,
|
||||||
|
final int batchSize) {
|
||||||
|
|
||||||
|
Serializable[] keys = new Serializable[batchSize];
|
||||||
|
keys[0] = id;
|
||||||
|
|
||||||
|
int i = 1;
|
||||||
|
int end = -1;
|
||||||
|
boolean checkForEnd = false;
|
||||||
|
|
||||||
|
final LinkedHashMap<CollectionEntry, PersistentCollection> map = batchLoadableCollections.get( collectionPersister.getRole() );
|
||||||
|
if ( map != null ) {
|
||||||
|
for ( Entry<CollectionEntry, PersistentCollection> me : map.entrySet() ) {
|
||||||
|
final CollectionEntry ce = me.getKey();
|
||||||
|
final PersistentCollection collection = me.getValue();
|
||||||
|
|
||||||
|
if ( collection.wasInitialized() ) {
|
||||||
|
// should never happen
|
||||||
|
LOG.warn( "Encountered initialized collection in BatchFetchQueue, this should not happen." );
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( checkForEnd && i == end ) {
|
||||||
|
return keys; //the first key found after the given key
|
||||||
|
}
|
||||||
|
|
||||||
|
final boolean isEqual = collectionPersister.getKeyType().isEqual(
|
||||||
|
id,
|
||||||
|
ce.getLoadedKey(),
|
||||||
|
collectionPersister.getFactory()
|
||||||
|
);
|
||||||
|
|
||||||
|
if ( isEqual ) {
|
||||||
|
end = i;
|
||||||
|
//checkForEnd = false;
|
||||||
|
}
|
||||||
|
else if ( !isCached( ce.getLoadedKey(), collectionPersister ) ) {
|
||||||
|
keys[i++] = ce.getLoadedKey();
|
||||||
|
//count++;
|
||||||
|
}
|
||||||
|
|
||||||
|
if ( i == batchSize ) {
|
||||||
|
i = 1; //end of array, start filling again from start
|
||||||
|
if ( end != -1 ) {
|
||||||
|
checkForEnd = true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return keys; //we ran out of keys to try
|
||||||
|
}
|
||||||
|
|
||||||
private boolean isCached(Serializable collectionKey, CollectionPersister persister) {
|
private boolean isCached(Serializable collectionKey, CollectionPersister persister) {
|
||||||
if ( persister.hasCache() ) {
|
if ( persister.hasCache() ) {
|
||||||
CacheKey cacheKey = context.getSession().generateCacheKey(
|
CacheKey cacheKey = context.getSession().generateCacheKey(
|
||||||
|
|
|
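The relocated getCollectionBatch() keeps the original cyclic-fill strategy: slot 0 always holds the requested key, the remaining slots are filled preferentially with keys registered after it, and the fill wraps around, overwriting earlier-registered keys, once the array is full. A minimal standalone sketch of that selection logic, using plain Long keys and invented names in place of CollectionEntry and the persister key type:

	import java.util.Arrays;
	import java.util.List;

	// Standalone model of the cyclic fill in getCollectionBatch(); hypothetical names.
	public class BatchKeyRingDemo {
		static Long[] selectBatch(List<Long> pendingKeys, Long id, int batchSize) {
			Long[] keys = new Long[batchSize];
			keys[0] = id;               // requested key always occupies slot 0
			int i = 1;
			int end = -1;
			boolean checkForEnd = false;
			for ( Long candidate : pendingKeys ) {
				if ( checkForEnd && i == end ) {
					return keys;        // filled a full batch of keys registered after id
				}
				if ( candidate.equals( id ) ) {
					end = i;            // remember where id sits in registration order
				}
				else {
					keys[i++] = candidate;
				}
				if ( i == batchSize ) {
					i = 1;              // wrap, overwriting keys registered before id
					if ( end != -1 ) {
						checkForEnd = true;
					}
				}
			}
			return keys;                // ran out of keys; array may be null-padded
		}

		public static void main(String[] args) {
			List<Long> pending = Arrays.asList( 1L, 2L, 3L, 4L, 5L );
			// prints [3, 4, 5] -- keys registered after 3 are preferred over 1 and 2
			System.out.println( Arrays.toString( selectBatch( pending, 3L, 3 ) ) );
		}
	}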
@@ -34,6 +34,7 @@ import org.jboss.logging.Logger;
 import org.hibernate.AssertionFailure;
 import org.hibernate.HibernateException;
 import org.hibernate.MappingException;
+import org.hibernate.collection.internal.AbstractPersistentCollection;
 import org.hibernate.collection.spi.PersistentCollection;
 import org.hibernate.internal.CoreMessageLogger;
 import org.hibernate.persister.collection.CollectionPersister;
@@ -215,6 +216,9 @@ public final class CollectionEntry implements Serializable {
 				collection.getSnapshot( getLoadedPersister() ) :
 				null;
 		collection.setSnapshot(loadedKey, role, snapshot);
+		if (getLoadedPersister().getBatchSize() > 1) {
+			((AbstractPersistentCollection) collection).getSession().getPersistenceContext().getBatchFetchQueue().removeBatchLoadableCollection(this);
+		}
 	}

 	/**
@@ -260,6 +264,20 @@ public final class CollectionEntry implements Serializable {
 		return snapshot;
 	}

+	/**
+	 * Reset the stored snapshot for both the persistent collection and this collection entry.
+	 * Used during the merge of detached collections.
+	 *
+	 * @param collection the persistentcollection to be updated
+	 * @param storedSnapshot the new stored snapshot
+	 */
+	public void resetStoredSnapshot(PersistentCollection collection, Serializable storedSnapshot) {
+		LOG.debugf("Reset storedSnapshot to %s for %s", storedSnapshot, this);
+
+		snapshot = storedSnapshot;
+		collection.setSnapshot(loadedKey, role, snapshot);
+	}
+
 	private void setLoadedPersister(CollectionPersister persister) {
 		loadedPersister = persister;
 		setRole( persister == null ? null : persister.getRole() );
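The new removeBatchLoadableCollection() call is guarded by getBatchSize() > 1, i.e. it only fires for collections that can appear in the BatchFetchQueue at all. An illustrative mapping (the entity names are made up) showing how a collection opts in to batch fetching via @BatchSize:

	import java.util.Set;
	import javax.persistence.Entity;
	import javax.persistence.Id;
	import javax.persistence.ManyToOne;
	import javax.persistence.OneToMany;

	import org.hibernate.annotations.BatchSize;

	// Illustrative mapping only: with size > 1 the collection is registered in the
	// BatchFetchQueue and must be de-registered once a snapshot marks it initialized.
	@Entity
	public class Owner {
		@Id
		private Long id;

		@OneToMany(mappedBy = "owner")
		@BatchSize(size = 16)
		private Set<Item> items;
	}

	@Entity
	class Item {
		@Id
		private Long id;

		@ManyToOne
		private Owner owner;
	}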
@@ -270,9 +270,15 @@ public final class EntityEntry implements Serializable {
 	}

 	public Object getLoadedValue(String propertyName) {
-		int propertyIndex = ( (UniqueKeyLoadable) persister ).getPropertyIndex(propertyName);
+		if ( loadedState == null ) {
+			return null;
+		}
+		else {
+			int propertyIndex = ( (UniqueKeyLoadable) persister )
+					.getPropertyIndex( propertyName );
 			return loadedState[propertyIndex];
 		}
+	}

 	/**
 	 * Not sure this is the best method name, but the general idea here is to return {@code true} if the entity can
@@ -830,6 +830,15 @@ public interface PersistenceContext {
 	 * of old values as no longer valid.
 	 */
 	public void cleanupFromSynchronizations();
+
+	/**
+	 * Called on {@link org.hibernate.Session#evict} to give a chance to clean up natural-id cross refs.
+	 *
+	 * @param object The entity instance.
+	 * @param persister The entity persister
+	 * @param identifier The entity identifier
+	 */
+	public void handleEviction(Object object, EntityPersister persister, Serializable identifier);
 }

 /**
@@ -80,7 +80,7 @@ public class DefaultEvictEventListener implements EvictEventListener {
 			if ( !li.isUninitialized() ) {
 				final Object entity = persistenceContext.removeEntity( key );
 				if ( entity != null ) {
-					EntityEntry e = event.getSession().getPersistenceContext().removeEntry( entity );
+					EntityEntry e = persistenceContext.removeEntry( entity );
 					doEvict( entity, key, e.getPersister(), event.getSession() );
 				}
 			}
@@ -106,6 +106,10 @@ public class DefaultEvictEventListener implements EvictEventListener {
 			LOG.tracev( "Evicting {0}", MessageHelper.infoString( persister ) );
 		}

+		if ( persister.hasNaturalIdentifier() ) {
+			session.getPersistenceContext().getNaturalIdHelper().handleEviction( object, persister, key.getIdentifier() );
+		}
+
 		// remove all collections for the entity from the session-level cache
 		if ( persister.hasCollections() ) {
 			new EvictVisitor( session ).process( object, persister );
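With the new hook, evicting an entity also drops its natural-id cross reference, so a later natural-id lookup cannot resolve to the evicted instance. A sketch of the affected usage; it assumes the 4.1-era NaturalIdLoadAccess API and an illustrative User entity, neither of which is part of this patch:

	import javax.persistence.Entity;
	import javax.persistence.Id;

	import org.hibernate.Session;
	import org.hibernate.SessionFactory;
	import org.hibernate.annotations.NaturalId;

	// Sketch only: entity and SessionFactory bootstrap are assumed.
	public class NaturalIdEvictionDemo {
		@Entity
		public static class User {
			@Id Long id;
			@NaturalId String username;
		}

		static void demo(SessionFactory sessionFactory) {
			Session session = sessionFactory.openSession();
			try {
				User u = (User) session.byNaturalId( User.class )
						.using( "username", "gavin" )
						.load();
				// evict() now also clears the username -> id cross reference via
				// handleEviction(), instead of leaving a stale natural-id mapping.
				session.evict( u );
			}
			finally {
				session.close();
			}
		}
	}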
@@ -25,8 +25,6 @@ package org.hibernate.event.internal;

 import java.io.Serializable;

-import org.jboss.logging.Logger;
-
 import org.hibernate.HibernateException;
 import org.hibernate.cache.spi.CacheKey;
 import org.hibernate.cache.spi.entry.CollectionCacheEntry;
@@ -40,6 +38,7 @@ import org.hibernate.event.spi.InitializeCollectionEventListener;
 import org.hibernate.internal.CoreMessageLogger;
 import org.hibernate.persister.collection.CollectionPersister;
 import org.hibernate.pretty.MessageHelper;
+import org.jboss.logging.Logger;

 /**
  * @author Gavin King
@@ -62,8 +61,7 @@ public class DefaultInitializeCollectionEventListener implements InitializeCollectionEventListener {
 		if ( !collection.wasInitialized() ) {
 			if ( LOG.isTraceEnabled() ) {
 				LOG.tracev( "Initializing collection {0}",
-						MessageHelper.collectionInfoString( ce.getLoadedPersister(), ce.getLoadedKey(),
-								source.getFactory() ) );
+						MessageHelper.collectionInfoString( ce.getLoadedPersister(), collection, ce.getLoadedKey(), source ) );
 			}

 			LOG.trace( "Checking second-level cache" );
@@ -79,8 +79,12 @@ public class EvictVisitor extends AbstractVisitor {
 		if ( LOG.isDebugEnabled() ) {
 			LOG.debugf( "Evicting collection: %s",
 					MessageHelper.collectionInfoString( ce.getLoadedPersister(),
+							collection,
 							ce.getLoadedKey(),
-							getSession().getFactory() ) );
+							getSession() ) );
+		}
+		if (ce.getLoadedPersister() != null && ce.getLoadedPersister().getBatchSize() > 1) {
+			getSession().getPersistenceContext().getBatchFetchQueue().removeBatchLoadableCollection(ce);
 		}
 		if ( ce.getLoadedPersister() != null && ce.getLoadedKey() != null ) {
 			//TODO: is this 100% correct?
@@ -383,11 +383,15 @@ public final class HqlParser extends HqlBaseParser {

 	@Override
 	public void processMemberOf(Token n, AST p, ASTPair currentAST) {
-		AST inAst = n == null ? astFactory.create( IN, "in" ) : astFactory.create( NOT_IN, "not in" );
-		astFactory.makeASTRoot( currentAST, inAst );
-		AST ast = createSubquery( p );
-		ast = ASTUtil.createParent( astFactory, IN_LIST, "inList", ast );
-		inAst.addChild( ast );
+		// convert MEMBER OF to the equivalent IN ELEMENTS structure...
+		AST inNode = n == null ? astFactory.create( IN, "in" ) : astFactory.create( NOT_IN, "not in" );
+		astFactory.makeASTRoot( currentAST, inNode );
+
+		AST inListNode = astFactory.create( IN_LIST, "inList" );
+		inNode.addChild( inListNode );
+		AST elementsNode = astFactory.create( ELEMENTS, "elements" );
+		inListNode.addChild( elementsNode );
+		elementsNode.addChild( p );
 	}

 	static public void panic() {
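The rewritten processMemberOf() builds the IN (ELEMENTS ...) structure directly instead of going through createSubquery(). A tiny runnable illustration of the equivalence the parser now exploits; Order and items are invented names:

	// Two HQL strings the parser now maps to the same AST shape (illustrative names).
	public class MemberOfDemo {
		public static void main(String[] args) {
			String memberOf   = "from Order o where :item member of o.items";
			String inElements = "from Order o where :item in elements(o.items)";
			// processMemberOf() rewrites the first form into the IN (ELEMENTS ...)
			// structure of the second, so both hit the same SQL generation path.
			System.out.println( memberOf + "  ==>  " + inElements );
		}
	}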
@@ -31,9 +31,12 @@ import java.util.List;
 import antlr.RecognitionException;

 import org.hibernate.HibernateException;
+import org.hibernate.action.internal.BulkOperationCleanupAction;
 import org.hibernate.engine.spi.QueryParameters;
 import org.hibernate.engine.spi.RowSelection;
+import org.hibernate.engine.spi.SessionFactoryImplementor;
 import org.hibernate.engine.spi.SessionImplementor;
+import org.hibernate.event.spi.EventSource;
 import org.hibernate.hql.internal.ast.HqlSqlWalker;
 import org.hibernate.hql.internal.ast.QuerySyntaxException;
 import org.hibernate.hql.internal.ast.SqlGenerator;
@@ -45,17 +48,17 @@ import org.hibernate.persister.entity.Queryable;
  *
  * @author Steve Ebersole
  */
-public class BasicExecutor extends AbstractStatementExecutor {
+public class BasicExecutor implements StatementExecutor {
+	private final SessionFactoryImplementor factory;
 	private final Queryable persister;
 	private final String sql;
 	private final List parameterSpecifications;

 	public BasicExecutor(HqlSqlWalker walker, Queryable persister) {
-		super(walker, null);
+		this.factory = walker.getSessionFactoryHelper().getFactory();
 		this.persister = persister;
 		try {
-			SqlGenerator gen = new SqlGenerator( getFactory() );
+			SqlGenerator gen = new SqlGenerator( factory );
 			gen.statement( walker.getAST() );
 			sql = gen.getSQL();
 			gen.getParseErrorHandler().throwQueryException();
@@ -71,8 +74,13 @@ public class BasicExecutor extends AbstractStatementExecutor {
 	}

 	public int execute(QueryParameters parameters, SessionImplementor session) throws HibernateException {
-		coordinateSharedCacheCleanup( session );
+		BulkOperationCleanupAction action = new BulkOperationCleanupAction( session, persister );
+		if ( session.isEventSource() ) {
+			( (EventSource) session ).getActionQueue().addAction( action );
+		}
+		else {
+			action.getAfterTransactionCompletionProcess().doAfterTransactionCompletion( true, session );
+		}

 		PreparedStatement st = null;
 		RowSelection selection = parameters.getRowSelection();
@@ -101,16 +109,7 @@ public class BasicExecutor extends AbstractStatementExecutor {
 			}
 		}
 		catch( SQLException sqle ) {
-			throw getFactory().getSQLExceptionHelper().convert(
-					sqle,
-					"could not execute update query",
-					sql
-			);
+			throw factory.getSQLExceptionHelper().convert( sqle, "could not execute update query", sql );
 		}
 	}

-	@Override
-	protected Queryable[] getAffectedQueryables() {
-		return new Queryable[] { persister };
-	}
 }
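The executor now replaces the inherited coordinateSharedCacheCleanup() with an explicit BulkOperationCleanupAction that is queued when a transactional event source is available and applied immediately otherwise. A standalone model of that scheduling decision (names are invented, not Hibernate's):

	import java.util.ArrayList;
	import java.util.List;

	// Minimal model of the cleanup scheduling added above: defer the action to
	// transaction completion when possible, otherwise apply it right away.
	public class AfterTxActionDemo {
		interface Action { void afterCompletion(boolean success); }

		static void schedule(Action action, List<Action> actionQueue, boolean inTransaction) {
			if ( inTransaction ) {
				actionQueue.add( action );      // mirrors ActionQueue.addAction( action )
			}
			else {
				action.afterCompletion( true ); // mirrors doAfterTransactionCompletion( true, session )
			}
		}

		public static void main(String[] args) {
			List<Action> queue = new ArrayList<Action>();
			Action invalidateCaches = new Action() {
				public void afterCompletion(boolean success) {
					System.out.println( "evicting affected cache regions" );
				}
			};
			schedule( invalidateCaches, queue, false ); // no tx: runs at once
			schedule( invalidateCaches, queue, true );  // tx: deferred to completion
			System.out.println( "queued: " + queue.size() );
		}
	}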
@@ -1,10 +1,10 @@
 /*
  * Hibernate, Relational Persistence for Idiomatic Java
  *
- * Copyright (c) 2008, Red Hat Middleware LLC or third-party contributors as
+ * Copyright (c) 2008, 2012, Red Hat Inc. or third-party contributors as
  * indicated by the @author tags or express copyright attribution
  * statements applied by the authors. All third-party contributions are
- * distributed under license by Red Hat Middleware LLC.
+ * distributed under license by Red Hat Inc.
  *
  * This copyrighted material is made available to anyone wishing to use, modify,
  * copy, or redistribute it subject to the terms and conditions of the GNU
@@ -20,147 +20,46 @@
  * Free Software Foundation, Inc.
  * 51 Franklin Street, Fifth Floor
  * Boston, MA 02110-1301 USA
- *
 */
 package org.hibernate.hql.internal.ast.exec;

-import java.sql.PreparedStatement;
-import java.sql.SQLException;
-import java.util.Iterator;
-
-import org.jboss.logging.Logger;
-
 import org.hibernate.HibernateException;
+import org.hibernate.action.internal.BulkOperationCleanupAction;
 import org.hibernate.engine.spi.QueryParameters;
 import org.hibernate.engine.spi.SessionImplementor;
+import org.hibernate.event.spi.EventSource;
 import org.hibernate.hql.internal.ast.HqlSqlWalker;
-import org.hibernate.hql.internal.ast.tree.DeleteStatement;
-import org.hibernate.hql.internal.ast.tree.FromElement;
-import org.hibernate.internal.CoreMessageLogger;
-import org.hibernate.internal.util.StringHelper;
-import org.hibernate.param.ParameterSpecification;
-import org.hibernate.persister.entity.Queryable;
-import org.hibernate.sql.Delete;
+import org.hibernate.hql.spi.MultiTableBulkIdStrategy;

 /**
  * Implementation of MultiTableDeleteExecutor.
  *
  * @author Steve Ebersole
  */
-public class MultiTableDeleteExecutor extends AbstractStatementExecutor {
-	private static final CoreMessageLogger LOG = Logger.getMessageLogger(CoreMessageLogger.class,
-			MultiTableDeleteExecutor.class.getName());
-
-	private final Queryable persister;
-	private final String idInsertSelect;
-	private final String[] deletes;
+public class MultiTableDeleteExecutor implements StatementExecutor {
+	private final MultiTableBulkIdStrategy.DeleteHandler deleteHandler;

 	public MultiTableDeleteExecutor(HqlSqlWalker walker) {
-		super(walker, null);
-
-		if ( !walker.getSessionFactoryHelper().getFactory().getDialect().supportsTemporaryTables() ) {
-			throw new HibernateException( "cannot doAfterTransactionCompletion multi-table deletes using dialect not supporting temp tables" );
-		}
-
-		DeleteStatement deleteStatement = ( DeleteStatement ) walker.getAST();
-		FromElement fromElement = deleteStatement.getFromClause().getFromElement();
-		String bulkTargetAlias = fromElement.getTableAlias();
-		this.persister = fromElement.getQueryable();
-
-		this.idInsertSelect = generateIdInsertSelect( persister, bulkTargetAlias, deleteStatement.getWhereClause() );
-		LOG.tracev( "Generated ID-INSERT-SELECT SQL (multi-table delete) : {0}", idInsertSelect );
-
-		String[] tableNames = persister.getConstraintOrderedTableNameClosure();
-		String[][] columnNames = persister.getContraintOrderedTableKeyColumnClosure();
-		String idSubselect = generateIdSubselect( persister );
-
-		deletes = new String[tableNames.length];
-		for ( int i = tableNames.length - 1; i >= 0; i-- ) {
-			// TODO : an optimization here would be to consider cascade deletes and not gen those delete statements;
-			//      the difficulty is the ordering of the tables here vs the cascade attributes on the persisters ->
-			//          the table info gotten here should really be self-contained (i.e., a class representation
-			//          defining all the needed attributes), then we could then get an array of those
-			final Delete delete = new Delete()
-					.setTableName( tableNames[i] )
-					.setWhere( "(" + StringHelper.join( ", ", columnNames[i] ) + ") IN (" + idSubselect + ")" );
-			if ( getFactory().getSettings().isCommentsEnabled() ) {
-				delete.setComment( "bulk delete" );
-			}
-
-			deletes[i] = delete.toStatementString();
-		}
+		MultiTableBulkIdStrategy strategy = walker.getSessionFactoryHelper()
+				.getFactory()
+				.getSettings()
+				.getMultiTableBulkIdStrategy();
+		this.deleteHandler = strategy.buildDeleteHandler( walker.getSessionFactoryHelper().getFactory(), walker );
 	}

 	public String[] getSqlStatements() {
-		return deletes;
+		return deleteHandler.getSqlStatements();
 	}

 	public int execute(QueryParameters parameters, SessionImplementor session) throws HibernateException {
-		coordinateSharedCacheCleanup( session );
-
-		createTemporaryTableIfNecessary( persister, session );
-
-		try {
-			// First, save off the pertinent ids, saving the number of pertinent ids for return
-			PreparedStatement ps = null;
-			int resultCount = 0;
-			try {
-				try {
-					ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( idInsertSelect, false );
-					Iterator paramSpecifications = getIdSelectParameterSpecifications().iterator();
-					int pos = 1;
-					while ( paramSpecifications.hasNext() ) {
-						final ParameterSpecification paramSpec = ( ParameterSpecification ) paramSpecifications.next();
-						pos += paramSpec.bind( ps, parameters, session, pos );
-					}
-					resultCount = ps.executeUpdate();
-				}
-				finally {
-					if ( ps != null ) {
-						ps.close();
-					}
-				}
-			}
-			catch( SQLException e ) {
-				throw getFactory().getSQLExceptionHelper().convert(
-						e,
-						"could not insert/select ids for bulk delete",
-						idInsertSelect
-				);
-			}
-
-			// Start performing the deletes
-			for ( int i = 0; i < deletes.length; i++ ) {
-				try {
-					try {
-						ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( deletes[i], false );
-						ps.executeUpdate();
-					}
-					finally {
-						if ( ps != null ) {
-							ps.close();
-						}
-					}
-				}
-				catch( SQLException e ) {
-					throw getFactory().getSQLExceptionHelper().convert(
-							e,
-							"error performing bulk delete",
-							deletes[i]
-					);
-				}
-			}
-
-			return resultCount;
-		}
-		finally {
-			dropTemporaryTableIfNecessary( persister, session );
-		}
-	}
-
-	@Override
-	protected Queryable[] getAffectedQueryables() {
-		return new Queryable[] { persister };
+		BulkOperationCleanupAction action = new BulkOperationCleanupAction( session, deleteHandler.getTargetedQueryable() );
+		if ( session.isEventSource() ) {
+			( (EventSource) session ).getActionQueue().addAction( action );
+		}
+		else {
+			action.getAfterTransactionCompletionProcess().doAfterTransactionCompletion( true, session );
+		}
+
+		return deleteHandler.execute( session, parameters );
 	}
 }
@@ -24,178 +24,44 @@
  */
 package org.hibernate.hql.internal.ast.exec;

-import java.sql.PreparedStatement;
-import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.List;
-
-import org.jboss.logging.Logger;
-
 import org.hibernate.HibernateException;
+import org.hibernate.action.internal.BulkOperationCleanupAction;
 import org.hibernate.engine.spi.QueryParameters;
 import org.hibernate.engine.spi.SessionImplementor;
+import org.hibernate.event.spi.EventSource;
 import org.hibernate.hql.internal.ast.HqlSqlWalker;
-import org.hibernate.hql.internal.ast.tree.AssignmentSpecification;
-import org.hibernate.hql.internal.ast.tree.FromElement;
-import org.hibernate.hql.internal.ast.tree.UpdateStatement;
-import org.hibernate.internal.CoreMessageLogger;
-import org.hibernate.internal.util.StringHelper;
-import org.hibernate.param.ParameterSpecification;
-import org.hibernate.persister.entity.Queryable;
-import org.hibernate.sql.Update;
+import org.hibernate.hql.spi.MultiTableBulkIdStrategy;

 /**
  * Implementation of MultiTableUpdateExecutor.
  *
  * @author Steve Ebersole
  */
-public class MultiTableUpdateExecutor extends AbstractStatementExecutor {
-	private static final CoreMessageLogger LOG = Logger.getMessageLogger(CoreMessageLogger.class,
-			MultiTableUpdateExecutor.class.getName());
-
-	private final Queryable persister;
-	private final String idInsertSelect;
-	private final String[] updates;
-	private final ParameterSpecification[][] hqlParameters;
+public class MultiTableUpdateExecutor implements StatementExecutor {
+	private final MultiTableBulkIdStrategy.UpdateHandler updateHandler;

 	public MultiTableUpdateExecutor(HqlSqlWalker walker) {
-		super(walker, null);
-
-		if ( !walker.getSessionFactoryHelper().getFactory().getDialect().supportsTemporaryTables() ) {
-			throw new HibernateException( "cannot doAfterTransactionCompletion multi-table updates using dialect not supporting temp tables" );
-		}
-
-		UpdateStatement updateStatement = ( UpdateStatement ) walker.getAST();
-		FromElement fromElement = updateStatement.getFromClause().getFromElement();
-		String bulkTargetAlias = fromElement.getTableAlias();
-		this.persister = fromElement.getQueryable();
-
-		this.idInsertSelect = generateIdInsertSelect( persister, bulkTargetAlias, updateStatement.getWhereClause() );
-		LOG.tracev( "Generated ID-INSERT-SELECT SQL (multi-table update) : {0}", idInsertSelect );
-
-		String[] tableNames = persister.getConstraintOrderedTableNameClosure();
-		String[][] columnNames = persister.getContraintOrderedTableKeyColumnClosure();
-
-		String idSubselect = generateIdSubselect( persister );
-		List assignmentSpecifications = walker.getAssignmentSpecifications();
-
-		updates = new String[tableNames.length];
-		hqlParameters = new ParameterSpecification[tableNames.length][];
-		for ( int tableIndex = 0; tableIndex < tableNames.length; tableIndex++ ) {
-			boolean affected = false;
-			List parameterList = new ArrayList();
-			Update update = new Update( getFactory().getDialect() )
-					.setTableName( tableNames[tableIndex] )
-					.setWhere( "(" + StringHelper.join( ", ", columnNames[tableIndex] ) + ") IN (" + idSubselect + ")" );
-			if ( getFactory().getSettings().isCommentsEnabled() ) {
-				update.setComment( "bulk update" );
-			}
-			final Iterator itr = assignmentSpecifications.iterator();
-			while ( itr.hasNext() ) {
-				final AssignmentSpecification specification = ( AssignmentSpecification ) itr.next();
-				if ( specification.affectsTable( tableNames[tableIndex] ) ) {
-					affected = true;
-					update.appendAssignmentFragment( specification.getSqlAssignmentFragment() );
-					if ( specification.getParameters() != null ) {
-						for ( int paramIndex = 0; paramIndex < specification.getParameters().length; paramIndex++ ) {
-							parameterList.add( specification.getParameters()[paramIndex] );
-						}
-					}
-				}
-			}
-			if ( affected ) {
-				updates[tableIndex] = update.toStatementString();
-				hqlParameters[tableIndex] = ( ParameterSpecification[] ) parameterList.toArray( new ParameterSpecification[0] );
-			}
-		}
-	}
-
-	public Queryable getAffectedQueryable() {
-		return persister;
+		MultiTableBulkIdStrategy strategy = walker.getSessionFactoryHelper()
+				.getFactory()
+				.getSettings()
+				.getMultiTableBulkIdStrategy();
+		this.updateHandler = strategy.buildUpdateHandler( walker.getSessionFactoryHelper().getFactory(), walker );
 	}

 	public String[] getSqlStatements() {
-		return updates;
+		return updateHandler.getSqlStatements();
 	}

 	public int execute(QueryParameters parameters, SessionImplementor session) throws HibernateException {
-		coordinateSharedCacheCleanup( session );
-
-		createTemporaryTableIfNecessary( persister, session );
-
-		try {
-			// First, save off the pertinent ids, as the return value
-			PreparedStatement ps = null;
-			int resultCount = 0;
-			try {
-				try {
-					ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( idInsertSelect, false );
-//					int parameterStart = getWalker().getNumberOfParametersInSetClause();
-//					List allParams = getIdSelectParameterSpecifications();
-//					Iterator whereParams = allParams.subList( parameterStart, allParams.size() ).iterator();
-					Iterator whereParams = getIdSelectParameterSpecifications().iterator();
-					int sum = 1; // jdbc params are 1-based
-					while ( whereParams.hasNext() ) {
-						sum += ( ( ParameterSpecification ) whereParams.next() ).bind( ps, parameters, session, sum );
-					}
-					resultCount = ps.executeUpdate();
-				}
-				finally {
-					if ( ps != null ) {
-						ps.close();
-					}
-				}
-			}
-			catch( SQLException e ) {
-				throw getFactory().getSQLExceptionHelper().convert(
-						e,
-						"could not insert/select ids for bulk update",
-						idInsertSelect
-				);
-			}
-
-			// Start performing the updates
-			for ( int i = 0; i < updates.length; i++ ) {
-				if ( updates[i] == null ) {
-					continue;
-				}
-				try {
-					try {
-						ps = session.getTransactionCoordinator().getJdbcCoordinator().getStatementPreparer().prepareStatement( updates[i], false );
-						if ( hqlParameters[i] != null ) {
-							int position = 1; // jdbc params are 1-based
-							for ( int x = 0; x < hqlParameters[i].length; x++ ) {
-								position += hqlParameters[i][x].bind( ps, parameters, session, position );
-							}
-						}
-						ps.executeUpdate();
-					}
-					finally {
-						if ( ps != null ) {
-							ps.close();
-						}
-					}
-				}
-				catch( SQLException e ) {
-					throw getFactory().getSQLExceptionHelper().convert(
-							e,
-							"error performing bulk update",
-							updates[i]
-					);
-				}
-			}
-
-			return resultCount;
-		}
-		finally {
-			dropTemporaryTableIfNecessary( persister, session );
-		}
-	}
-
-	@Override
-	protected Queryable[] getAffectedQueryables() {
-		return new Queryable[] { persister };
+		BulkOperationCleanupAction action = new BulkOperationCleanupAction( session, updateHandler.getTargetedQueryable() );
+
+		if ( session.isEventSource() ) {
+			( (EventSource) session ).getActionQueue().addAction( action );
+		}
+		else {
+			action.getAfterTransactionCompletionProcess().doAfterTransactionCompletion( true, session );
+		}
+
+		return updateHandler.execute( session, parameters );
 	}
 }
@@ -23,6 +23,8 @@
  */
 package org.hibernate.hql.internal.ast.tree;

+import java.util.Arrays;
+
 import antlr.SemanticException;
 import antlr.collections.AST;

@@ -191,9 +193,7 @@ public class BinaryLogicOperatorNode extends HqlSqlWalkerNode implements BinaryOperatorNode {
 	protected static String[] extractMutationTexts(Node operand, int count) {
 		if ( operand instanceof ParameterNode ) {
 			String[] rtn = new String[count];
-			for ( int i = 0; i < count; i++ ) {
-				rtn[i] = "?";
-			}
+			Arrays.fill( rtn, "?" );
 			return rtn;
 		}
 		else if ( operand.getType() == HqlSqlTokenTypes.VECTOR_EXPR ) {
@@ -121,50 +121,70 @@ public class InLogicOperatorNode extends BinaryLogicOperatorNode implements BinaryOperatorNode {
 				|| ( !ParameterNode.class.isInstance( getLeftHandOperand() ) ) ? null
 				: ( (ParameterNode) getLeftHandOperand() )
 						.getHqlParameterSpecification();

+		final boolean negated = getType() == HqlSqlTokenTypes.NOT_IN;
+
+		if ( rhsNode != null && rhsNode.getNextSibling() == null ) {
 			/**
-			 * only one element in "in" cluster, e.g.
+			 * only one element in the vector grouping.
 			 * <code> where (a,b) in ( (1,2) ) </code> this will be mutated to
 			 * <code>where a=1 and b=2 </code>
 			 */
-		if ( rhsNode != null && rhsNode.getNextSibling() == null ) {
-			String[] rhsElementTexts = extractMutationTexts( rhsNode,
-					rhsColumnSpan );
-			setType( HqlSqlTokenTypes.AND );
-			setText( "AND" );
-			ParameterSpecification rhsEmbeddedCompositeParameterSpecification = rhsNode == null
-					|| ( !ParameterNode.class.isInstance( rhsNode ) ) ? null
-					: ( (ParameterNode) rhsNode )
-							.getHqlParameterSpecification();
-			translate( lhsColumnSpan, HqlSqlTokenTypes.EQ, "=", lhsElementTexts,
+			String[] rhsElementTexts = extractMutationTexts( rhsNode, rhsColumnSpan );
+			setType( negated ? HqlTokenTypes.OR : HqlSqlTokenTypes.AND );
+			setText( negated ? "or" : "and" );
+			ParameterSpecification rhsEmbeddedCompositeParameterSpecification =
+					rhsNode == null || ( !ParameterNode.class.isInstance( rhsNode ) )
+							? null
+							: ( (ParameterNode) rhsNode ).getHqlParameterSpecification();
+			translate(
+					lhsColumnSpan,
+					negated ? HqlSqlTokenTypes.NE : HqlSqlTokenTypes.EQ,
+					negated ? "<>" : "=",
+					lhsElementTexts,
 					rhsElementTexts,
 					lhsEmbeddedCompositeParameterSpecification,
-					rhsEmbeddedCompositeParameterSpecification, this );
-		} else {
+					rhsEmbeddedCompositeParameterSpecification,
+					this
+			);
+		}
+		else {
 			List andElementsNodeList = new ArrayList();
 			while ( rhsNode != null ) {
-				String[] rhsElementTexts = extractMutationTexts( rhsNode,
-						rhsColumnSpan );
-				AST and = getASTFactory().create( HqlSqlTokenTypes.AND, "AND" );
-				ParameterSpecification rhsEmbeddedCompositeParameterSpecification = rhsNode == null
-						|| ( !ParameterNode.class.isInstance( rhsNode ) ) ? null
-						: ( (ParameterNode) rhsNode )
-								.getHqlParameterSpecification();
-				translate( lhsColumnSpan, HqlSqlTokenTypes.EQ, "=",
-						lhsElementTexts, rhsElementTexts,
+				String[] rhsElementTexts = extractMutationTexts( rhsNode, rhsColumnSpan );
+				AST group = getASTFactory().create(
+						negated ? HqlSqlTokenTypes.OR : HqlSqlTokenTypes.AND,
+						negated ? "or" : "and"
+				);
+				ParameterSpecification rhsEmbeddedCompositeParameterSpecification =
+						rhsNode == null || ( !ParameterNode.class.isInstance( rhsNode ) )
+								? null
+								: ( (ParameterNode) rhsNode ).getHqlParameterSpecification();
+				translate(
+						lhsColumnSpan,
+						negated ? HqlSqlTokenTypes.NE : HqlSqlTokenTypes.EQ,
+						negated ? "<>" : "=",
+						lhsElementTexts,
+						rhsElementTexts,
 						lhsEmbeddedCompositeParameterSpecification,
-						rhsEmbeddedCompositeParameterSpecification, and );
-				andElementsNodeList.add( and );
+						rhsEmbeddedCompositeParameterSpecification,
+						group
+				);
+				andElementsNodeList.add( group );
 				rhsNode = (Node) rhsNode.getNextSibling();
 			}
-			setType( HqlSqlTokenTypes.OR );
-			setText( "OR" );
+			setType( negated ? HqlSqlTokenTypes.AND : HqlSqlTokenTypes.OR );
+			setText( negated ? "and" : "or" );
 			AST curNode = this;
 			for ( int i = andElementsNodeList.size() - 1; i > 1; i-- ) {
-				AST or = getASTFactory().create( HqlSqlTokenTypes.OR, "OR" );
-				curNode.setFirstChild( or );
-				curNode = or;
+				AST group = getASTFactory().create(
+						negated ? HqlSqlTokenTypes.AND : HqlSqlTokenTypes.OR,
+						negated ? "and" : "or"
+				);
+				curNode.setFirstChild( group );
+				curNode = group;
 				AST and = (AST) andElementsNodeList.get( i );
-				or.setNextSibling( and );
+				group.setNextSibling( and );
 			}
 			AST node0 = (AST) andElementsNodeList.get( 0 );
 			AST node1 = (AST) andElementsNodeList.get( 1 );
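The negated flag makes the node apply the De Morgan dual during mutation: NOT IN over row values becomes per-row groups of <> comparisons joined by or, with the groups themselves joined by and. A runnable string-level sketch of that expansion (the real code mutates the AST rather than building strings; all names here are invented):

	import java.util.Arrays;
	import java.util.List;

	// Standalone illustration of the row-value (NOT) IN expansion.
	public class RowValueInExpansion {
		static String expand(String[] lhs, List<String[]> rhsRows, boolean negated) {
			String cmp = negated ? "<>" : "=";
			String inner = negated ? " or " : " and ";
			String outer = negated ? " and " : " or ";
			StringBuilder sql = new StringBuilder();
			for ( int r = 0; r < rhsRows.size(); r++ ) {
				if ( r > 0 ) {
					sql.append( outer );
				}
				sql.append( '(' );
				String[] row = rhsRows.get( r );
				for ( int c = 0; c < lhs.length; c++ ) {
					if ( c > 0 ) {
						sql.append( inner );
					}
					sql.append( lhs[c] ).append( cmp ).append( row[c] );
				}
				sql.append( ')' );
			}
			return sql.toString();
		}

		public static void main(String[] args) {
			String[] lhs = { "a", "b" };
			List<String[]> rhs = Arrays.asList( new String[]{ "1", "2" }, new String[]{ "3", "4" } );
			// (a,b) not in ((1,2),(3,4))  ==>  (a<>1 or b<>2) and (a<>3 or b<>4)
			System.out.println( expand( lhs, rhs, true ) );
		}
	}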
@@ -361,7 +361,7 @@ public class SessionFactoryHelper {
 	 * @return The sql function, or null if not found.
 	 */
 	public SQLFunction findSQLFunction(String functionName) {
-		return sfi.getSqlFunctionRegistry().findSQLFunction( functionName.toLowerCase() );
+		return sfi.getSqlFunctionRegistry().findSQLFunction( functionName );
 	}

 	/**
@@ -0,0 +1,184 @@
+/*
+ * Hibernate, Relational Persistence for Idiomatic Java
+ *
+ * Copyright (c) 2012, Red Hat Inc. or third-party contributors as
+ * indicated by the @author tags or express copyright attribution
+ * statements applied by the authors. All third-party contributions are
+ * distributed under license by Red Hat Inc.
+ *
+ * This copyrighted material is made available to anyone wishing to use, modify,
+ * copy, or redistribute it subject to the terms and conditions of the GNU
+ * Lesser General Public License, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this distribution; if not, write to:
+ * Free Software Foundation, Inc.
+ * 51 Franklin Street, Fifth Floor
+ * Boston, MA 02110-1301 USA
+ */
+package org.hibernate.hql.spi;
+
+import java.sql.SQLException;
+import java.util.Collections;
+import java.util.List;
+
+import antlr.RecognitionException;
+import antlr.collections.AST;
+
+import org.hibernate.HibernateException;
+import org.hibernate.JDBCException;
+import org.hibernate.engine.jdbc.spi.JdbcServices;
+import org.hibernate.engine.spi.SessionFactoryImplementor;
+import org.hibernate.engine.spi.SessionImplementor;
+import org.hibernate.hql.internal.ast.HqlSqlWalker;
+import org.hibernate.hql.internal.ast.SqlGenerator;
+import org.hibernate.internal.util.StringHelper;
+import org.hibernate.mapping.Table;
+import org.hibernate.param.ParameterSpecification;
+import org.hibernate.persister.entity.Queryable;
+import org.hibernate.sql.InsertSelect;
+import org.hibernate.sql.Select;
+import org.hibernate.sql.SelectValues;
+
+/**
+ * @author Steve Ebersole
+ */
+public class AbstractTableBasedBulkIdHandler {
+	private final SessionFactoryImplementor sessionFactory;
+	private final HqlSqlWalker walker;
+
+	private final String catalog;
+	private final String schema;
+
+	public AbstractTableBasedBulkIdHandler(
+			SessionFactoryImplementor sessionFactory,
+			HqlSqlWalker walker,
+			String catalog,
+			String schema) {
+		this.sessionFactory = sessionFactory;
+		this.walker = walker;
+		this.catalog = catalog;
+		this.schema = schema;
+	}
+
+	protected SessionFactoryImplementor factory() {
+		return sessionFactory;
+	}
+
+	protected HqlSqlWalker walker() {
+		return walker;
+	}
+
+	protected JDBCException convert(SQLException e, String message, String sql) {
+		throw factory().getSQLExceptionHelper().convert( e, message, sql );
+	}
+
+	protected static class ProcessedWhereClause {
+		public static final ProcessedWhereClause NO_WHERE_CLAUSE = new ProcessedWhereClause();
+
+		private final String userWhereClauseFragment;
+		private final List<ParameterSpecification> idSelectParameterSpecifications;
+
+		private ProcessedWhereClause() {
+			this( "", Collections.<ParameterSpecification>emptyList() );
+		}
+
+		public ProcessedWhereClause(String userWhereClauseFragment, List<ParameterSpecification> idSelectParameterSpecifications) {
+			this.userWhereClauseFragment = userWhereClauseFragment;
+			this.idSelectParameterSpecifications = idSelectParameterSpecifications;
+		}
+
+		public String getUserWhereClauseFragment() {
+			return userWhereClauseFragment;
+		}
+
+		public List<ParameterSpecification> getIdSelectParameterSpecifications() {
+			return idSelectParameterSpecifications;
+		}
+	}
+
+	@SuppressWarnings("unchecked")
+	protected ProcessedWhereClause processWhereClause(AST whereClause) {
+		if ( whereClause.getNumberOfChildren() != 0 ) {
+			// If a where clause was specified in the update/delete query, use it to limit the
+			// returned ids here...
+			try {
+				SqlGenerator sqlGenerator = new SqlGenerator( sessionFactory );
+				sqlGenerator.whereClause( whereClause );
+				String userWhereClause = sqlGenerator.getSQL().substring( 7 ); // strip the " where "
+				List<ParameterSpecification> idSelectParameterSpecifications = sqlGenerator.getCollectedParameters();
+
+				return new ProcessedWhereClause( userWhereClause, idSelectParameterSpecifications );
+			}
+			catch ( RecognitionException e ) {
+				throw new HibernateException( "Unable to generate id select for DML operation", e );
+			}
+		}
+		else {
+			return ProcessedWhereClause.NO_WHERE_CLAUSE;
+		}
+	}
+
+	protected String generateIdInsertSelect(Queryable persister, String tableAlias, ProcessedWhereClause whereClause) {
+		Select select = new Select( sessionFactory.getDialect() );
+		SelectValues selectClause = new SelectValues( sessionFactory.getDialect() )
+				.addColumns( tableAlias, persister.getIdentifierColumnNames(), persister.getIdentifierColumnNames() );
+		addAnyExtraIdSelectValues( selectClause );
+		select.setSelectClause( selectClause.render() );
+
+		String rootTableName = persister.getTableName();
+		String fromJoinFragment = persister.fromJoinFragment( tableAlias, true, false );
+		String whereJoinFragment = persister.whereJoinFragment( tableAlias, true, false );
+
+		select.setFromClause( rootTableName + ' ' + tableAlias + fromJoinFragment );
+
+		if ( whereJoinFragment == null ) {
+			whereJoinFragment = "";
+		}
+		else {
+			whereJoinFragment = whereJoinFragment.trim();
+			if ( whereJoinFragment.startsWith( "and" ) ) {
+				whereJoinFragment = whereJoinFragment.substring( 4 );
+			}
+		}
+
+		if ( whereClause.getUserWhereClauseFragment().length() > 0 ) {
+			if ( whereJoinFragment.length() > 0 ) {
+				whereJoinFragment += " and ";
+			}
+		}
+		select.setWhereClause( whereJoinFragment + whereClause.getUserWhereClauseFragment() );
+
+		InsertSelect insert = new InsertSelect( sessionFactory.getDialect() );
+		if ( sessionFactory.getSettings().isCommentsEnabled() ) {
+			insert.setComment( "insert-select for " + persister.getEntityName() + " ids" );
+		}
+		insert.setTableName( determineIdTableName( persister ) );
+		insert.setSelect( select );
+		return insert.toStatementString();
+	}
+
+	protected void addAnyExtraIdSelectValues(SelectValues selectClause) {
+	}
+
+	protected String determineIdTableName(Queryable persister) {
+		// todo : use the identifier/name qualifier service once we pull that over to master
+		return Table.qualify( catalog, schema, persister.getTemporaryIdTableName() );
+	}
+
+	protected String generateIdSubselect(Queryable persister) {
+		return "select " + StringHelper.join( ", ", persister.getIdentifierColumnNames() ) +
+				" from " + determineIdTableName( persister );
+	}
+
+	protected void prepareForUse(Queryable persister, SessionImplementor session) {
+	}
+
+	protected void releaseFromUse(Queryable persister, SessionImplementor session) {
+	}
+}
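generateIdInsertSelect() captures the ids matched by the user's where clause into a per-entity id table, and generateIdSubselect() is then embedded in one statement per table of the hierarchy by the concrete handlers. A plain-string mock of the resulting statement shapes for a bulk delete; the table and column names here are invented for illustration:

	// Mock of the SQL pair produced for a multi-table bulk delete (invented names).
	public class BulkIdSqlShapeDemo {
		static String idInsertSelect(String idTable, String rootTable, String alias, String userWhere) {
			String sql = "insert into " + idTable + " select " + alias + ".id from " + rootTable + " " + alias;
			return userWhere.isEmpty() ? sql : sql + " where " + userWhere;
		}

		static String idSubselect(String idTable) {
			return "select id from " + idTable;
		}

		public static void main(String[] args) {
			String idTable = "HT_person"; // per-entity temporary id table
			System.out.println( idInsertSelect( idTable, "person", "p", "p.name='x'" ) );
			// one delete per table in the hierarchy, all keyed off the captured ids:
			System.out.println( "delete from person_detail where (id) in (" + idSubselect( idTable ) + ")" );
			System.out.println( "delete from person where (id) in (" + idSubselect( idTable ) + ")" );
		}
	}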