This commit fixes:
- a missing commit that could leak a connection
- restricting the commons-dbcp2-specific defaults to the default data source
- setting poolPreparedStatements to true by default
- configuring embedded Derby to run in-memory to speed up tests
It adds the additional required fixes:
- fixing uncommitted deleted tx records
- fixing JDBC authorization in tests
- using a property-based version for commons-dbcp2
- stopping the thread pool after activation to allow JDBC lease locks to release the lock
- centralizing JDBC network timeout configuration to avoid repeating it
- adding dbcp2 as the default pooled DataSource
Replaces direct JDBC connections with a dbcp2 DataSource. Adds
configuration options to use alternative DataSources and to tune their
parameters. While this adds slight overhead, it vastly improves the
management and pooling of database connections.
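As a rough sketch of what the pooled default could look like (the setters are real commons-dbcp2 API, but the broker's actual wiring and values here are assumptions):

    import org.apache.commons.dbcp2.BasicDataSource;

    // Minimal sketch: a pooled dbcp2 DataSource for the JDBC store.
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName("org.apache.derby.jdbc.EmbeddedDriver");
    // In-memory Derby, as this commit configures for faster tests:
    dataSource.setUrl("jdbc:derby:memory:artemis;create=true");
    dataSource.setPoolPreparedStatements(true); // now the default per this commit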
This is a large commit where I am refactoring the large message body out of CoreMessage,
which is now reused with AMQP.
I also had to fix reference counting to fix how large messages are acked,
and to make sure large messages traverse correctly when in a cluster.
Issue: the BLOB manipulation is done using PostgreSQL internal classes, starting from PGConnection.
This leads to ClassCastExceptions if the connection is wrapped by a pool or if the driver is in a different classloader (WildFly).
Fix: unwrap the connection and, if the PostgreSQL classes are not directly available, use reflection to manipulate the BLOBs (see the sketch below).
Jira: https://issues.apache.org/jira/browse/ARTEMIS-2626
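A minimal sketch of the unwrap-plus-reflection approach (PGConnection.getLargeObjectAPI() is real PostgreSQL driver API; the helper shape is an assumption):

    import java.lang.reflect.Method;
    import java.sql.Connection;

    // Sketch: strip pool/proxy wrappers, then reach the PostgreSQL
    // large-object API reflectively so it also works when the driver
    // lives in a different classloader (e.g. WildFly).
    static Object largeObjectManagerOf(Connection connection) throws Exception {
        Connection unwrapped = connection.unwrap(Connection.class);
        Method getLargeObjectAPI = unwrapped.getClass().getMethod("getLargeObjectAPI");
        return getLargeObjectAPI.invoke(unwrapped);
    }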
AbstractJDBCDriver would hold a reference to AbstractJDBCDriver through an inner class,
which in turn would hold an ActiveMQServerImpl.
That means servers would leak for the entire duration of the testsuite when using JDBC.
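For context, this is the classic pitfall being fixed (names illustrative): a non-static inner class keeps an implicit reference to its enclosing instance, so everything the outer object holds stays reachable.

    // Illustrative only: a non-static inner class pins its enclosing instance.
    class Outer {
        Object server;           // e.g. the ActiveMQServerImpl, kept alive with Outer

        class Inner { }          // holds an implicit Outer.this reference

        static class Nested { }  // no implicit reference: the usual fix
    }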
* Upgrading versions
* Adding the wildfly-common dependency, as jboss-logmanager now depends on it
  for simple common operations such as getting the hostname or process id
* Updating the bootclasspath with wildfly-common
There is no test for this, as we don't have a way to embed/test Postgres.
It will need to be verified externally. However, given the exception, the
fix looks solid.
DB2 metadata checks can erroneously report stale table existence for a
non-existing/just-deleted table, making the subsequent warning logs
about the failed SELECT COUNT useless and scary: it is better to
lower them to INFO level.
Compaction now reuses direct ByteBuffers for both
reading and writing, with explicit and deterministic
release, to avoid a high peak of native memory utilisation
after compaction.
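A hedged sketch of the deterministic release (Netty's PlatformDependent.freeDirectBuffer is a real API available to the broker; the surrounding shape is illustrative):

    import java.nio.ByteBuffer;
    import io.netty.util.internal.PlatformDependent;

    // Sketch: reuse one direct buffer across the compaction run,
    // then free it explicitly instead of waiting for GC.
    int bufferSize = 1 << 20; // illustrative
    ByteBuffer buffer = ByteBuffer.allocateDirect(bufferSize);
    try {
        // ... read and rewrite journal records through 'buffer' ...
    } finally {
        PlatformDependent.freeDirectBuffer(buffer); // deterministic native release
    }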
Activate by enabling TRACE logging for:
org.apache.activemq.artemis.jdbc.store.drivers.AbstractJDBCDriver
This doesn't log *all* JDBC operations, just those that are used by the
JDBC store.
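Assuming the JBoss LogManager logging.properties format used by the broker at the time (verify against your distribution), that looks roughly like:

    # append the category to the existing 'loggers' list, then:
    loggers=org.apache.activemq.artemis.jdbc.store.drivers.AbstractJDBCDriver
    logger.org.apache.activemq.artemis.jdbc.store.drivers.AbstractJDBCDriver.level=TRACE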
The DB2 JDBC driver fails to retrieve metadata information
if table names are lower-case: as with Oracle, it is better to
force any table name to upper-case, to avoid the broker being
unable to restart when lower-case table names are used.
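The normalization itself is a one-liner (sketch; where exactly it is applied in the SQL provider is not shown here):

    import java.util.Locale;

    // Sketch: unquoted identifiers are stored upper-case by DB2 and
    // Oracle, so force the configured name upper-case before any
    // DatabaseMetaData lookup.
    static String normalizeTableName(String tableName) {
        return tableName.toUpperCase(Locale.ENGLISH);
    }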
It avoids using the system clock to perform the lock logic
by using the DBMS time instead.
It contains several improvements to the JDBC error handling
and improved observability thanks to debug logs.
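A minimal sketch of reading the DBMS clock (the SQL is dialect-specific; the DB2 form is shown, with the Oracle variant in a comment):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: base lease-lock expiration on the database's clock so
    // all brokers sharing the store agree on "now".
    static long dbmsTimeMillis(Connection connection) throws Exception {
        try (Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery(
                "SELECT CURRENT_TIMESTAMP FROM SYSIBM.SYSDUMMY1")) { // Oracle: FROM DUAL
            rs.next();
            return rs.getTimestamp(1).getTime();
        }
    }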
DB2 10.5 doesn't allow appending Blob data to an existing Blob,
producing unexpected errors: a custom DB2 sequential file
can perform the append by using a customized UPDATE statement.
max-blob-size.db2 and create-file-table.db2 are changed to match the
2 GB maximum blob size allowed by DB2.
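A sketch of the customized append (table and column names are illustrative; the key point is that the concatenation happens in the UPDATE rather than through the Blob API):

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // Sketch: append a chunk to an existing BLOB with an UPDATE,
    // since DB2 10.5 rejects appending through Blob.setBytes().
    static void appendChunk(Connection connection, long fileId, byte[] chunk) throws Exception {
        try (PreparedStatement update = connection.prepareStatement(
                "UPDATE FILE_TABLE SET DATA = CONCAT(DATA, ?) WHERE ID = ?")) {
            update.setBytes(1, chunk);
            update.setLong(2, fileId);
            update.executeUpdate();
        }
    }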
To avoid out-of-bounds reads, reads past the end of the file
should not hit the DBMS at all and should just return the expected
value.
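In sketch form (names illustrative), the guard is a plain bounds check before issuing any query:

    // Sketch: a read at or past the known end of the file never
    // reaches the DBMS; it reports end-of-file like a local file would.
    static int readableBytes(long position, int requested, long fileSize) {
        if (position >= fileSize) {
            return -1; // nothing to read, no round trip
        }
        return (int) Math.min(requested, fileSize - position);
    }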
The previous commit for this feature wasn't using the row count query
ResultSet.
The mechanics have been changed to allow the row count query
to fail, because DROP and CREATE aren't transactional and immediate
in most DBMSs.
It includes a test that stresses these mechanics when used with DBMSs like
DB2 10.5 and Oracle 12c.
Additional checks and logs have been added to trace each step.
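A sketch of the tolerant row count (table name illustrative; the point is that a failing COUNT is treated as "table not usable yet" rather than as fatal):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: DROP/CREATE may not be visible yet on some DBMSs,
    // so a failed row count must be survivable.
    static boolean tableHasRows(Connection connection) {
        try (Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("SELECT COUNT(*) FROM NODE_TABLE")) {
            return rs.next() && rs.getLong(1) > 0;
        } catch (SQLException notReadyYet) {
            return false; // log and retry later; the table may be mid-DROP/CREATE
        }
    }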
JdbcNodeManager is configured to use the same network timeout
value as the journal, and to validate all the timeout values
required for correct HA behaviour.
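The JDBC-level knob involved is the standard Connection.setNetworkTimeout (JDBC 4.1); a sketch of applying one shared value:

    import java.sql.Connection;
    import java.util.concurrent.Executor;

    // Sketch: apply the journal's network timeout to every connection
    // the node manager opens, so a hung DBMS cannot stall HA forever.
    static void applyNetworkTimeout(Connection connection, Executor executor,
                                    int timeoutMillis) throws Exception {
        connection.setNetworkTimeout(executor, timeoutMillis);
    }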
The JDBC Connection leaks in:
- JDBCFileUtils::getDBFileDriver(DataSource, SQLProvider)
- SharedStoreBackupActivation.FailbackChecker::run on a failed awaitLiveStatus
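The general shape of the fix (illustrative) is to scope the connection with try-with-resources so every exit path, including the failure paths above, closes it:

    import java.sql.Connection;
    import javax.sql.DataSource;

    // Sketch: the connection closes on every path, including the
    // failure paths that previously leaked it.
    static void probe(DataSource dataSource) throws Exception {
        try (Connection connection = dataSource.getConnection()) {
            // ... detect the driver / await live status ...
        }
    }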
In some environments the application is not allowed to create the schema
itself. With this change, AbstractJDBCDriver
now tests whether an existing table is empty and executes the further
statements in the same way as if the table did not exist.
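A sketch of that check (names illustrative; the real logic lives in AbstractJDBCDriver):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: when a DBA pre-creates the schema, an existing but empty
    // table is treated like a missing one, so the initialization
    // statements still run (minus the CREATE TABLE).
    static boolean needsInitialization(Connection connection, String tableName) throws Exception {
        try (Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("SELECT COUNT(*) FROM " + tableName)) {
            return rs.next() && rs.getLong(1) == 0;
        }
    }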