Rel 6.10 mb (#5557)
* Allow cached search with consent active when safe (#5387)
Allow the search cache when using consent if safe
* Change package installation behaviour such that it updates the existing SearchParameter base with remaining resources (#5376)
* Change package installation behavior such that it updates the existing SearchParameter base with remaining resources
* Use resourceType in the package installer output to fix tests. Minor change with resourceType condition. Update changelog description to make it more readable.
* Transaction with conditional update fails if SearchNarrowingInterceptor is registered and partitioning is enabled (#5389)
* Transaction with conditional update fails if SearchNarrowingInterceptor is registered and partitioning is enabled - Implementation
* Reverse chaining searches return an error when invoked with the _lastUpdated parameter. (#5177)
* version bump
* Bump to core release 6.0.22 (#5028)
* Bump to core release 6.0.16
* Bump to core version 6.0.20
* Fix errors thrown as a result of VersionSpecificWorkerContextWrapper
* Bump to core 6.0.22
* Resolve 5126: HFJ_RES_VER_PROV might cause migration error on a DB that automatically indexes the primary key (#5127)
* dropped old index FK_RESVERPROV_RES_PID on RES_PID column before adding IDX_RESVERPROV_RES_PID
* added changelog
* changed to valid version number
* changed to valid version number; needs to be ordered by version number...
* 5123 - Use DEFAULT partition for server-based requests if none specified (#5124)
5123 - Use DEFAULT partition for server-based requests if none specified
* consent remove all suppresses next link in bundle (#5119)
* added FIXME with source of issue
* added FIXME with root cause
* Providing solution to the issue and removing fixmes.
* Providing changelog
* auto-formatting.
* Adding new test.
* Adding a new test for standard paging
* let's try this and see if it works...?
* fix tests
* cleanup to trigger a new run
* fixing tests
---------
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
* 5117 MDM Score for No Match Fields Should Not Be Included in Total Score (#5118)
* fix, test, changelog
---------
Co-authored-by: justindar <justin.dar@smilecdr.com>
* _source search parameter needs to support modifiers (#5095)
_source search parameter needs to support modifiers - added support for the :contains, :missing, and :above modifiers
* Fix HFQL docs (#5151)
* Expunge operation on CodeSystem may throw 500 internal error with precondition failed message. (#5156)
* Initial failing test.
* Solution with changelog.
* fixing format.
* Addressing comment from code review.
* fixing failing test.
---------
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
* documentation update (#5154)
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
* Fix hsql jdbc driver deps (#5168)
Avoid non-included classes in jdbc driver dependencies.
* $delete-expunge over 10k resources will now delete all resources (#5144)
* First commit with very rough fix and unit test.
* Refinements to ResourceIdListStep and Batch2DaoSvcImpl. Make LoadIdsStepTest pass. Enhance Batch2DaoSvcImplTest.
* Spotless
* Fix checkstyle errors.
* Fix test failures.
* Minor refactoring. New unit test. Finalize changelist.
* Spotless fix.
* Delete now useless code from unit test.
* Delete more useless code.
* Test pre-commit hook
* More spotless fixes.
* Address most code review feedback.
* Remove use of pageSize parameter and see if this breaks the pipeline.
* Fix the noUrl case by passing an unlimited Pageable instead. Effectively stop using page size for most databases.
* Deprecate the old method and have it call the new one by default.
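A rough illustration of the approach described in this item (the DAO interface and method names below are hypothetical stand-ins, not the real Batch2DaoSvcImpl API): the fix amounts to asking Spring Data for an unpaged result instead of a fixed page, so a $delete-expunge sees every matching resource id rather than only the first 10,000.

// Illustrative sketch only, with a made-up DAO.
import org.springframework.data.domain.Pageable;

import java.util.List;

class DeleteExpungeIdFetchSketch {

    interface ResourcePidDao {
        // stand-in for the real repository method
        List<Long> findPidsByUrl(String theUrl, Pageable thePageable);
    }

    private final ResourcePidDao myDao;

    DeleteExpungeIdFetchSketch(ResourcePidDao theDao) {
        myDao = theDao;
    }

    List<Long> fetchAllIds(String theUrl) {
        // previously something like PageRequest.of(0, thePageSize) capped the run;
        // Pageable.unpaged() removes the cap entirely
        return myDao.findPidsByUrl(theUrl, Pageable.unpaged());
    }
}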
* updating documentation (#5170)
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
* _source search parameter modifiers for Subscription matching (#5159)
* _source search parameter modifiers for Subscription matching - test, implementation and changelog
* first fix
* tests and preliminary fixes
* wip, commit before switching to release branch.
* adding capability to handle _lastUpdated in reverse search (_has)
* adding changelog
* applying spotless.
* addressing code review comments.
---------
Co-authored-by: tadgh <garygrantgraham@gmail.com>
Co-authored-by: dotasek <david.otasek@smilecdr.com>
Co-authored-by: Steve Corbett <137920358+steve-corbett-smilecdr@users.noreply.github.com>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: justindar <justin.dar@smilecdr.com>
Co-authored-by: volodymyr-korzh <132366313+volodymyr-korzh@users.noreply.github.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
Co-authored-by: michaelabuckley <michaelabuckley@gmail.com>
Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
* Br 20231019 add cr settings for cds hooks (#5394)
* Add settings used in CR CDS Services. Remove config dependency on Spring Boot.
* Add changelog
* Use String.format rather than concat strings
* spotless apply
* Add javadoc
* Upgrade notes for the forced-id change (#5400)
Add upgrade notes for forced-id
* Clean stale search results more aggressively. (#5396)
Use bulk DML statements when cleaning the search cache.
The cleaner job now works as long as possible until a deadline based on the scheduling frequency.
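A minimal sketch of the deadline-based cleanup described above (the DAO and the scheduling constant are assumptions for illustration, not the actual HAPI FHIR classes): delete stale searches with bulk DML in batches and keep going until a deadline derived from the job's scheduling frequency, so one run can never overlap the next.

// Illustrative sketch only, assuming a hypothetical SearchCacheDao.
import java.time.Duration;
import java.time.Instant;
import java.util.List;

class StaleSearchCleanerSketch {

    interface SearchCacheDao {
        List<Long> findStaleSearchIds(Instant theCutoff, int theLimit);

        void deleteSearchesInBulk(List<Long> theSearchIds);
    }

    private static final Duration SCHEDULE_INTERVAL = Duration.ofSeconds(60); // assumed frequency

    private final SearchCacheDao myDao;

    StaleSearchCleanerSketch(SearchCacheDao theDao) {
        myDao = theDao;
    }

    void cleanStaleSearches(Instant theCutoff) {
        Instant deadline = Instant.now().plus(SCHEDULE_INTERVAL);
        while (Instant.now().isBefore(deadline)) {
            List<Long> staleIds = myDao.findStaleSearchIds(theCutoff, 500);
            if (staleIds.isEmpty()) {
                return; // nothing left to clean
            }
            myDao.deleteSearchesInBulk(staleIds); // one bulk DML statement per batch
        }
    }
}

Stopping at a deadline rather than a fixed batch count is what lets the job "work as long as possible" on a backlog without piling up overlapping runs.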
* bump version of clinical reasoning (#5406)
* Transaction fails if SearchNarrowingInterceptor is registered and partitioning is enabled - fix cross-tenant request failures (#5408)
* Transaction with conditional update fails if SearchNarrowingInterceptor is registered and partitioning is enabled - fix and tests added
* removed unused alias from SQL query of mdm-clear (#5416)
* Issue 5418 support Boolean class return type in BaseInterceptorService (#5421)
* Enable child classes to use Boolean class return type
* spotless
---------
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
* If AutoInflateBinaries is enabled, binaries are created on the disk only for the first resource entry of the bundle (#5420)
* If AutoInflateBinaries is enabled, binaries created on disk by bundled requests are created only for the first resource entry - fix
* Revert "Issue 5418 support Boolean class return type in BaseInterceptorService (#5421)" (#5423)
This reverts commit 4e295a59fb.
Co-authored-by: Nathan Doef <nathaniel.doef@smilecdr.com>
* Use new FHIR_ID column for sorting (#5405)
* Sort `_id` using new FHIR_ID column.
* Fix old tests that put client-assigned ids first.
* Better indexing for sort
* Bump core to 6.1.2.2 (#5425)
* Bump core to 6.1.2.1
Patch release that uses https for primary org.hl7.fhir.core package server
* Bump core to 6.1.2.2
* Make sure to always return a value for Boolean class return type. (#5424)
Implement change in a non-disruptive way for overriders
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
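A hedged sketch of the Boolean-return-type support described in #5421/#5424 (this is not the real BaseInterceptorService code; the "null or non-Boolean means proceed" default is an assumption for illustration). The point is that a hook method may declare either primitive boolean or boxed Boolean and the dispatcher still always produces a definite value.

// Illustrative sketch only.
import java.lang.reflect.Method;

class HookDispatchSketch {

    boolean invokeHook(Object theInterceptor, Method theHook, Object... theArgs) throws Exception {
        Object result = theHook.invoke(theInterceptor, theArgs);
        if (result instanceof Boolean) {
            // works whether the hook declares primitive boolean or boxed Boolean
            return (Boolean) result;
        }
        // void hooks, other return types, or a null Boolean all fall through to "continue processing"
        return true;
    }
}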
* Add non-standard __pid SP for breaking ties cheaply during sorts. (#5428)
Add a non-standard __pid SP.
* Review changes for new _pid SP. (#5430)
Change name to _pid to match our standard and add warning.
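For illustration only (a made-up request, not part of this change): adding the parameter as a final sort key keeps page boundaries stable when the primary sort values tie, e.g. GET [base]/Observation?_sort=date,_pid.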
* Fix VersionCanonicalizer conversion from R5 into DSTU2 for CapabilityStatement, Parameters and StructuredDefinition (#5432)
* Fix VersionCanonicalizer conversion from R5 into DSTU2 for CapabilityStatement, Parameters and StructuredDefinition.
* Fix spotless issue
* CVEs for 6.10.0 (#5433)
* Bump jetty
* Bump okio-jvm
* 8.2.0 mysql connector
* Jena and elastic bumps
* Fix test
* 5412 - POST Bundle on partition shows incorrect response.link (#5413)
* Initial fix and unit test provided
* spotless check
* Made relevant changes to make solution version agnostic
* relevant logic changes made
* spotless changes made
* New logic added to fix failing test cases
* formatting
* New logic to make the function more robust
* spotless checks
* Left a trailing slash in the tests
* Made relevant test changes and changed logic
* spotless changes
* Update hapi-fhir-docs/src/main/resources/ca/uhn/hapi/fhir/changelog/6_10_0/5412-during-partition-fullUrl-not-shown-in-response.yaml
changing changelog
Co-authored-by: volodymyr-korzh <132366313+volodymyr-korzh@users.noreply.github.com>
* Formatting requirements
---------
Co-authored-by: volodymyr-korzh <132366313+volodymyr-korzh@users.noreply.github.com>
* Resolve: We don't have guaranteed subscription delivery if a resource is too large (#5414)
* first fix
* - added the ability to handle null payload to SubscriptionDeliveringMessageSubscriber and SubscriptionDeliveringEmailSubscriber
- refactored code to reduce repeated code
- cleaned unnecessary comments and reformatted files
* Changed myResourceModifiedMessagePersistenceSvc to be autowired
* removed unused import
* added error handling when inflating the message in the email and message delivery subscribers
* reformatted code
* Fixing subscription tests with mocked IResourceModifiedMessagePersistenceSvc
* Changes by gary
* Reformatted file
* fixed failed tests
* implemented test for message and email delivery subscriber. Fixed logical error. Reformatted File.
* - implemented IT
- fixed logical error
- added changelog
* fix for cdr tests; NOTE: this assumes that inflating the message from the database will always succeed in the tests that use SynchronousSubscriptionMatcherInterceptor
* resolve code review comments
* reformatted files
* fixed tests
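A rough sketch of the delivery-side handling this block describes. The types below are stand-ins for ResourceModifiedMessage and IResourceModifiedMessagePersistenceSvc, and the inflate() operation name is an assumption; only the shape of the fix comes from the commit messages: when a resource was too large to ship on the broker, the delivering subscriber re-loads ("inflates") the payload from the database before delivery, and raises an error instead of silently dropping the notification if that also fails.

// Illustrative sketch only, with made-up stand-in types.
class SubscriptionDeliverySketch {

    interface PersistedMessageSvc {
        Message inflate(Message theMessage);
    }

    static class Message {
        String myPayload; // null when the resource was too large to enqueue
    }

    private final PersistedMessageSvc myPersistenceSvc;

    SubscriptionDeliverySketch(PersistedMessageSvc theSvc) {
        myPersistenceSvc = theSvc;
    }

    void deliver(Message theMsg) {
        if (theMsg.myPayload == null) {
            try {
                theMsg = myPersistenceSvc.inflate(theMsg);
            } catch (RuntimeException e) {
                throw new IllegalStateException("Could not inflate subscription message for delivery", e);
            }
        }
        send(theMsg);
    }

    private void send(Message theMsg) {
        // existing email / message delivery path
    }
}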
* Fix for failing IT test in jpaserver-starter (#5435)
Co-authored-by: dotasek <dotasek.dev@gmail.com>
* wip
* Bump jackson databind
* Pin Version
* Ignored duplicate classes
* Updating version to: 6.10.1 post release.
* Fix pom
* Skip remote nexus
* make release faster
* Updating version to: 6.10.1 post release.
* remove skiptests
* Oracle create index migration recovery (#5511)
* CLI tool command migrate-database executing in dry-run mode inserts entries into table FLY_HFJ_MIGRATION (#5487)
* initial test
* Solution with changelog.
* making spotless hapi
* addressing comments from code reviews
* making the test better.
* addressing code review comment and adding test.
---------
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
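For context, the command under discussion is invoked roughly like this; the connection values are placeholders and the exact dry-run flag spelling should be checked against the CLI help:

hapi-fhir-cli migrate-database -d ORACLE_12C -u "jdbc:oracle:thin:@//db-host:1521/hapi" -n migration_user -p migration_password --dry-run

The bug was that even such a dry run wrote rows into FLY_HFJ_MIGRATION; after the fix, dry-run only reports what it would do.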
* added changelog, fix 6.10.0's version.yaml
* Fix bad resource id migration (#5548)
* Fix bad migration of fhir id.
Fix the original migration ForceIdMigrationCopyTask.
Also add another migration ForceIdMigrationFixTask to trim the fhir id to correct the data.
* Bump to 6.10.1-SNAPSHOT
* Merge the fhir_id copy migration with the fhir_id fix to avoid traversing hfj_resource twice. (#5552)
Turn off the original migration ForceIdMigrationCopyTask.
Fix it anyway so nobody copies bad code.
Also add another migration ForceIdMigrationFixTask that fixes the bad data, as well as fills in the fhir_id column for new migrations.
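The root cause, visible in the ForceIdMigrationCopyTask diff below, is the SQL cast used when copying server-assigned ids: cast(res_id as char(64)) yields a fixed-width, blank-padded value on Oracle (so id 123 becomes "123" followed by 61 spaces), whereas cast(res_id as varchar(64)) preserves the value as-is. The fix task therefore re-runs the copy with varchar and wraps existing values in trim().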
---------
Co-authored-by: michaelabuckley <michaelabuckley@gmail.com>
Co-authored-by: Martha Mitran <martha.mitran@smilecdr.com>
Co-authored-by: volodymyr-korzh <132366313+volodymyr-korzh@users.noreply.github.com>
Co-authored-by: TynerGjs <132295567+TynerGjs@users.noreply.github.com>
Co-authored-by: dotasek <david.otasek@smilecdr.com>
Co-authored-by: Steve Corbett <137920358+steve-corbett-smilecdr@users.noreply.github.com>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: justindar <justin.dar@smilecdr.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
Co-authored-by: Brenin Rhodes <brenin@alphora.com>
Co-authored-by: Justin McKelvy <60718638+Capt-Mac@users.noreply.github.com>
Co-authored-by: jmarchionatto <60409882+jmarchionatto@users.noreply.github.com>
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
Co-authored-by: Nathan Doef <nathaniel.doef@smilecdr.com>
Co-authored-by: LalithE <132382565+LalithE@users.noreply.github.com>
Co-authored-by: dotasek <dotasek.dev@gmail.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>
Co-authored-by: Long Ma <long@smilecdr.com>
This commit is contained in: parent 1f7b605a18, commit db581dd158
@@ -5,6 +5,9 @@ import ca.uhn.fhir.jpa.migrate.JdbcUtils;
import ca.uhn.fhir.jpa.migrate.SchemaMigrator;
import ca.uhn.fhir.jpa.migrate.dao.HapiMigrationDao;
import ca.uhn.fhir.jpa.migrate.entity.HapiMigrationEntity;
import ca.uhn.fhir.jpa.migrate.SchemaMigrator;
import ca.uhn.fhir.jpa.migrate.dao.HapiMigrationDao;
import ca.uhn.fhir.jpa.migrate.entity.HapiMigrationEntity;
import ca.uhn.fhir.jpa.util.RandomTextUtils;
import ca.uhn.fhir.system.HapiSystemProperties;
import com.google.common.base.Charsets;
@@ -0,0 +1,5 @@
### Major Database Change

This release contains a migration that covers every resource.
This may take several minutes on a larger system (e.g. 10 minutes for 100 million resources).
For zero-downtime, or for larger systems, we recommend you upgrade the schema using the CLI tools.

@@ -0,0 +1,3 @@
---
release-date: "2023-08-31"
codename: "Zed"

@@ -2,5 +2,6 @@
type: fix
issue: 5486
jira: SMILE-7457
backport: 6.10.1
title: "Previously, testing database migration with cli migrate-database command in dry-run mode would insert in the
  migration task table. The issue has been fixed."

@@ -1,6 +1,7 @@
---
type: fix
issue: 5511
backport: 6.10.1
title: "Previously, when creating an index as a part of a migration, if the index already existed with a different name
  on Oracle, the migration would fail. This has been fixed so that the create index migration task now recovers with
  a warning message if the index already exists with a different name."

@@ -0,0 +1,6 @@
---
type: fix
issue: 5546
backport: 6.10.1
title: "A database migration added trailing spaces to server-assigned resource ids.
  This fix removes the bad migration, and adds another migration to fix the errors."
@@ -29,6 +29,7 @@ import ca.uhn.fhir.jpa.migrate.taskdef.CalculateHashesTask;
import ca.uhn.fhir.jpa.migrate.taskdef.CalculateOrdinalDatesTask;
import ca.uhn.fhir.jpa.migrate.taskdef.ColumnTypeEnum;
import ca.uhn.fhir.jpa.migrate.taskdef.ForceIdMigrationCopyTask;
import ca.uhn.fhir.jpa.migrate.taskdef.ForceIdMigrationFixTask;
import ca.uhn.fhir.jpa.migrate.tasks.api.BaseMigrationTasks;
import ca.uhn.fhir.jpa.migrate.tasks.api.Builder;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;

@@ -140,10 +141,19 @@ public class HapiFhirJpaMigrationTasks extends BaseMigrationTasks<VersionEnum> {

        // Move forced_id constraints to hfj_resource and the new fhir_id column
        // Note: we leave the HFJ_FORCED_ID.IDX_FORCEDID_TYPE_FID index in place to support old writers for a while.
        version.addTask(new ForceIdMigrationCopyTask(version.getRelease(), "20231018.1"));
        version.addTask(new ForceIdMigrationCopyTask(version.getRelease(), "20231018.1").setDoNothing(true));

        Builder.BuilderWithTableName hfjResource = version.onTable("HFJ_RESOURCE");
        hfjResource.modifyColumn("20231018.2", "FHIR_ID").nonNullable();
        // commented out to make numeric space for the fix task below.
        // This constraint can't be enabled until the column is fully populated, and the shipped version of 20231018.1
        // was broken.
        // hfjResource.modifyColumn("20231018.2", "FHIR_ID").nonNullable();

        // this was inserted after the release.
        version.addTask(new ForceIdMigrationFixTask(version.getRelease(), "20231018.3"));

        // added back in place of 20231018.2. If 20231018.2 already ran, this is a no-op.
        hfjResource.modifyColumn("20231018.4", "FHIR_ID").nonNullable();

        hfjResource.dropIndex("20231027.1", "IDX_RES_FHIR_ID");
        hfjResource

@@ -187,6 +197,8 @@ public class HapiFhirJpaMigrationTasks extends BaseMigrationTasks<VersionEnum> {
                        "SP_URI".toLowerCase()),
                "Column HFJ_SPIDX_STRING.SP_VALUE_NORMALIZED already has a collation of 'C' so doing nothing");
        }

        version.addTask(new ForceIdMigrationFixTask(version.getRelease(), "20231213.1"));
    }

    protected void init680() {
@@ -19,16 +19,18 @@ import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ArgumentsSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.testcontainers.junit.jupiter.Testcontainers;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.Set;

import static ca.uhn.fhir.jpa.embedded.HapiEmbeddedDatabasesExtension.FIRST_TESTED_VERSION;
import static ca.uhn.fhir.jpa.migrate.SchemaMigrator.HAPI_FHIR_MIGRATION_TABLENAME;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

@@ -75,7 +77,7 @@ public class HapiSchemaMigrationTest {

    VersionEnum[] allVersions = VersionEnum.values();

    Set<VersionEnum> dataVersions = Set.of(
    List<VersionEnum> dataVersions = List.of(
        VersionEnum.V5_2_0,
        VersionEnum.V5_3_0,
        VersionEnum.V5_4_0,

@@ -105,6 +107,8 @@ public class HapiSchemaMigrationTest {
        new HapiForeignKeyIndexHelper()
                .ensureAllForeignKeysAreIndexed(dataSource);
        }

        verifyForcedIdMigration(dataSource);
    }

    private static void migrate(DriverTypeEnum theDriverType, DataSource dataSource, HapiMigrationStorageSvc hapiMigrationStorageSvc, VersionEnum from, VersionEnum to) throws SQLException {

@@ -123,6 +127,19 @@ public class HapiSchemaMigrationTest {
        schemaMigrator.migrate();
    }

    /**
     * For bug https://github.com/hapifhir/hapi-fhir/issues/5546
     */
    private void verifyForcedIdMigration(DataSource theDataSource) throws SQLException {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(theDataSource);
        @SuppressWarnings("DataFlowIssue")
        int nullCount = jdbcTemplate.queryForObject("select count(1) from hfj_resource where fhir_id is null", Integer.class);
        assertEquals(0, nullCount, "no fhir_id should be null");
        int trailingSpaceCount = jdbcTemplate.queryForObject("select count(1) from hfj_resource where fhir_id <> trim(fhir_id)", Integer.class);
        assertEquals(0, trailingSpaceCount, "no fhir_id should contain a space");
    }


    @Test
    public void testCreateMigrationTableIfRequired() throws SQLException {
        // Setup
@@ -69,7 +69,7 @@ public class ForceIdMigrationCopyTask extends BaseTask {
        "update hfj_resource " + "set fhir_id = coalesce( "
                + // use first non-null value: forced_id if present, otherwise res_id
                " (select f.forced_id from hfj_forced_id f where f.resource_pid = res_id), "
                + " cast(res_id as char(64)) "
                + " cast(res_id as varchar(64)) "
                + " ) "
                + "where fhir_id is null "
                + "and res_id >= ? and res_id < ?",
@@ -0,0 +1,121 @@
/*-
 * #%L
 * HAPI FHIR Server - SQL Migration
 * %%
 * Copyright (C) 2014 - 2023 Smile CDR, Inc.
 * %%
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * #L%
 */
package ca.uhn.fhir.jpa.migrate.taskdef;

import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;
import org.apache.commons.lang3.tuple.Pair;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;

import java.sql.SQLException;

/**
 * Fix for bad version of {@link ForceIdMigrationCopyTask}
 * The earlier migration had used a cast to char instead of varchar, which is space-padded on Oracle.
 * This migration includes the copy action, but also adds a trim() call to fix up the bad server-assigned ids.
 */
public class ForceIdMigrationFixTask extends BaseTask {
    private static final Logger ourLog = LoggerFactory.getLogger(ForceIdMigrationFixTask.class);

    public ForceIdMigrationFixTask(String theProductVersion, String theSchemaVersion) {
        super(theProductVersion, theSchemaVersion);
    }

    @Override
    public void validate() {
        // no-op
    }

    @Override
    protected void doExecute() throws SQLException {
        logInfo(ourLog, "Starting: migrate fhir_id from hfj_forced_id to hfj_resource.fhir_id");

        JdbcTemplate jdbcTemplate = newJdbcTemplate();

        Pair<Long, Long> range = jdbcTemplate.queryForObject(
                "select min(RES_ID), max(RES_ID) from HFJ_RESOURCE",
                (rs, rowNum) -> Pair.of(rs.getLong(1), rs.getLong(2)));

        if (range == null || range.getLeft() == null) {
            logInfo(ourLog, "HFJ_RESOURCE is empty. No work to do.");
            return;
        }

        // run update in batches.
        int rowsPerBlock = 50; // hfj_resource has roughly 50 rows per 8k block.
        int batchSize = rowsPerBlock * 2000; // a few thousand IOPS gives a batch size around a second.
        ourLog.info(
                "About to migrate ids from {} to {} in batches of size {}",
                range.getLeft(),
                range.getRight(),
                batchSize);
        for (long batchStart = range.getLeft(); batchStart <= range.getRight(); batchStart = batchStart + batchSize) {
            long batchEnd = batchStart + batchSize;
            ourLog.info("Migrating client-assigned ids for pids: {}-{}", batchStart, batchEnd);

            /*
             We have several cases. Two require no action:
             1. client-assigned id, with correct value in fhir_id and row in hfj_forced_id
             2. server-assigned id, with correct value in fhir_id, no row in hfj_forced_id
             And three require action:
             3. client-assigned id, no value in fhir_id, but row in hfj_forced_id
             4. server-assigned id, no value in fhir_id, and row in hfj_forced_id
             5. bad migration - server-assigned id, with wrong space-padded value in fhir_id, no row in hfj_forced_id
            */

            executeSql(
                    "hfj_resource",
                    "update hfj_resource " +
                            // coalesce is varargs and chooses the first non-null value, like ||
                            " set fhir_id = coalesce( "
                            +
                            // case 5.
                            " trim(fhir_id), "
                            +
                            // case 3
                            " (select f.forced_id from hfj_forced_id f where f.resource_pid = res_id), "
                            +
                            // case 4 - use pid as fhir_id
                            " cast(res_id as varchar(64)) "
                            + " ) "
                            +
                            // avoid useless updates on engines that don't check
                            // skip case 1, 2. Only check 3,4,5
                            " where (fhir_id is null or fhir_id <> trim(fhir_id)) "
                            +
                            // chunk range.
                            " and res_id >= ? and res_id < ?",
                    batchStart,
                    batchEnd);
        }
    }

    @Override
    protected void generateHashCode(HashCodeBuilder theBuilder) {
        // no-op - this is a singleton.
    }

    @Override
    protected void generateEquals(EqualsBuilder theBuilder, BaseTask theOtherObject) {
        // no-op - this is a singleton.
    }
}
@@ -71,7 +71,7 @@
            <version>${project.version}</version>
            <optional>true</optional>
        </dependency>


        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>