Rel 6 2 4 mergeback (#4406)
* jm wrong bundle entry url (#4213)
* Bug test
* here you go
* Generate relative URIs for bundle entry.request.url, as specified
* Point jira issue in changelog
* Adjust tests to fixes
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
Co-authored-by: Tadgh <garygrantgraham@gmail.com>
* improved logging (#4217)
Co-authored-by: Ken Stevens <ken@smilecdr.com>
* Rel 6 1 3 mergeback (#4215)
* Bump for CVE (#3856)
* Bump for CVE
* Bump spring-data version
* Fix compile
* Cut over to spring bom
* Bump to RC1
* remove RC
* do not constrain reindex for common SP updates (#3876)
* only fast-track jobs with exactly one chunk (#3879)
* Fix illegalstateexception when an exception is thrown during stream response (#3882)
* Finish up changelog, minor refactor
* reset buffer only
* Hack for some replacements
* Failure handling
* wip
* Fixed the issue (#3845)
* Fixed the issue
* Changelog modification
* Changelog modification
* Implemented seventh character extended code and the corresponding dis… (#3709)
* Implemented seventh character extended code and the corresponding display
* Modifications
* Changes on previous test according to modifications made in ICD10-CM XML file
* Subscription sending delete events being skipped (#3888)
* fixed bug and added test
* refactor
* Update for CVE (#3895)
* updated pointcuts to work as intended (#3903)
* updated pointcuts to work as intended
* added changelog
* review fixes
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
* 3904 during $delete expunge job hibernate search indexed documents are left orphaned (#3905)
* Add test and implementation
* Add changelog
* 3899 code in limits (#3901)
* Add implementation, changelog, test
* Update hapi-fhir-jpaserver-test-utilities/src/test/java/ca/uhn/fhir/jpa/provider/r4/ResourceProviderR4Test.java
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
* 3884 overlapping searchparameter undetected rel 6 1 (#3909)
* Applying all changes from previous dev branch to current one pointing to rel_6_1
* Fixing merge conflict related to Msg.code value.
* Fixing Msg.code value.
* Making checkstyle happy.
* Making sure that all tests are passing.
* Passing all tests after fixing Msg.code
* Passing all tests.
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
* 3745 - fixed NPE for bundle with duplicate conditional create resourc… (#3746)
* 3745 - fixed NPE for bundle with duplicate conditional create resources and a conditional delete
* created unit test for skip of delete operation while processing duplicating create entries
* moved unit test to FhirSystemDaoR4Test
* 3379 mdm fixes (#3906)
* added MdmLinkCreateSvcimplTest
* fixed creating mdm-link not setting the resource type correctly
* fixed a bug where ResourcePersistenceId was being duplicated instead of passed on
* Update hapi-fhir-jpaserver-mdm/src/test/java/ca/uhn/fhir/jpa/mdm/svc/MdmLinkCreateSvcImplTest.java
Change order of tests such that assertEquals takes expected value then actual value
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
* added changelog, also changed a setup function in test to beforeeach
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
* Fix to the issue (#3855)
* Fix to the issue
* Progress
* fixed the issue
* Addressing suggestions
* add response status code to MethodOutcome
* Addressing suggestions
Co-authored-by: Ken Stevens <ken@smilecdr.com>
* Fix for caching appearing broken in batch2 for bulkexport jobs (#3912)
* Respect caching in bulk export, fix bug with completed date on empty jobs
* add changelog
* Add impl
* Add breaking test
* Complete failing test
* more broken tests
* Fix more tests
* Fix paging bug
* Fix another brittle test
* 3915 do not collapse rules with filters (#3916)
* do not attempt to merge compartment permissions with filters
* changelog
* Rename to IT for concurrency problems
Co-authored-by: Tadgh <garygrantgraham@gmail.com>
* Version bump
* fix $mdm-submit output (#3917)
Co-authored-by: Ken Stevens <ken@smilecdr.com>
* Gl3407 bundle offset size (#3918)
* begin with failing test
* fixed
* change log
* rollback default count change and corresponding comments
Co-authored-by: Ken Stevens <ken@smilecdr.com>
* Offset interceptor now only works for external calls
* Initialize some beans (esp interceptors) later in the boot process so they don't slow down startup.
* do not reindex searchparam jobs on startup
* Fix oracle non-enterprise attempting online index add (#3925)
* 3922 delete expunge large dataset (#3923)
* lower batchsize of delete requests so that we do not get sql exceptions
* blah
* fix test
* updated tests to not fail
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
* add index
* Fix up column grab
* Revert offset mode change
* Revert fix for null/system request details checks for reindex purposes
* Fix bug and add test for SP Validating Interceptor (#3930)
* wip
* Fix uptests
* Fix index online test
* Fix SP validating interceptor logic
* Updating version to: 6.1.1 post release.
* fix compile error
* Deploy to sonatype (#3934)
* adding sonatype profile to checkstyle module
* adding sonatype profile to tinder module
* adding sonatype profile to base pom
* adding final deployToSonatype profile
* wip
* Revert version enum
* Updating version to: 6.1.1 post release.
* Add test, changelog, and implementation
* Add backport info
* Create failing test
* Implemented the fix, fixed existing unit tests
* added changelog
* added test case for no filter, exclude 1 patient
* wip
* Add backport info
* Add info of new version
* Updating version to: 6.1.2 post release.
* bump info and backport for 6.1.2
* Bump for hapi
* Implement bug fixes, add new tests (#4022)
* Implement bug fixes, add new tests
* tidy
* Tidy
* refactor for cleaning
* More tidying
* Lower logging
* Split into nested tests, rename, add todos
* Typo
* Code review
* add backport info
* Updating version to: 6.1.3 post release.
* Updating version to: 6.1.3 post release.
* removed duplicate mention of ver 6.1.3 in versionEnum
* backport pr 4101
* mdm message key (#4111)
* begin with failing test
* fixed 2 tests
* fix tests
* fix tests
* change log
Co-authored-by: Ken Stevens <ken@smilecdr.com>
* backport 6.1.3 docs changes
* fixed typo on doc backport message
* fix test breaking
* Updating version to: 6.1.4 post release.
* wip
Co-authored-by: JasonRoberts-smile <85363818+JasonRoberts-smile@users.noreply.github.com>
Co-authored-by: Qingyixia <106992634+Qingyixia@users.noreply.github.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: kateryna-mironova <107507153+kateryna-mironova@users.noreply.github.com>
Co-authored-by: longma1 <32119004+longma1@users.noreply.github.com>
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>
Co-authored-by: Steven Li <steven@smilecdr.com>
* pin okio-jvm for kotlin vuln (#4216)
* Fix UrlUtil.unescape() by not escaping "+" to " " if this is an "application/..." _outputFormat. (#4220)
* First commit: Failing unit test and a TODO with a vague idea of where the bug happens.
* Don't escape "+" in a URL GET parameter if it starts with "application".
* Remove unnecessary TODO.
* Add changelog.
* Code review feedback on naming. Also, make logic more robust by putting plus and should escape boolean && in parens.
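The fix above (#4220) hinges on one rule: a literal "+" in a URL GET parameter is normally a form-encoded space, but inside a MIME-type value such as `_outputFormat=application/fhir+ndjson` it must survive decoding. A minimal standalone sketch of that branch, assuming a hypothetical helper (this is not HAPI's actual `UrlUtil` implementation):

```java
public class PlusUnescapeSketch {
	// Hypothetical helper: decode %-escapes, but keep "+" literal when the
	// value looks like a MIME type (e.g. "application/fhir+ndjson").
	static String unescape(String theValue) {
		boolean isMimeType = theValue.startsWith("application/");
		StringBuilder out = new StringBuilder();
		for (int i = 0; i < theValue.length(); i++) {
			char c = theValue.charAt(i);
			if (c == '%' && i + 2 < theValue.length()) {
				// decode a %XX escape sequence
				out.append((char) Integer.parseInt(theValue.substring(i + 1, i + 3), 16));
				i += 2;
			} else if (c == '+' && !isMimeType) {
				out.append(' '); // form-encoded space
			} else {
				out.append(c);
			}
		}
		return out.toString();
	}

	public static void main(String[] args) {
		System.out.println(unescape("application/fhir+ndjson")); // "+" is preserved
		System.out.println(unescape("hello+world"));             // "+" becomes a space
	}
}
```

In the real library the check happens during parameter unescaping; the sketch only illustrates the conditional the commit adds.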
* Ks 20221031 migration lock (#4224)
* started design
* complete with tests
* changelog
* cleanup
* typo
Co-authored-by: Ken Stevens <ken@smilecdr.com>
* 4207-getpagesoffset-set-to-total-number-of-resources-results-in-inconsistent-amount-of-entries-when-requests-are-sent-consecutively (#4209)
* Added test
* Added solution
* Changelog
* Changes made based on comments
* Fix bug with MDM submit
* fix
* Version bump
* 4234 consent in conjunction with versionedapiconverterinterceptor fails (#4236)
* Add constant for interceptor
* add test, changelog
* Allow Batch2 transition from ERRORED to COMPLETE (#4242)
* Allow Batch2 transition from ERRORED to COMPLETE
* Add changelog
* Test fix
Co-authored-by: James Agnew <james@jamess-mbp.lan>
* 3685 When bulk exporting, if no resource type param is provided, defa… (#4233)
* 3685 When bulk exporting, if no resource type param is provided, default to all registered types.
* Update test case.
* Cleaned up changelog.
* Added test case for multiple resource types.
* Added failing test case for not returning Binary resource.
* Refactor solution.
Co-authored-by: kylejule <kyle.jule@smilecdr.com>
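The #4233 commits above make a bulk export with no resource-type parameter default to every registered type, while never returning `Binary`. A hedged sketch of that selection rule (illustrative names, not the actual HAPI implementation):

```java
import java.util.Set;
import java.util.TreeSet;

public class DefaultExportTypesSketch {
	// If the client supplied no _type parameter, export every registered
	// resource type except Binary (which is never bulk-exported).
	static Set<String> resolveTypes(Set<String> theRequested, Set<String> theRegistered) {
		if (theRequested == null || theRequested.isEmpty()) {
			Set<String> all = new TreeSet<>(theRegistered);
			all.remove("Binary");
			return all;
		}
		return theRequested;
	}

	public static void main(String[] args) {
		Set<String> registered = Set.of("Patient", "Observation", "Binary");
		System.out.println(resolveTypes(Set.of(), registered));          // defaults, minus Binary
		System.out.println(resolveTypes(Set.of("Patient"), registered)); // explicit request wins
	}
}
```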
* Add next version
* bulk export permanently reusing cached results (#4249)
* Add test, fix bug, add changelog
* minor refactor
* Fix broken test
* Smile 4892 DocumentReference Attachment url (#4237)
* failing test
* fix
* increase test Attachment url size to new max
* decrease limit to 500
* ci fix
Co-authored-by: nathaniel.doef <nathaniel.doef@smilecdr.com>
* Overlapping SearchParameter with the same code and base are not allowed (#4253)
* Overlapping SearchParameter with the same code and base are not allowed
* Fix existing tests according to changes
* Cleanup dead code and remove related tests
* Version Bump
* ignore misfires in quartz
* Allowing Failures On Index Drops (#4272)
* Allowing failure on index drops.
* Adding changeLog
* Modification to changelog following code review.
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
* Revert "ignore misfires in quartz"
This reverts commit 15c74a46bc.
* Ignore misfires in quartz (#4273)
* Reindex Behaviour Issues (#4261)
* fixmes for ND
* address FIXME comments
* fix tests
* increase max retries
* fix resource id chunking logic
* fix test
* add modular patient
* change log
* version bump
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: nathaniel.doef <nathaniel.doef@smilecdr.com>
* Set official Version
* license
* Fix up numbers
* Fix up numbers
* Update numbers
* wip
* fix numbers
* Fix test:
* Fix more tests
* TEMP FIX FOR BUILD
* wip
* Updating version to: 6.2.1 post release.
* Add a whack of logging
* wip
* add implementation
* wip and test
* wip
* last-second-fetch
* expose useful method
* remove 10000 limit
* Strip some logging
* Fix up logging
* Unpublicize method
* Fix version
* Make minor changes
* once again on 6.2.1
* re-add version enum
* add folder
* fix test
* DIsable busted test
* Disable more broken tests
* Only submit queued chunks
* Quiet log
* Fix wrong pinned version
* Updating version to: 6.2.2 post release.
* fixes for https://github.com/hapifhir/hapi-fhir/issues/4277 and https… (#4291)
* fixes for https://github.com/hapifhir/hapi-fhir/issues/4277 and https://github.com/hapifhir/hapi-fhir/issues/4276
* Credit for #4291
Co-authored-by: James Agnew <jamesagnew@gmail.com>
* backport and changelog for 6.2.2
* Updating version to: 6.2.3 post release.
* fix https://simpaticois.atlassian.net/browse/SMILE-5781
* Version bump to 6.2.3-SNAPSHOT
* Auto retry on MDM Clear conflicts (#4398)
* Auto-retry mdm-clear on conflict
* Add changelog
* Build fix
* Disable failing test
* Update to 6.2.3 again
* Update license dates
* Don't fail on batch2 double delivery (#4400)
* Don't fail on Batch2 double delivery
* Add changelog
* Update docker for release pipeline
* Updating version to: 6.2.4 post release.
* Add test and implementation to fix potential NPE in pre-show resources (#4388)
* Add test and implementation to fix potential NPE in pre-show resources
* add test
* WIP getting identical test scenario
* More robust solution
* Finalize Code
* Add changelog, move a bunch of changelogs
* Remove not needed test
* Minor refactor and reporting
* Fix up mergeback
* update backport info
* update backport info
* Updating version to: 6.2.5 post release.
* please
* fix test
Co-authored-by: jmarchionatto <60409882+jmarchionatto@users.noreply.github.com>
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: JasonRoberts-smile <85363818+JasonRoberts-smile@users.noreply.github.com>
Co-authored-by: Qingyixia <106992634+Qingyixia@users.noreply.github.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: kateryna-mironova <107507153+kateryna-mironova@users.noreply.github.com>
Co-authored-by: longma1 <32119004+longma1@users.noreply.github.com>
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>
Co-authored-by: Steven Li <steven@smilecdr.com>
Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
Co-authored-by: karneet1212 <112980019+karneet1212@users.noreply.github.com>
Co-authored-by: James Agnew <jamesagnew@gmail.com>
Co-authored-by: James Agnew <james@jamess-mbp.lan>
Co-authored-by: KGJ-software <39975592+KGJ-software@users.noreply.github.com>
Co-authored-by: kylejule <kyle.jule@smilecdr.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: nathaniel.doef <nathaniel.doef@smilecdr.com>
Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>
This commit is contained in: parent f2f29a1a32, commit d70d813249
@@ -107,6 +107,10 @@ public enum VersionEnum {
 	V6_1_4,
 	V6_2_0,
 	V6_2_1,
+	V6_2_2,
+	V6_2_3,
+	V6_2_4,
+	V6_2_5,
 	// Dev Build
 	V6_3_0,
 	V6_4_0
@@ -0,0 +1 @@
+This version fixes a bug with 6.2.0 and previous releases wherein batch jobs that created very large chunk counts could occasionally fail to submit a small proportion of chunks.
@@ -0,0 +1,3 @@
+---
+release-date: "2022-11-25"
+codename: "Vishwa"
@@ -0,0 +1 @@
+
@@ -0,0 +1,3 @@
+---
+release-date: "2023-01-05"
+codename: "Vishwa"
@@ -0,0 +1 @@
+
@@ -0,0 +1,3 @@
+---
+release-date: "2023-01-04"
+codename: "Vishwa"
@@ -1,5 +1,6 @@
 ---
 type: add
 issue: 4291
+backport: 6.2.2
 title: "The NPM package installer did not support installing on R4B repositories. Thanks to Jens Kristian Villadsen
 for the pull request!"
@@ -2,4 +2,5 @@
 type: fix
 issue: 4388
 jira: SMILE-5834
+backport: 6.2.4
 title: "Fixed an edge case during a Read operation where hooks could be invoked with a null resource. This could cause a NullPointerException in some cases."
@@ -0,0 +1,5 @@
+---
+type: fix
+backport: 6.2.3
+title: "The $mdm-clear operation sometimes failed with a constraint error when running in a heavily
+  multithreaded environment. This has been fixed."
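The $mdm-clear fix described in the changelog above works by retrying the clear step when a version-conflict exception surfaces under concurrent writes. A minimal, hypothetical sketch of bounded retry on conflict (the class and exception names are illustrative, not HAPI's API):

```java
import java.util.function.Supplier;

public class RetryOnConflictSketch {
	// Hypothetical stand-in for ResourceVersionConflictException.
	static class ConflictException extends RuntimeException {}

	// Run theTask, retrying up to theMaxRetries additional times when a conflict is thrown.
	static <T> T runWithRetry(Supplier<T> theTask, int theMaxRetries) {
		for (int attempt = 0; ; attempt++) {
			try {
				return theTask.get();
			} catch (ConflictException e) {
				if (attempt >= theMaxRetries) {
					throw e; // out of retries, surface the conflict
				}
			}
		}
	}

	public static void main(String[] args) {
		int[] calls = {0};
		// Fails once with a conflict, then succeeds - mirroring the scenario where
		// the first mdm-clear attempt hits a constraint error and the retry completes.
		String result = runWithRetry(() -> {
			if (calls[0]++ == 0) {
				throw new ConflictException();
			}
			return "cleared";
		}, 3);
		System.out.println(result + " after " + calls[0] + " calls"); // cleared after 2 calls
	}
}
```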
@@ -0,0 +1,5 @@
+---
+type: fix
+issue: 4400
+title: "When Batch2 work notifications are received twice (e.g. because the notification engine double delivered)
+  an unrecoverable failure could occur. This has been corrected."
@@ -156,6 +156,7 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
+
 	public static final String BASE_RESOURCE_NAME = "resource";
 	private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(BaseHapiFhirResourceDao.class);
 
 	@Autowired
 	protected PlatformTransactionManager myPlatformTransactionManager;
 	@Autowired(required = false)
@@ -4,6 +4,7 @@ import ca.uhn.fhir.interceptor.api.IInterceptorService;
 import ca.uhn.fhir.interceptor.model.RequestPartitionId;
 import ca.uhn.fhir.jpa.entity.MdmLink;
 import ca.uhn.fhir.jpa.entity.PartitionEntity;
+import ca.uhn.fhir.jpa.interceptor.UserRequestRetryVersionConflictsInterceptor;
 import ca.uhn.fhir.jpa.mdm.provider.BaseLinkR4Test;
 import ca.uhn.fhir.jpa.partition.IRequestPartitionHelperSvc;
 import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
@@ -14,8 +15,10 @@ import ca.uhn.fhir.mdm.api.MdmLinkSourceEnum;
 import ca.uhn.fhir.mdm.api.MdmMatchResultEnum;
 import ca.uhn.fhir.mdm.api.MdmQuerySearchParameters;
 import ca.uhn.fhir.mdm.api.paging.MdmPageRequest;
+import ca.uhn.fhir.mdm.batch2.clear.MdmClearStep;
 import ca.uhn.fhir.mdm.model.MdmTransactionContext;
 import ca.uhn.fhir.mdm.rules.config.MdmSettings;
+import ca.uhn.fhir.rest.server.exceptions.ResourceVersionConflictException;
 import ca.uhn.fhir.rest.server.interceptor.partition.RequestTenantPartitionInterceptor;
 import ca.uhn.fhir.rest.server.servlet.ServletRequestDetails;
 import org.hl7.fhir.instance.model.api.IBaseParameters;
@@ -37,6 +40,7 @@ import java.io.IOException;
 import java.math.BigDecimal;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
 
 import static ca.uhn.fhir.mdm.provider.MdmProviderDstu3Plus.DEFAULT_PAGE_SIZE;
 import static ca.uhn.fhir.mdm.provider.MdmProviderDstu3Plus.MAX_PAGE_SIZE;
@@ -59,6 +63,7 @@ public class MdmControllerSvcImplTest extends BaseLinkR4Test {
 	private Batch2JobHelper myBatch2JobHelper;
 	@Autowired
 	private MdmSettings myMdmSettings;
+	private UserRequestRetryVersionConflictsInterceptor myUserRequestRetryVersionConflictsInterceptor;
 	private final RequestTenantPartitionInterceptor myPartitionInterceptor = new RequestTenantPartitionInterceptor();
 
 	@Override
@@ -70,12 +75,16 @@ public class MdmControllerSvcImplTest extends BaseLinkR4Test {
 		myPartitionLookupSvc.createPartition(new PartitionEntity().setId(2).setName(PARTITION_2), null);
 		myInterceptorService.registerInterceptor(myPartitionInterceptor);
 		myMdmSettings.setEnabled(true);
+
+		myUserRequestRetryVersionConflictsInterceptor = new UserRequestRetryVersionConflictsInterceptor();
+		myInterceptorService.registerInterceptor(myUserRequestRetryVersionConflictsInterceptor);
 	}
 
 	@Override
 	@AfterEach
 	public void after() throws IOException {
 		myMdmSettings.setEnabled(false);
+		myInterceptorService.unregisterInterceptor(myUserRequestRetryVersionConflictsInterceptor);
 		myPartitionSettings.setPartitioningEnabled(false);
 		myInterceptorService.unregisterInterceptor(myPartitionInterceptor);
 		super.after();
@@ -160,6 +169,35 @@ public class MdmControllerSvcImplTest extends BaseLinkR4Test {
 		assertLinkCount(2);
 	}
 
+	@Test
+	public void testMdmClearWithWriteConflict() {
+		AtomicBoolean haveFired = new AtomicBoolean(false);
+		MdmClearStep.setClearCompletionCallbackForUnitTest(() -> {
+			if (haveFired.getAndSet(true) == false) {
+				throw new ResourceVersionConflictException("Conflict");
+			}
+		});
+
+		assertLinkCount(1);
+
+		RequestPartitionId requestPartitionId1 = RequestPartitionId.fromPartitionId(1);
+		RequestPartitionId requestPartitionId2 = RequestPartitionId.fromPartitionId(2);
+		createPractitionerAndUpdateLinksOnPartition(buildJanePractitioner(), requestPartitionId1);
+		createPractitionerAndUpdateLinksOnPartition(buildJanePractitioner(), requestPartitionId2);
+		assertLinkCount(3);
+
+		List<String> urls = new ArrayList<>();
+		urls.add("Practitioner");
+		IPrimitiveType<BigDecimal> batchSize = new DecimalType(new BigDecimal(100));
+		ServletRequestDetails details = new ServletRequestDetails();
+		details.setTenantId(PARTITION_2);
+		IBaseParameters clearJob = myMdmControllerSvc.submitMdmClearJob(urls, batchSize, details);
+		String jobId = ((StringType) ((Parameters) clearJob).getParameterValue("jobId")).getValueAsString();
+		myBatch2JobHelper.awaitJobCompletion(jobId);
+
+		assertLinkCount(2);
+	}
+
 	private class PartitionIdMatcher implements ArgumentMatcher<RequestPartitionId> {
 		private RequestPartitionId myRequestPartitionId;
@@ -1,7 +1,7 @@
 <configuration>
 	<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
 		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
-			<level>TRACE</level>
+			<level>INFO</level>
 		</filter>
 		<encoder>
 			<!--N.B use this pattern to remove timestamp/thread/level/logger information from logs during testing.<pattern>[%file:%line] %msg%n</pattern>-->
@@ -49,25 +49,6 @@
 		<appender-ref ref="STDOUT" />
 	</logger>
 
-	<!--
-	Configuration for MDM troubleshooting log
-	-->
-	<appender name="MDM_TROUBLESHOOTING" class="ch.qos.logback.core.rolling.RollingFileAppender">
-		<filter class="ch.qos.logback.classic.filter.ThresholdFilter"><level>DEBUG</level></filter>
-		<file>${smile.basedir}/log/mdm-troubleshooting.log</file>
-		<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
-			<fileNamePattern>${smile.basedir}/log/mdm-troubleshooting.log.%i.gz</fileNamePattern>
-			<minIndex>1</minIndex>
-			<maxIndex>9</maxIndex>
-		</rollingPolicy>
-		<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
-			<maxFileSize>5MB</maxFileSize>
-		</triggeringPolicy>
-		<encoder>
-			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
-		</encoder>
-	</appender>
-
 	<logger name="ca.uhn.fhir.log.mdm_troubleshooting" level="TRACE">
 		<appender-ref ref="MDM_TROUBLESHOOTING"/>
 	</logger>
@@ -41,6 +41,8 @@ import org.springframework.context.ApplicationContext;
 import javax.persistence.EntityManager;
 import java.util.List;
+
+import java.util.List;
 
 import static org.junit.jupiter.api.Assertions.assertEquals;
 import static org.junit.jupiter.api.Assertions.assertNotNull;
 import static org.junit.jupiter.api.Assertions.fail;
@@ -166,6 +166,7 @@ import org.springframework.transaction.support.TransactionCallbackWithoutResult;
 import org.springframework.transaction.support.TransactionTemplate;
 
 import javax.annotation.Nonnull;
+import javax.sql.DataSource;
 import java.io.BufferedReader;
 import java.io.IOException;
 import java.io.InputStreamReader;
@@ -31,6 +31,7 @@ import ca.uhn.fhir.batch2.jobs.export.models.ResourceIdList;
 import ca.uhn.fhir.batch2.jobs.models.BatchResourceId;
 import ca.uhn.fhir.i18n.Msg;
 import ca.uhn.fhir.jpa.api.config.DaoConfig;
+import ca.uhn.fhir.jpa.api.dao.DaoRegistry;
 import ca.uhn.fhir.jpa.bulk.export.api.IBulkExportProcessor;
 import ca.uhn.fhir.jpa.bulk.export.model.ExportPIDIteratorParameters;
 import ca.uhn.fhir.rest.api.server.storage.IResourcePersistentId;
@@ -39,6 +39,10 @@ import org.slf4j.Logger;
 
 import javax.annotation.Nullable;
+
+import java.util.Optional;
+
+import static org.apache.commons.lang3.StringUtils.isBlank;
 
 public class WorkChunkProcessor {
 	private static final Logger ourLog = Logs.getBatchTroubleshootingLog();
@@ -113,7 +117,12 @@ public class WorkChunkProcessor {
 		} else {
 			// all other kinds of steps
 			Validate.notNull(theWorkChunk);
-			StepExecutionDetails<PT, IT> stepExecutionDetails = getExecutionDetailsForNonReductionStep(theWorkChunk, theInstance, inputType, parameters);
+			Optional<StepExecutionDetails<PT, IT>> stepExecutionDetailsOpt = getExecutionDetailsForNonReductionStep(theWorkChunk, theInstance, inputType, parameters);
+			if (!stepExecutionDetailsOpt.isPresent()) {
+				return new JobStepExecutorOutput<>(false, dataSink);
+			}
+
+			StepExecutionDetails<PT, IT> stepExecutionDetails = stepExecutionDetailsOpt.get();
 
 			// execute the step
 			boolean success = myStepExecutor.executeStep(stepExecutionDetails, worker, dataSink);
@@ -146,7 +155,7 @@ public class WorkChunkProcessor {
 	/**
 	 * Construct execution details for non-reduction step
 	 */
-	private <PT extends IModelJson, IT extends IModelJson> StepExecutionDetails<PT, IT> getExecutionDetailsForNonReductionStep(
+	private <PT extends IModelJson, IT extends IModelJson> Optional<StepExecutionDetails<PT, IT>> getExecutionDetailsForNonReductionStep(
 		WorkChunk theWorkChunk,
 		JobInstance theInstance,
 		Class<IT> theInputType,
@@ -155,11 +164,15 @@ public class WorkChunkProcessor {
 		IT inputData = null;
 
 		if (!theInputType.equals(VoidModel.class)) {
+			if (isBlank(theWorkChunk.getData())) {
+				ourLog.info("Ignoring chunk[{}] for step[{}] in status[{}] because it has no data", theWorkChunk.getId(), theWorkChunk.getTargetStepId(), theWorkChunk.getStatus());
+				return Optional.empty();
+			}
 			inputData = theWorkChunk.getData(theInputType);
 		}
 
 		String chunkId = theWorkChunk.getId();
 
-		return new StepExecutionDetails<>(theParameters, inputData, theInstance, chunkId);
+		return Optional.of(new StepExecutionDetails<>(theParameters, inputData, theInstance, chunkId));
 	}
 }
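The WorkChunkProcessor change above returns an empty Optional when a chunk that should carry data is blank, so a double-delivered work notification is skipped instead of crashing the job. A standalone sketch of that guard, using simplified stand-in types rather than the real HAPI classes:

```java
import java.util.Optional;

public class ChunkGuardSketch {
	// Simplified stand-ins for WorkChunk and StepExecutionDetails.
	record WorkChunk(String id, String data) {}
	record ExecutionDetails(String chunkId, String inputData) {}

	// Mirrors the pattern in getExecutionDetailsForNonReductionStep: a chunk
	// that should carry data but is blank yields Optional.empty() so the
	// caller can bail out of the step without failing.
	static Optional<ExecutionDetails> detailsFor(WorkChunk theChunk) {
		if (theChunk.data() == null || theChunk.data().isBlank()) {
			return Optional.empty(); // likely a double-delivered notification
		}
		return Optional.of(new ExecutionDetails(theChunk.id(), theChunk.data()));
	}

	public static void main(String[] args) {
		System.out.println(detailsFor(new WorkChunk("c1", "{\"n\":1}")).isPresent()); // has data
		System.out.println(detailsFor(new WorkChunk("c2", null)).isPresent());        // skipped
	}
}
```

Returning Optional instead of throwing keeps the message handler idempotent: processing the same notification twice is harmless.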
@@ -454,6 +454,31 @@ public class JobCoordinatorImplTest extends BaseBatch2Test {
 
 	}
 
+	/**
+	 * If a notification is received for a chunk that should have data but doesn't, we can just ignore that
+	 * (just caused by double delivery of a chunk notification message)
+	 */
+	@Test
+	public void testPerformStep_ChunkAlreadyComplete() {
+
+		// Setup
+
+		WorkChunk chunk = createWorkChunkStep2();
+		chunk.setData((String) null);
+		setupMocks(createJobDefinition(), chunk);
+		mySvc.start();
+
+		// Execute
+
+		myWorkChannelReceiver.send(new JobWorkNotificationJsonMessage(createWorkNotification(STEP_2)));
+
+		// Verify
+		verifyNoMoreInteractions(myStep1Worker);
+		verifyNoMoreInteractions(myStep2Worker);
+		verifyNoMoreInteractions(myStep3Worker);
+	}
+
 	@Test
 	public void testStartInstance() {
 
@@ -41,6 +41,7 @@ import ca.uhn.fhir.rest.api.server.RequestDetails;
 import ca.uhn.fhir.rest.api.server.storage.TransactionDetails;
 import ca.uhn.fhir.rest.server.provider.ProviderConstants;
 import ca.uhn.fhir.util.StopWatch;
+import com.google.common.annotations.VisibleForTesting;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -54,6 +55,7 @@ import java.util.concurrent.TimeUnit;
 public class MdmClearStep implements IJobStepWorker<MdmClearJobParameters, ResourceIdListWorkChunkJson, VoidModel> {

 	private static final Logger ourLog = LoggerFactory.getLogger(MdmClearStep.class);
+	private static Runnable ourClearCompletionCallbackForUnitTest;

 	@Autowired
 	HapiTransactionService myHapiTransactionService;
@@ -71,6 +73,8 @@ public class MdmClearStep implements IJobStepWorker<MdmClearJobParameters, Resou
 	public RunOutcome run(@Nonnull StepExecutionDetails<MdmClearJobParameters, ResourceIdListWorkChunkJson> theStepExecutionDetails, @Nonnull IJobDataSink<VoidModel> theDataSink) throws JobExecutionFailedException {

 		SystemRequestDetails requestDetails = new SystemRequestDetails();
+		requestDetails.setRetry(true);
+		requestDetails.setMaxRetries(100);
 		requestDetails.setRequestPartitionId(theStepExecutionDetails.getParameters().getRequestPartitionId());
 		TransactionDetails transactionDetails = new TransactionDetails();
 		myHapiTransactionService.execute(requestDetails, transactionDetails, buildJob(requestDetails, transactionDetails, theStepExecutionDetails));
@@ -119,7 +123,18 @@ public class MdmClearStep implements IJobStepWorker<MdmClearJobParameters, Resou

 			ourLog.info("Finished removing {} golden resources in {} - {}/sec - Instance[{}] Chunk[{}]", persistentIds.size(), sw, sw.formatThroughput(persistentIds.size(), TimeUnit.SECONDS), myInstanceId, myChunkId);

+			if (ourClearCompletionCallbackForUnitTest != null) {
+				ourClearCompletionCallbackForUnitTest.run();
+			}
+
 			return null;
 		}
 	}

+	@VisibleForTesting
+	public static void setClearCompletionCallbackForUnitTest(Runnable theClearCompletionCallbackForUnitTest) {
+		ourClearCompletionCallbackForUnitTest = theClearCompletionCallbackForUnitTest;
+	}
+
 }
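The `@VisibleForTesting` hook added to MdmClearStep above is a common test-seam pattern: production code fires an optional static callback at a known point so a unit test can observe completion without touching the production control flow. A minimal stand-alone sketch (class and method names here are illustrative, not the HAPI FHIR API):

```java
public class CompletionHookSketch {

	// Null in production; a test may install a callback here
	private static Runnable ourCompletionCallbackForUnitTest;

	// Production path: do the work, then fire the optional test hook
	public static int doWork(int theItemCount) {
		int removed = theItemCount; // pretend we cleared theItemCount resources
		if (ourCompletionCallbackForUnitTest != null) {
			ourCompletionCallbackForUnitTest.run();
		}
		return removed;
	}

	// Test-only seam, mirroring the @VisibleForTesting setter in the diff
	public static void setCompletionCallbackForUnitTest(Runnable theCallback) {
		ourCompletionCallbackForUnitTest = theCallback;
	}

	public static void main(String[] args) {
		boolean[] fired = {false};
		setCompletionCallbackForUnitTest(() -> fired[0] = true);
		doWork(3);
		System.out.println(fired[0]); // the hook ran exactly at completion
	}
}
```

Because the callback is checked for null before running, production behavior is unchanged when no test has installed a hook.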
@@ -7,9 +7,9 @@
 	<property name="charset" value="UTF-8"/>
 	<property name="cacheFile" value="target/cache_non_main_files"/>

-	<module name="SuppressionFilter">
-		<property name="file" value="src/checkstyle/checkstyle_suppressions.xml" />
-	</module>
+	<!-- <module name="SuppressionFilter">-->
+	<!-- <property name="file" value="${basedir}/src/checkstyle/checkstyle_suppressions.xml" />-->
+	<!-- </module> TODO GGG propagate this to master -->
 	<module name="TreeWalker">
 		<module name="RegexpSinglelineJava">
 			<property name="format" value="System\.out\.println"/>