Rel 6 2 5 mergeback (#4459)

* jm wrong bundle entry url (#4213)

* Bug test

* here you go

* Generate relative URIs for bundle entry.request.url, as specified

* Point to Jira issue in changelog

* Adjust tests to match fixes

Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
Co-authored-by: Tadgh <garygrantgraham@gmail.com>

* improved logging (#4217)

Co-authored-by: Ken Stevens <ken@smilecdr.com>

* Rel 6 1 3 mergeback (#4215)

* Bump for CVE (#3856)

* Bump for CVE

* Bump spring-data version

* Fix compile

* Cut over to spring bom

* Bump to RC1

* remove RC

* do not constrain reindex for common SP updates (#3876)

* only fast-track jobs with exactly one chunk (#3879)

* Fix illegalstateexception when an exception is thrown during stream response (#3882)

* Finish up changelog, minor refactor

* reset buffer only

* Hack for some replacements

* Failure handling

* wip

* Fixed the issue (#3845)

* Fixed the issue

* Changelog modification

* Changelog modification

* Implemented seventh character extended code and the corresponding dis… (#3709)

* Implemented seventh character extended code and the corresponding display

* Modifications

* Changes on previous test according to modifications made in ICD10-CM XML file

* Subscription sending delete events being skipped (#3888)

* fixed bug and added test

* refactor

* Update for CVE (#3895)

* updated pointcuts to work as intended (#3903)

* updated pointcuts to work as intended

* added changelog

* review fixes

Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>

* 3904 during $delete expunge job hibernate search indexed documents are left orphaned (#3905)

* Add test and implementation

* Add changelog

* 3899 code in limits (#3901)

* Add implementation, changelog, test

* Update hapi-fhir-jpaserver-test-utilities/src/test/java/ca/uhn/fhir/jpa/provider/r4/ResourceProviderR4Test.java

Co-authored-by: Ken Stevens <khstevens@gmail.com>

Co-authored-by: Ken Stevens <khstevens@gmail.com>

* 3884 overlapping searchparameter undetected rel 6 1 (#3909)

* Applying all changes from previous dev branch to current one pointing to rel_6_1

* Fixing merge conflict related to Msg.code value.

* Fixing Msg.code value.

* Making checkstyle happy.

* Making sure that all tests are passing.

* Passing all tests after fixing Msg.code

* Passing all tests.

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* 3745 - fixed NPE for bundle with duplicate conditional create resourc… (#3746)

* 3745 - fixed NPE for bundle with duplicate conditional create resources and a conditional delete

* created unit test for skipping the delete operation while processing duplicate create entries

* moved unit test to FhirSystemDaoR4Test

* 3379 mdm fixes (#3906)

* added MdmLinkCreateSvcimplTest

* fixed creating mdm-link not setting the resource type correctly

* fixed a bug where ResourcePersistenceId was being duplicated instead of passed on

* Update hapi-fhir-jpaserver-mdm/src/test/java/ca/uhn/fhir/jpa/mdm/svc/MdmLinkCreateSvcImplTest.java

Change order of tests such that assertEquals takes expected value then actual value

Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>

* added changelog, also changed a setup function in test to beforeeach

Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>

* Fix to the issue (#3855)

* Fix to the issue

* Progress

* fixed the issue

* Addressing suggestions

* add response status code to MethodOutcome

* Addressing suggestions

Co-authored-by: Ken Stevens <ken@smilecdr.com>

* Fix for caching appearing broken in batch2 for bulkexport jobs (#3912)

* Respect caching in bulk export, fix bug with completed date on empty jobs

* add changelog

* Add impl

* Add breaking test

* Complete failing test

* more broken tests

* Fix more tests

* Fix paging bug

* Fix another brittle test

* 3915 do not collapse rules with filters (#3916)

* do not attempt to merge compartment permissions with filters

* changelog

* Rename to IT for concurrency problems

Co-authored-by: Tadgh <garygrantgraham@gmail.com>

* Version bump

* fix $mdm-submit output (#3917)

Co-authored-by: Ken Stevens <ken@smilecdr.com>

* Gl3407 bundle offset size (#3918)

* begin with failing test

* fixed

* change log

* rollback default count change and corresponding comments

Co-authored-by: Ken Stevens <ken@smilecdr.com>

* Offset interceptor now only works for external calls

* Initialize some beans (esp interceptors) later in the boot process so they don't slow down startup.

* do not reindex searchparam jobs on startup

* Fix oracle non-enterprise attempting online index add (#3925)

* 3922 delete expunge large dataset (#3923)

* lower batch size of delete requests so that we do not get SQL exceptions

* blah

* fix test

* updated tests to not fail

Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>

* add index

* Fix up column grab

* Revert offset mode change

* Revert fix for null/system request details checks for reindex purposes

* Fix bug and add test for SP Validating Interceptor (#3930)

* wip

* Fix up tests

* Fix index online test

* Fix SP validating interceptor logic

* Updating version to: 6.1.1 post release.

* fix compile error

* Deploy to sonatype (#3934)

* adding sonatype profile to checkstyle module

* adding sonatype profile to tinder module

* adding sonatype profile to base pom

* adding final deployToSonatype profile

* wip

* Revert version enum

* Updating version to: 6.1.1 post release.

* Add test, changelog, and implementation

* Add backport info

* Create failing test

* Implemented the fix, fixed existing unit tests

* added changelog

* added test case for no filter, exclude 1 patient

* wip

* Add backport info

* Add info of new version

* Updating version to: 6.1.2 post release.

* bump info and backport for 6.1.2

* Bump for hapi

* Implement bug fixes, add new tests (#4022)

* Implement bug fixes, add new tests

* tidy

* Tidy

* refactor for cleaning

* More tidying

* Lower logging

* Split into nested tests, rename, add todos

* Typo

* Code review

* add backport info

* Updating version to: 6.1.3 post release.

* Updating version to: 6.1.3 post release.

* removed duplicate mention of ver 6.1.3 in versionEnum

* backport pr 4101

* mdm message key (#4111)

* begin with failing test

* fixed 2 tests

* fix tests

* fix tests

* change log

Co-authored-by: Ken Stevens <ken@smilecdr.com>

* backport 6.1.3 docs changes

* fixed typo on doc backport message

* fix test breaking

* Updating version to: 6.1.4 post release.

* wip

Co-authored-by: JasonRoberts-smile <85363818+JasonRoberts-smile@users.noreply.github.com>
Co-authored-by: Qingyixia <106992634+Qingyixia@users.noreply.github.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: kateryna-mironova <107507153+kateryna-mironova@users.noreply.github.com>
Co-authored-by: longma1 <32119004+longma1@users.noreply.github.com>
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>
Co-authored-by: Steven Li <steven@smilecdr.com>

* pin okio-jvm for kotlin vuln (#4216)

* Fix UrlUtil.unescape() by not escaping "+" to " " if this is an "application/..." _outputFormat. (#4220)

* First commit:  Failing unit test and a TODO with a vague idea of where the bug happens.

* Don't escape "+" in a URL GET parameter if it starts with "application".

* Remove unnecessary TODO.

* Add changelog.

* Code review feedback on naming. Also, make the logic more robust by putting the plus and should-escape boolean checks (&&) in parentheses.

* Ks 20221031 migration lock (#4224)

* started design

* complete with tests

* changelog

* cleanup

* typo

Co-authored-by: Ken Stevens <ken@smilecdr.com>

* 4207-getpagesoffset-set-to-total-number-of-resources-results-in-inconsistent-amount-of-entries-when-requests-are-sent-consecutively (#4209)

* Added test

* Added solution

* Changelog

* Changes made based on comments

* Fix bug with MDM submit

* fix

* Version bump

* 4234 consent in conjunction with versionedapiconverterinterceptor fails (#4236)

* Add constant for interceptor

* add test, changelog

* Allow Batch2 transition from ERRORED to COMPLETE (#4242)

* Allow Batch2 transition from ERRORED to COMPLETE

* Add changelog

* Test fix

Co-authored-by: James Agnew <james@jamess-mbp.lan>

* 3685 When bulk exporting, if no resource type param is provided, defa… (#4233)

* 3685 When bulk exporting, if no resource type param is provided, default to all registered types.

* Update test case.

* Cleaned up changelog.

* Added test case for multiple resource types.

* Added failing test case for not returning Binary resource.

* Refactor solution.

Co-authored-by: kylejule <kyle.jule@smilecdr.com>

* Add next version

* bulk export permanently reusing cached results (#4249)

* Add test, fix bug, add changelog

* minor refactor

* Fix broken test

* Smile 4892 DocumentReference Attachment url (#4237)

* failing test

* fix

* increase test Attachment url size to new max

* decrease limit to 500

* ci fix

Co-authored-by: nathaniel.doef <nathaniel.doef@smilecdr.com>

* Overlapping SearchParameters with the same code and base are not allowed (#4253)

* Overlapping SearchParameters with the same code and base are not allowed

* Fix existing tests according to changes

* Cleanup dead code and remove related tests

* Version Bump

* ignore misfires in quartz

* Allowing Failures On Index Drops (#4272)

* Allowing failure on index drops.

* Adding changeLog

* Modification to changelog following code review.

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* Revert "ignore misfires in quartz"

This reverts commit 15c74a46bc.

* Ignore misfires in quartz (#4273)

* Reindex Behaviour Issues (#4261)

* fixmes for ND

* address FIXME comments

* fix tests

* increase max retries

* fix resource id chunking logic

* fix test

* add modular patient

* change log

* version bump

Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: nathaniel.doef <nathaniel.doef@smilecdr.com>

* Set official Version

* license

* Fix up numbers

* Fix up numbers

* Update numbers

* wip

* fix numbers

* Fix test

* Fix more tests

* TEMP FIX FOR BUILD

* wip

* Updating version to: 6.2.1 post release.

* Add a whack of logging

* wip

* add implementation

* wip and test

* wip

* last-second-fetch

* expose useful method

* remove 10000 limit

* Strip some logging

* Fix up logging

* Unpublicize method

* Fix version

* Make minor changes

* once again on 6.2.1

* re-add version enum

* add folder

* fix test

* Disable busted test

* Disable more broken tests

* Only submit queued chunks

* Quiet log

* Fix wrong pinned version

* Updating version to: 6.2.2 post release.

* fixes for https://github.com/hapifhir/hapi-fhir/issues/4277 and https… (#4291)

* fixes for https://github.com/hapifhir/hapi-fhir/issues/4277 and https://github.com/hapifhir/hapi-fhir/issues/4276

* Credit for #4291

Co-authored-by: James Agnew <jamesagnew@gmail.com>

* backport and changelog for 6.2.2

* Updating version to: 6.2.3 post release.

* fix https://simpaticois.atlassian.net/browse/SMILE-5781

* Version bump to 6.2.3-SNAPSHOT

* Auto retry on MDM Clear conflicts (#4398)

* Auto-retry mdm-clear on conflict

* Add changelog

* Build fix

* Disable failing test

* Update to 6.2.3 again

* Update license dates

* Don't fail on batch2 double delivery (#4400)

* Don't fail on Batch2 double delivery

* Add changelog

* Update docker for release pipeline

* Updating version to: 6.2.4 post release.

* Add test and implementation to fix potential NPE in pre-show resources (#4388)

* Add test and implementation to fix potential NPE in pre-show resources

* add test

* WIP getting identical test scenario

* More robust solution

* Finalize Code

* Add changelog, move a bunch of changelogs

* Remove not needed test

* Minor refactor and reporting

* Fix up mergeback

* update backport info

* update backport info

* Updating version to: 6.2.5 post release.

* Prevent chunk from returning to in-progress unless it is errored, in-progress, or queued

* changelog

* Update logger

* Indicate backport

* Add version and upgrade info. add backport

* hapi-fhir side done

* cdr side

* add assert

* add test

* Fix import

* prevent multiple reduction step runs in maintenance run (#4423)

* setting job to in-progress to prevent multiple maintenance passes from running reduction steps

* added a new status

Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-mbp.home>

* Add backport info

* Fix up tests

* Fix discrepancies

* Fix test

* Version bump

* bump ips

Co-authored-by: jmarchionatto <60409882+jmarchionatto@users.noreply.github.com>
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
Co-authored-by: Ken Stevens <khstevens@gmail.com>
Co-authored-by: Ken Stevens <ken@smilecdr.com>
Co-authored-by: JasonRoberts-smile <85363818+JasonRoberts-smile@users.noreply.github.com>
Co-authored-by: Qingyixia <106992634+Qingyixia@users.noreply.github.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-MacBook-Pro.local>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: kateryna-mironova <107507153+kateryna-mironova@users.noreply.github.com>
Co-authored-by: longma1 <32119004+longma1@users.noreply.github.com>
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>
Co-authored-by: Steven Li <steven@smilecdr.com>
Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
Co-authored-by: karneet1212 <112980019+karneet1212@users.noreply.github.com>
Co-authored-by: James Agnew <jamesagnew@gmail.com>
Co-authored-by: James Agnew <james@jamess-mbp.lan>
Co-authored-by: KGJ-software <39975592+KGJ-software@users.noreply.github.com>
Co-authored-by: kylejule <kyle.jule@smilecdr.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: nathaniel.doef <nathaniel.doef@smilecdr.com>
Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>
Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-mbp.home>
Tadgh 2023-01-24 17:38:35 -08:00 committed by GitHub
parent 1755eb46e6
commit 456cc81b32
98 changed files with 283 additions and 147 deletions

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,14 +4,14 @@
<modelVersion>4.0.0</modelVersion>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-bom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<packaging>pom</packaging>
<name>HAPI FHIR BOM</name>
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-cli</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../hapi-deployable-pom</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -0,0 +1,2 @@
This release fixes a problem with the batch framework which could cause jobs to hang indefinitely if multiple processes attempted to run a maintenance pass simultaneously.
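
For context, the fix works by letting only one maintenance pass win an atomic status transition before running the reduction step (see the ReductionStepExecutor, StatusEnum.FINALIZE, and markInstanceAsStatus changes further down in this diff). Below is a minimal, self-contained sketch of that guard pattern, using stand-in names rather than HAPI FHIR's actual classes:

import java.util.concurrent.atomic.AtomicReference;

// Stand-in status enum; the real code persists the transition via
// IJobPersistence.markInstanceAsStatus(instanceId, StatusEnum.FINALIZE).
enum DemoStatus { IN_PROGRESS, FINALIZE, COMPLETED }

public class ReductionGuardSketch {
	private static final AtomicReference<DemoStatus> STATUS = new AtomicReference<>(DemoStatus.IN_PROGRESS);

	static boolean tryRunReduction(String passName) {
		// compareAndSet stands in for the database status update: only the pass that
		// wins the IN_PROGRESS -> FINALIZE transition performs the reduction work.
		if (!STATUS.compareAndSet(DemoStatus.IN_PROGRESS, DemoStatus.FINALIZE)) {
			System.out.println(passName + ": instance already finalizing, skipping reduction");
			return false;
		}
		System.out.println(passName + ": running reduction step");
		STATUS.set(DemoStatus.COMPLETED);
		return true;
	}

	public static void main(String[] args) {
		tryRunReduction("maintenance-pass-1"); // wins the race and reduces
		tryRunReduction("maintenance-pass-2"); // loses the race and skips
	}
}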

View File

@ -0,0 +1,3 @@
---
release-date: "2023-01-09"
codename: "Vishwa"

View File

@ -1,6 +1,7 @@
---
type: change
issue: 4065
backport: 6.2.5
title: "A new DaoConfig configuration setting has been added called JobFastTrackingEnabled, default false.
If this setting is enabled, then gated batch jobs that produce only one chunk will immediately trigger a batch
maintenance job. This may be useful for testing, but is not recommended for production use. Prior to this change,
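
For reference, a minimal sketch of enabling the setting described in this entry; the setter name is assumed from the JobFastTrackingEnabled setting name and may differ from the actual DaoConfig API:

import ca.uhn.fhir.jpa.api.config.DaoConfig;

public class JobFastTrackingConfigSketch {
	public static DaoConfig newDaoConfig() {
		DaoConfig daoConfig = new DaoConfig();
		// Assumed setter derived from the "JobFastTrackingEnabled" name above; the
		// changelog says it defaults to false. Enabling it lets gated batch jobs that
		// produce exactly one chunk trigger a batch maintenance pass immediately.
		daoConfig.setJobFastTrackingEnabled(true);
		return daoConfig;
	}
}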

View File

@ -1,5 +1,6 @@
---
type: fix
issue: 4400
backport: 6.2.5
title: "When Batch2 work notifications are received twice (e.g. because the notification engine double delivered)
an unrecoverable failure could occur. This has been corrected."

View File

@ -2,5 +2,6 @@
type: fix
issue: 4417
jira: SMILE-5405
backport: 6.2.5
title: "Fixed a bug with batch2 which could cause previously completed chunks to be set back to in-progress. This could cause a batch job to never complete.
Now, a safeguard ensures that a job can never return to in-progress once it has completed or failed."

View File

@ -0,0 +1,11 @@
---
type: fix
issue: 4422
jira: SMILE-5701
backport: 6.2.5
title: "When a Bulk Export job runs with a large amount of data,
there is a chance the reduction step can be kicked off multiple
times, resulting in data loss in the final report.
Jobs will now be set to in-progress before processing to
prevent multiple reduction steps from being started.
"

View File

@ -11,7 +11,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -276,7 +276,7 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
}
@Override
@Transactional(propagation = Propagation.REQUIRED)
@Transactional(propagation = Propagation.REQUIRES_NEW)
public boolean canAdvanceInstanceToNextStep(String theInstanceId, String theCurrentStepId) {
List<StatusEnum> statusesForStep = myWorkChunkRepository.getDistinctStatusesForStep(theInstanceId, theCurrentStepId);
ourLog.debug("Checking whether gated job can advanced to next step. [instanceId={}, currentStepId={}, statusesForStep={}]", theInstanceId, theCurrentStepId, statusesForStep);
@ -390,6 +390,12 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
return recordsChanged > 0;
}
@Override
public boolean markInstanceAsStatus(String theInstance, StatusEnum theStatusEnum) {
int recordsChanged = myJobInstanceRepository.updateInstanceStatus(theInstance, theStatusEnum);
return recordsChanged > 0;
}
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW)
public JobOperationResultJson cancelInstance(String theInstanceId) {

View File

@ -20,10 +20,8 @@ package ca.uhn.fhir.jpa.dao.data;
* #L%
*/
import ca.uhn.fhir.batch2.model.JobInstance;
import ca.uhn.fhir.batch2.model.StatusEnum;
import ca.uhn.fhir.jpa.entity.Batch2JobInstanceEntity;
import org.hibernate.engine.jdbc.batch.spi.Batch;
import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;

View File

@ -37,6 +37,7 @@ import javax.persistence.Lob;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import javax.persistence.Version;
import java.io.Serializable;
import java.util.Date;
@ -71,6 +72,11 @@ public class Batch2JobInstanceEntity implements Serializable {
@Temporal(TemporalType.TIMESTAMP)
private Date myEndTime;
@Version
@Column(name = "UPDATE_TIME", nullable = true)
@Temporal(TemporalType.TIMESTAMP)
private Date myUpdateTime;
@Column(name = "DEFINITION_ID", length = JobDefinition.ID_MAX_LENGTH, nullable = false)
private String myDefinitionId;
@ -190,6 +196,10 @@ public class Batch2JobInstanceEntity implements Serializable {
myEndTime = theEndTime;
}
public Date getUpdateTime() {
return myUpdateTime;
}
public String getId() {
return myId;
}
@ -289,6 +299,7 @@ public class Batch2JobInstanceEntity implements Serializable {
.append("createTime", myCreateTime)
.append("startTime", myStartTime)
.append("endTime", myEndTime)
.append("updateTime", myUpdateTime)
.append("status", myStatus)
.append("cancelled", myCancelled)
.append("combinedRecordsProcessed", myCombinedRecordsProcessed)

View File

@ -39,6 +39,7 @@ import javax.persistence.ManyToOne;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;
import javax.persistence.Version;
import java.io.Serializable;
import java.util.Date;
@ -68,6 +69,10 @@ public class Batch2WorkChunkEntity implements Serializable {
@Column(name = "END_TIME", nullable = true)
@Temporal(TemporalType.TIMESTAMP)
private Date myEndTime;
@Version
@Column(name = "UPDATE_TIME", nullable = true)
@Temporal(TemporalType.TIMESTAMP)
private Date myUpdateTime;
@Column(name = "RECORDS_PROCESSED", nullable = true)
private Integer myRecordsProcessed;
@Column(name = "DEFINITION_ID", length = ID_MAX_LENGTH, nullable = false)
@ -141,6 +146,10 @@ public class Batch2WorkChunkEntity implements Serializable {
myEndTime = theEndTime;
}
public Date getUpdateTime() {
return myUpdateTime;
}
public Integer getRecordsProcessed() {
return myRecordsProcessed;
}
@ -225,6 +234,7 @@ public class Batch2WorkChunkEntity implements Serializable {
.append("createTime", myCreateTime)
.append("startTime", myStartTime)
.append("endTime", myEndTime)
.append("updateTime", myUpdateTime)
.append("recordsProcessed", myRecordsProcessed)
.append("targetStepId", myTargetStepId)
.append("serializedData", mySerializedData)

View File

@ -130,6 +130,18 @@ public class HapiFhirJpaMigrationTasks extends BaseMigrationTasks<VersionEnum> {
.modifyColumn("20221103.1", "SP_URI")
.nullable()
.withType(ColumnTypeEnum.STRING, 500);
version.onTable("BT2_JOB_INSTANCE")
.addColumn("20230110.1", "UPDATE_TIME")
.nullable()
.type(ColumnTypeEnum.DATE_TIMESTAMP);
version.onTable("BT2_WORK_CHUNK")
.addColumn("20230110.2", "UPDATE_TIME")
.nullable()
.type(ColumnTypeEnum.DATE_TIMESTAMP);
}
private void init610() {

View File

@ -48,6 +48,7 @@ public class JobInstanceUtil {
retVal.setStartTime(theEntity.getStartTime());
retVal.setCreateTime(theEntity.getCreateTime());
retVal.setEndTime(theEntity.getEndTime());
retVal.setUpdateTime(theEntity.getUpdateTime());
retVal.setCombinedRecordsProcessed(theEntity.getCombinedRecordsProcessed());
retVal.setCombinedRecordsProcessedPerSecond(theEntity.getCombinedRecordsProcessedPerSecond());
retVal.setTotalElapsedMillis(theEntity.getTotalElapsedMillis());
@ -81,6 +82,7 @@ public class JobInstanceUtil {
retVal.setStatus(theEntity.getStatus());
retVal.setCreateTime(theEntity.getCreateTime());
retVal.setStartTime(theEntity.getStartTime());
retVal.setUpdateTime(theEntity.getUpdateTime());
retVal.setEndTime(theEntity.getEndTime());
retVal.setErrorMessage(theEntity.getErrorMessage());
retVal.setErrorCount(theEntity.getErrorCount());

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -3,7 +3,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -164,7 +164,8 @@ public class MdmControllerSvcImplTest extends BaseLinkR4Test {
ServletRequestDetails details = new ServletRequestDetails();
details.setTenantId(PARTITION_2);
IBaseParameters clearJob = myMdmControllerSvc.submitMdmClearJob(urls, batchSize, details);
String jobId = ((StringType) ((Parameters) clearJob).getParameterValue("jobId")).getValueAsString();
Parameters.ParametersParameterComponent parameter = ((Parameters) clearJob).getParameter("jobId");
String jobId = ((StringType) parameter.getValue()).getValueAsString();
myBatch2JobHelper.awaitJobCompletion(jobId);
assertLinkCount(2);

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -43,6 +43,8 @@ import java.util.List;
import java.util.List;
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.fail;

View File

@ -63,6 +63,10 @@ import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import org.springframework.transaction.support.TransactionTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import org.springframework.transaction.support.TransactionTemplate;
import java.io.IOException;
import java.util.ArrayList;

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -7,7 +7,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>hapi-fhir-serviceloaders</artifactId>
<groupId>ca.uhn.hapi.fhir</groupId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>hapi-fhir-serviceloaders</artifactId>
<groupId>ca.uhn.hapi.fhir</groupId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
@ -20,7 +20,7 @@
<dependency>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-caching-api</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>hapi-fhir-serviceloaders</artifactId>
<groupId>ca.uhn.hapi.fhir</groupId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -7,7 +7,7 @@
<parent>
<artifactId>hapi-fhir</artifactId>
<groupId>ca.uhn.hapi.fhir</groupId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<artifactId>hapi-fhir</artifactId>
<groupId>ca.uhn.hapi.fhir</groupId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-spring-boot-samples</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
</parent>
<artifactId>hapi-fhir-spring-boot-sample-client-apache</artifactId>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-spring-boot-samples</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
</parent>
<artifactId>hapi-fhir-spring-boot-sample-client-okhttp</artifactId>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-spring-boot-samples</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
</parent>
<artifactId>hapi-fhir-spring-boot-sample-server-jersey</artifactId>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-spring-boot</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
</parent>
<artifactId>hapi-fhir-spring-boot-samples</artifactId>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -228,6 +228,9 @@ public interface IJobPersistence {
*/
boolean markInstanceAsCompleted(String theInstanceId);
@Transactional(propagation = Propagation.REQUIRES_NEW)
boolean markInstanceAsStatus(String theInstance, StatusEnum theStatusEnum);
/**
* Marks an instance as cancelled
*

View File

@ -55,6 +55,14 @@ public class ReductionStepExecutor {
) {
IReductionStepWorker<PT, IT, OT> reductionStepWorker = (IReductionStepWorker<PT, IT, OT>) theStep.getJobStepWorker();
// we mark it first so that no other maintenance passes will pick this job up!
// if we shut down mid process, though, it will be stuck in FINALIZE forever :(
if (!myJobPersistence.markInstanceAsStatus(theInstance.getInstanceId(), StatusEnum.FINALIZE)) {
ourLog.warn("JobInstance[{}] is already in FINALIZE state, no reducer action performed.", theInstance.getInstanceId());
return false;
}
theInstance.setStatus(StatusEnum.FINALIZE);
// We fetch all chunks first...
Iterator<WorkChunk> chunkIterator = myJobPersistence.fetchAllWorkChunksForStepIterator(theInstance.getInstanceId(), theStep.getStepId());
@ -63,75 +71,80 @@ public class ReductionStepExecutor {
boolean retval = true;
while (chunkIterator.hasNext()) {
WorkChunk chunk = chunkIterator.next();
if (!chunk.getStatus().isIncomplete()) {
// This should never happen since jobs with reduction are required to be gated
ourLog.error("Unexpected chunk {} with status {} found while reducing {}. No chunks feeding into a reduction step should be complete.", chunk.getId(), chunk.getStatus(), theInstance);
continue;
try {
while (chunkIterator.hasNext()) {
WorkChunk chunk = chunkIterator.next();
if (!chunk.getStatus().isIncomplete()) {
// This should never happen since jobs with reduction are required to be gated
ourLog.error("Unexpected chunk {} with status {} found while reducing {}. No chunks feeding into a reduction step should be complete.", chunk.getId(), chunk.getStatus(), theInstance);
continue;
}
if (!failedChunks.isEmpty()) {
// we are going to fail all future chunks now
failedChunks.add(chunk.getId());
} else {
try {
// feed them into our reduction worker
// this is the most likely area to throw,
// as this is where db actions and processing is likely to happen
ChunkExecutionDetails<PT, IT> chunkDetails = new ChunkExecutionDetails<>(chunk.getData(theInputType), theParameters, theInstance.getInstanceId(), chunk.getId());
ChunkOutcome outcome = reductionStepWorker.consume(chunkDetails);
switch (outcome.getStatuss()) {
case SUCCESS:
successfulChunkIds.add(chunk.getId());
break;
case ABORT:
ourLog.error("Processing of work chunk {} resulted in aborting job.", chunk.getId());
// fail entire job - including all future workchunks
failedChunks.add(chunk.getId());
retval = false;
break;
case FAIL:
myJobPersistence.markWorkChunkAsFailed(chunk.getId(),
"Step worker failed to process work chunk " + chunk.getId());
retval = false;
break;
}
} catch (Exception e) {
String msg = String.format(
"Reduction step failed to execute chunk reduction for chunk %s with exception: %s.",
chunk.getId(),
e.getMessage()
);
// we got a failure in a reduction
ourLog.error(msg, e);
retval = false;
myJobPersistence.markWorkChunkAsFailed(chunk.getId(), msg);
}
}
}
} finally {
if (!successfulChunkIds.isEmpty()) {
// complete the steps without making a new work chunk
myJobPersistence.markWorkChunksWithStatusAndWipeData(theInstance.getInstanceId(),
successfulChunkIds,
StatusEnum.COMPLETED,
null // error message - none
);
}
if (!failedChunks.isEmpty()) {
// we are going to fail all future chunks now
failedChunks.add(chunk.getId());
} else {
try {
// feed them into our reduction worker
// this is the most likely area to throw,
// as this is where db actions and processing is likely to happen
ChunkExecutionDetails<PT, IT> chunkDetails = new ChunkExecutionDetails<>(chunk.getData(theInputType), theParameters, theInstance.getInstanceId(), chunk.getId());
ChunkOutcome outcome = reductionStepWorker.consume(chunkDetails);
switch (outcome.getStatuss()) {
case SUCCESS:
successfulChunkIds.add(chunk.getId());
break;
case ABORT:
ourLog.error("Processing of work chunk {} resulted in aborting job.", chunk.getId());
// fail entire job - including all future workchunks
failedChunks.add(chunk.getId());
retval = false;
break;
case FAIL:
myJobPersistence.markWorkChunkAsFailed(chunk.getId(),
"Step worker failed to process work chunk " + chunk.getId());
retval = false;
break;
}
} catch (Exception e) {
String msg = String.format(
"Reduction step failed to execute chunk reduction for chunk %s with exception: %s.",
chunk.getId(),
e.getMessage()
);
// we got a failure in a reduction
ourLog.error(msg, e);
retval = false;
myJobPersistence.markWorkChunkAsFailed(chunk.getId(), msg);
}
// mark any failed chunks as failed for aborting
myJobPersistence.markWorkChunksWithStatusAndWipeData(theInstance.getInstanceId(),
failedChunks,
StatusEnum.FAILED,
"JOB ABORTED");
}
}
if (!successfulChunkIds.isEmpty()) {
// complete the steps without making a new work chunk
myJobPersistence.markWorkChunksWithStatusAndWipeData(theInstance.getInstanceId(),
successfulChunkIds,
StatusEnum.COMPLETED,
null // error message - none
);
}
if (!failedChunks.isEmpty()) {
// mark any failed chunks as failed for aborting
myJobPersistence.markWorkChunksWithStatusAndWipeData(theInstance.getInstanceId(),
failedChunks,
StatusEnum.FAILED,
"JOB ABORTED");
}
// if no successful chunks, return false

View File

@ -166,6 +166,11 @@ public class SynchronizedJobPersistenceWrapper implements IJobPersistence {
return myWrap.markInstanceAsCompleted(theInstanceId);
}
@Override
public boolean markInstanceAsStatus(String theInstance, StatusEnum theStatusEnum) {
return myWrap.markInstanceAsStatus(theInstance, theStatusEnum);
}
@Override
public JobOperationResultJson cancelInstance(String theInstanceId) {
return myWrap.cancelInstance(theInstanceId);

View File

@ -92,6 +92,7 @@ public class JobInstanceProcessor {
break;
case IN_PROGRESS:
case ERRORED:
case FINALIZE:
myJobInstanceProgressCalculator.calculateAndStoreInstanceProgress();
break;
case COMPLETED:
@ -138,6 +139,11 @@ public class JobInstanceProcessor {
return;
}
if (jobWorkCursor.isReductionStep() && myInstance.getStatus() == StatusEnum.FINALIZE) {
ourLog.warn("Job instance {} is still finalizing - a second reduction job will not be started.", myInstance.getInstanceId());
return;
}
String instanceId = myInstance.getInstanceId();
String currentStepId = jobWorkCursor.getCurrentStepId();
boolean shouldAdvance = myJobPersistence.canAdvanceInstanceToNextStep(instanceId, currentStepId);

View File

@ -73,6 +73,11 @@ public class JobInstance extends JobInstanceStartRequest implements IModelJson,
@JsonDeserialize(using = JsonDateDeserializer.class)
private Date myEndTime;
@JsonProperty(value = "updateTime")
@JsonSerialize(using = JsonDateSerializer.class)
@JsonDeserialize(using = JsonDateDeserializer.class)
private Date myUpdateTime;
@JsonProperty(value = "combinedRecordsProcessed")
private Integer myCombinedRecordsProcessed;
@ -120,6 +125,7 @@ public class JobInstance extends JobInstanceStartRequest implements IModelJson,
setCombinedRecordsProcessedPerSecond(theJobInstance.getCombinedRecordsProcessedPerSecond());
setCreateTime(theJobInstance.getCreateTime());
setEndTime(theJobInstance.getEndTime());
setUpdateTime(theJobInstance.getUpdateTime());
setErrorCount(theJobInstance.getErrorCount());
setErrorMessage(theJobInstance.getErrorMessage());
setEstimatedTimeRemaining(theJobInstance.getEstimatedTimeRemaining());
@ -135,6 +141,14 @@ public class JobInstance extends JobInstanceStartRequest implements IModelJson,
myJobDefinition = theJobInstance.getJobDefinition();
}
public void setUpdateTime(Date theUpdateTime) {
myUpdateTime = theUpdateTime;
}
public Date getUpdateTime() {
return myUpdateTime;
}
public static JobInstance fromJobDefinition(JobDefinition<?> theJobDefinition) {
JobInstance instance = new JobInstance();
instance.setJobDefinition(theJobDefinition);
@ -331,6 +345,7 @@ public class JobInstance extends JobInstanceStartRequest implements IModelJson,
.append("createTime", myCreateTime)
.append("startTime", myStartTime)
.append("endTime", myEndTime)
.append("updateTime", myUpdateTime)
.append("combinedRecordsProcessed", myCombinedRecordsProcessed)
.append("combinedRecordsProcessedPerSecond", myCombinedRecordsProcessedPerSecond)
.append("totalElapsedMillis", myTotalElapsedMillis)

View File

@ -41,6 +41,11 @@ public enum StatusEnum {
*/
IN_PROGRESS(true, false),
/**
* For reduction steps
*/
FINALIZE(true, false),
/**
* Task completed successfully
*/
@ -160,6 +165,9 @@ public enum StatusEnum {
// terminal state cannot transition
canTransition = false;
break;
case FINALIZE:
canTransition = theNewStatus != QUEUED && theNewStatus != IN_PROGRESS;
break;
default:
canTransition = null;
break;

View File

@ -72,6 +72,10 @@ public class WorkChunk implements IModelJson {
@JsonDeserialize(using = JsonDateDeserializer.class)
private Date myEndTime;
@JsonProperty("updateTime")
@JsonSerialize(using = JsonDateSerializer.class)
@JsonDeserialize(using = JsonDateDeserializer.class)
private Date myUpdateTime;
@JsonProperty(value = "recordsProcessed", access = JsonProperty.Access.READ_ONLY)
private Integer myRecordsProcessed;
@ -224,4 +228,12 @@ public class WorkChunk implements IModelJson {
myErrorMessage = theErrorMessage;
return this;
}
public void setUpdateTime(Date theUpdateTime) {
myUpdateTime = theUpdateTime;
}
public Date getUpdateTime() {
return myUpdateTime;
}
}

View File

@ -199,6 +199,7 @@ public class WorkChunkProcessorTest {
.thenReturn(true);
when(myJobPersistence.fetchAllWorkChunksForStepIterator(eq(INSTANCE_ID), eq(REDUCTION_STEP_ID)))
.thenReturn(chunks.iterator());
when(myJobPersistence.markInstanceAsStatus(eq(INSTANCE_ID), eq(StatusEnum.FINALIZE))).thenReturn(true);
when(myReductionStep.consume(any(ChunkExecutionDetails.class)))
.thenReturn(ChunkOutcome.SUCCESS());
when(myReductionStep.run(
@ -260,6 +261,7 @@ public class WorkChunkProcessorTest {
.thenReturn(true);
when(myJobPersistence.fetchAllWorkChunksForStepIterator(eq(INSTANCE_ID), eq(REDUCTION_STEP_ID)))
.thenReturn(chunks.iterator());
when(myJobPersistence.markInstanceAsStatus(eq(INSTANCE_ID), eq(StatusEnum.FINALIZE))).thenReturn(true);
doThrow(new RuntimeException(errorMsg))
.when(myReductionStep).consume(any(ChunkExecutionDetails.class));
@ -308,6 +310,7 @@ public class WorkChunkProcessorTest {
.thenReturn(true);
when(myJobPersistence.fetchAllWorkChunksForStepIterator(eq(INSTANCE_ID), eq(REDUCTION_STEP_ID)))
.thenReturn(chunks.iterator());
when(myJobPersistence.markInstanceAsStatus(eq(INSTANCE_ID), eq(StatusEnum.FINALIZE))).thenReturn(true);
when(myReductionStep.consume(any(ChunkExecutionDetails.class)))
.thenReturn(ChunkOutcome.SUCCESS())
.thenReturn(new ChunkOutcome(ChunkOutcome.Status.FAIL));
@ -351,6 +354,7 @@ public class WorkChunkProcessorTest {
// when
when(workCursor.isReductionStep())
.thenReturn(true);
when(myJobPersistence.markInstanceAsStatus(eq(INSTANCE_ID), eq(StatusEnum.FINALIZE))).thenReturn(true);
when(myJobPersistence.fetchAllWorkChunksForStepIterator(eq(INSTANCE_ID), eq(REDUCTION_STEP_ID)))
.thenReturn(chunks.iterator());
when(myReductionStep.consume(any(ChunkExecutionDetails.class)))

View File

@ -15,7 +15,7 @@ class StatusEnumTest {
}
@Test
public void testNotEndedStatuses() {
assertThat(StatusEnum.getNotEndedStatuses(), containsInAnyOrder(StatusEnum.QUEUED, StatusEnum.IN_PROGRESS));
assertThat(StatusEnum.getNotEndedStatuses(), containsInAnyOrder(StatusEnum.QUEUED, StatusEnum.IN_PROGRESS, StatusEnum.FINALIZE));
}
@ParameterizedTest
@ -61,6 +61,11 @@ class StatusEnumTest {
"FAILED, CANCELLED, false",
"FAILED, ERRORED, false",
"FAILED, FAILED, true",
"FINALIZE, COMPLETED, true",
"FINALIZE, IN_PROGRESS, false",
"FINALIZE, QUEUED, false",
"FINALIZE, FAILED, true",
"FINALIZE, ERRORED, true",
})
public void testStateTransition(StatusEnum origStatus, StatusEnum newStatus, boolean expected) {
assertEquals(expected, StatusEnum.isLegalStateTransition(origStatus, newStatus));
@ -68,6 +73,6 @@ class StatusEnumTest {
@Test
public void testEnumSize() {
assertEquals(6, StatusEnum.values().length, "Update testStateTransition() with new cases");
assertEquals(7, StatusEnum.values().length, "Update testStateTransition() with new cases");
}
}

View File

@ -7,7 +7,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-deployable-pom</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>

View File

@ -6,7 +6,7 @@
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<packaging>pom</packaging>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<name>HAPI-FHIR</name>
<description>An open-source implementation of the FHIR specification in Java.</description>
<url>https://hapifhir.io</url>
@ -2132,7 +2132,7 @@
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir-checkstyle</artifactId>
<!-- Remember to bump this when you upgrade the version -->
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
</dependency>
</dependencies>
</plugin>

View File

@ -6,7 +6,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>

View File

@ -4,7 +4,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>ca.uhn.hapi.fhir</groupId>
<artifactId>hapi-fhir</artifactId>
<version>6.3.13-SNAPSHOT</version>
<version>6.3.14-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>