Batch2 workchunk states hapi (#5851)
* step 1
* updated batch 2 framework with READY state
* spotless
* remove entity manager
* spotless
* fixing up more tests for batch2
* updating documentation
* cleanup
* removing checkstyle violation
* code review points
* review points continued
* review points finished
* updating tests
* updates
* spotless
* updated
* step 1
* updated
* sketch out test cases
* basic state transition shell work
* typos
* spotless
* adding spy override
* fixing tests
* spotless
* changing comment to complete build
* fixing some tests and adding a view
* adding different paging mechanism
* spotless
* waiting step 1
* commit changes
* remove text
* review fixes
* spotless
* some tweaks
* updating documentation and adding change log
* spotless
* added documentation
* review comments 1
* more review fixes
* spotless
* fixing bug
* fixing path
* spotless
* update state diagram
* review points round 1
* revert
* updating diag
* review fixes round 2
* spotless
* Implemented GATE_WAITING state for the batch2 state machine. This will be the initial status for all workchunks of a gated job. Made compatible with the equivalent "fake QUEUED" state in the old batch2 implementation. Updated corresponding docs. Added corresponding tests and changelog.
* Revert "Implemented GATE_WAITING state for the batch2 state machine." This reverts commit 32a00f4b81.
* Implemented GATE_WAITING state for the batch2 state machine. This will be the initial status for all workchunks of a gated job. Made compatible with the equivalent "fake QUEUED" state in the old batch2 implementation. Updated corresponding docs. Added corresponding tests and changelog.
* fixing a bug
* spotless
* fixing
* fix merge conflicts; set the first chunk to always be created in READY
* have only one path through the equeueReady method; fixed tests
* hid the over-powered transition function behind a proper state action
* spotless
* resolved review comments
* fixing tests
* resolved review comments
* resolved review comments
* resolved review comments
* resolved review comments
* resolved review comments
* updating migration script number
* fixed bugs
* spotless
* fix test high concurrency
* fixing a test
* code fix
* fixing tests in bulkexportit
* fixing tests
* fixing tests
* cleanup
* completed instance will not be sent to the reduction step service
* Revert "completed instance will not be sent to the reduction step service" This reverts commit aa149b6691.
* Revert "Revert "completed instance will not be sent to the reduction step service"" This reverts commit e18f5796a1.
* removing dead code
* changed db query for step advance to take statuses as a parameter instead
* test fixes
* spotless
* test fix
* spotless
* fixing tests
* migration fix
* fixing test
* testing pipeline with `testGroupBulkExportNotInGroup_DoesNotShowUp` disabled
* fixing some tests
* Add new race test for simultaneous queue/dequeue
* re-enabling `testGroupBulkExportNotInGroup_DoesNotShowUp`
* cascade tag deletes
* test fixes
* some logging
* a test case
* adding job id
* more test code
* marking purge checks
* test fix
* testing
* pausing schedulers on cleanup
* adding a wait
* max thread count guarantee
* fixing the tests again
* removing dead code
* spotless
* checking
* msg codes
* Fixing a test
* review points
* spotless
* required pom values
* step 1 of reduction ready
* update
* reduction ready
* another test
* spotless
* cleanup
* cleanup
* simplifying check in reduction step
* review fixes
* updating version
* using 7.3.1
* adding check
* test finessing

---------

Co-authored-by: leif stawnyczy <leifstawnyczy@leifs-mbp.home>
Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>
Co-authored-by: tyner <tyner.guo@smilecdr.com>
This commit is contained in:
parent b555498c9b
commit ae67e7b55e
@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
@@ -26,13 +26,16 @@ import java.util.LinkedList;
 import java.util.NoSuchElementException;
 import java.util.function.Consumer;
 
+/**
+ * This paging iterator only works with already ordered queries
+ */
 public class PagingIterator<T> implements Iterator<T> {
 
 	public interface PageFetcher<T> {
 		void fetchNextPage(int thePageIndex, int theBatchSize, Consumer<T> theConsumer);
 	}
 
-	static final int PAGE_SIZE = 100;
+	static final int DEFAULT_PAGE_SIZE = 100;
 
 	private int myPage;
 
@@ -42,8 +45,16 @@ public class PagingIterator<T> implements Iterator<T> {
 
 	private final PageFetcher<T> myFetcher;
 
+	private final int myPageSize;
+
 	public PagingIterator(PageFetcher<T> theFetcher) {
+		this(DEFAULT_PAGE_SIZE, theFetcher);
+	}
+
+	public PagingIterator(int thePageSize, PageFetcher<T> theFetcher) {
+		assert thePageSize > 0 : "Page size must be a positive value";
 		myFetcher = theFetcher;
+		myPageSize = thePageSize;
 	}
 
 	@Override
@@ -66,9 +77,9 @@ public class PagingIterator<T> implements Iterator<T> {
 
 	private void fetchNextBatch() {
 		if (!myIsFinished && myCurrentBatch.isEmpty()) {
-			myFetcher.fetchNextPage(myPage, PAGE_SIZE, myCurrentBatch::add);
+			myFetcher.fetchNextPage(myPage, myPageSize, myCurrentBatch::add);
 			myPage++;
-			myIsFinished = myCurrentBatch.size() < PAGE_SIZE;
+			myIsFinished = myCurrentBatch.size() < myPageSize;
 		}
 	}
 }

@@ -62,7 +62,7 @@ public class PagingIteratorTest {
 	public void next_fetchTest_fetchesAndReturns() {
 		// 3 cases to make sure we get the edge cases
 		for (int adj : new int[] { -1, 0, 1 }) {
-			int size = PagingIterator.PAGE_SIZE + adj;
+			int size = PagingIterator.DEFAULT_PAGE_SIZE + adj;
 
 			myPagingIterator = createPagingIterator(size);
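For context, a minimal sketch of how a caller might drive the new page-size constructor; the in-memory `orderedRows` source is hypothetical and stands in for any already ordered query:

```java
import ca.uhn.fhir.model.api.PagingIterator;

import java.util.List;

public class PagingIteratorExample {
	public static void main(String[] args) {
		// Hypothetical in-memory rows standing in for an already ordered query
		List<String> orderedRows = List.of("a", "b", "c", "d", "e");

		// Page size of 2; the fetcher hands each row of the requested page to the consumer
		PagingIterator<String> it = new PagingIterator<>(2, (pageIndex, batchSize, consumer) ->
				orderedRows.stream()
						.skip((long) pageIndex * batchSize)
						.limit(batchSize)
						.forEach(consumer));

		while (it.hasNext()) {
			System.out.println(it.next()); // prints a..e, fetched two rows at a time
		}
	}
}
```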
@@ -4,7 +4,7 @@
 	<modelVersion>4.0.0</modelVersion>
 	<groupId>ca.uhn.hapi.fhir</groupId>
 	<artifactId>hapi-fhir-bom</artifactId>
-	<version>7.3.0-SNAPSHOT</version>
+	<version>7.3.1-SNAPSHOT</version>
 
 	<packaging>pom</packaging>
 	<name>HAPI FHIR BOM</name>

@@ -12,7 +12,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -4,7 +4,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -6,7 +6,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir-cli</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -4,7 +4,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -4,7 +4,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
@@ -0,0 +1,10 @@
+---
+type: add
+issue: 5745
+title: "Added another state to the Batch2 work chunk state machine: `READY`.
+  This work chunk state will be the initial state on creation.
+  Once queued for delivery, work chunks will transition to `QUEUED`.
+  The exception is ReductionStep chunks (because reduction steps
+  are not read off of the queue, but executed inline by the maintenance job).
+  "
@@ -0,0 +1,9 @@
+---
+type: add
+issue: 5767
+title: "Added new `POLL_WAITING` state for WorkChunks in batch jobs.
+  Also added RetryChunkLaterException for jobs that have steps that
+  need to be retried at a later time (a delay can optionally be provided to the exception).
+  If a step throws this new exception, the work chunk will be set to the new
+  `POLL_WAITING` status and retried at a later time.
+  "
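A minimal sketch of how a step might use this. `StepExecutionDetails`, `IJobDataSink` and `RunOutcome` are the existing batch2 step API; `MyParams`/`MyInput`/`MyOutput` and the readiness check are placeholders, and the `Duration`-accepting constructor is an assumption based on the changelog wording:

```java
import java.time.Duration;

// Sketch of a step worker that asks to be polled again later.
public class PollingStepWorkerSketch implements IJobStepWorker<MyParams, MyInput, MyOutput> {
	@Override
	public RunOutcome run(StepExecutionDetails<MyParams, MyInput> theDetails, IJobDataSink<MyOutput> theSink) {
		if (!downstreamSystemReady()) {
			// The chunk moves IN_PROGRESS -> POLL_WAITING; the maintenance job
			// returns it to READY once the delay elapses.
			throw new RetryChunkLaterException(Duration.ofMinutes(1));
		}
		// ... normal processing would emit data via theSink ...
		return new RunOutcome(0); // records processed
	}

	private boolean downstreamSystemReady() {
		return true; // placeholder readiness check
	}
}
```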
@@ -0,0 +1,7 @@
+---
+type: add
+issue: 5818
+title: "Added another state to the Batch2 work chunk state machine: `GATE_WAITING`.
+  This work chunk state will be the initial state on creation for gated jobs.
+  Once all chunks are completed for the previous step, they will transition to `READY`.
+  "
@@ -47,24 +47,35 @@ stateDiagram-v2
 title: Batch2 Job Work Chunk state transitions
 ---
 stateDiagram-v2
+   state GATE_WAITING
+   state READY
+   state REDUCTION_READY
    state QUEUED
    state on_receive <<choice>>
    state IN_PROGRESS
    state ERROR
+   state POLL_WAITING
    state execute <<choice>>
    state FAILED
    state COMPLETED
    direction LR
-   [*] --> QUEUED : on create
+   [*] --> READY : on create - normal or gated jobs' first chunks
+   [*] --> GATE_WAITING : on create - gated jobs, for all but the first chunks of the first step
+   GATE_WAITING --> READY : on gate release - gated
+   GATE_WAITING --> REDUCTION_READY : on gate release for the final reduction step (all reduction jobs are gated)
+   QUEUED --> READY : on gate release - gated (for compatibility with the legacy QUEUED state up to HAPI FHIR version 7.1)
+   READY --> QUEUED : placed on kafka (maint.)
+   POLL_WAITING --> READY : after the poll delay on a POLL_WAITING work chunk has elapsed
 
    %% worker processing states
    QUEUED --> on_receive : on deque by worker
    on_receive --> IN_PROGRESS : start execution
 
    IN_PROGRESS --> execute : execute
    execute --> ERROR : on re-triable error
    execute --> COMPLETED : success\n maybe trigger instance first_step_finished
    execute --> FAILED : on unrecoverable \n or too many errors
+   execute --> POLL_WAITING : job step has thrown a RetryChunkLaterException and must be retried after the provided poll delay
 
    %% temporary error state until retry
    ERROR --> on_receive : exception rollback\n triggers redelivery
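As a reading aid only (not the actual `WorkChunkStatusEnum` implementation, where the authoritative rules live), the transitions above can be summarized as a map:

```java
import java.util.Map;
import java.util.Set;

// Illustrative summary of the diagram above.
enum ChunkState { GATE_WAITING, READY, REDUCTION_READY, QUEUED, IN_PROGRESS, POLL_WAITING, ERROR, FAILED, COMPLETED }

class ChunkTransitions {
	static final Map<ChunkState, Set<ChunkState>> ALLOWED = Map.of(
			ChunkState.GATE_WAITING, Set.of(ChunkState.READY, ChunkState.REDUCTION_READY),
			ChunkState.READY, Set.of(ChunkState.QUEUED),
			// QUEUED -> READY exists only for legacy (pre-7.1 style) gated chunks
			ChunkState.QUEUED, Set.of(ChunkState.IN_PROGRESS, ChunkState.READY),
			ChunkState.IN_PROGRESS, Set.of(ChunkState.COMPLETED, ChunkState.ERROR, ChunkState.FAILED, ChunkState.POLL_WAITING),
			ChunkState.POLL_WAITING, Set.of(ChunkState.READY),
			ChunkState.ERROR, Set.of(ChunkState.IN_PROGRESS));
}
```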
@@ -19,36 +19,54 @@ A HAPI-FHIR batch job definition consists of a job name, version, parameter json
 After a job has been defined, *instances* of that job can be submitted for batch processing by populating a `JobInstanceStartRequest` with the job name and job parameters json and then submitting that request to the Batch Job Coordinator.
 
 The Batch Job Coordinator will then store two records in the database:
-- Job Instance with status QUEUED: that is the parent record for all data concerning this job
-- Batch Work Chunk with status QUEUED: this describes the first "chunk" of work required for this job. The first Batch Work Chunk contains no data.
+- Job Instance with status `QUEUED`: that is the parent record for all data concerning this job
+- Batch Work Chunk with status `READY`: this describes the first "chunk" of work required for this job. The first Batch Work Chunk contains no data.
 
-Lastly the Batch Job Coordinator publishes a message to the Batch Notification Message Channel (named `batch2-work-notification`) to inform worker threads that this first chunk of work is now ready for processing.
+### The Maintenance Job
 
-### Job Processing - First Step
+A Scheduled Job runs periodically (once a minute). For each Job Instance in the database, it:
 
-HAPI-FHIR Batch Jobs run based on job notification messages. The process is kicked off by the first chunk of work. When this notification message arrives, the message handler makes a single call to the first step defined in the job definition, passing in the job parameters as input.
+1. Moves all `POLL_WAITING` work chunks to `READY` if their `nextPollTime` has expired.
+1. Calculates job progress (% of work chunks in `COMPLETE` status). If the job is finished, purges any leftover work chunks still in the database.
+1. Cleans up any complete, failed, or cancelled jobs that need to be removed.
+1. When the current step is complete, moves any gated jobs onto their next step and updates all chunks in `GATE_WAITING` to `READY`. If the job is being moved to its final reduction step, chunks are moved from `GATE_WAITING` to `REDUCTION_READY`.
+1. If the final step of a gated job is a reduction step, a reduction step execution will be triggered. All work chunks for the job in `REDUCTION_READY` will be consumed at this point.
+1. Moves all `READY` work chunks into the `QUEUED` state and publishes a message to the Batch Notification Message Channel to inform worker threads that a work chunk is now ready for processing. \*
 
-The handler then does the following:
-1. Change the work chunk status from QUEUED to IN_PROGRESS
-2. Change the Job Instance status from QUEUED to IN_PROGRESS
-3. If the Job Instance is cancelled, change the status to CANCELLED and abort processing.
-4. The first step of the job definition is executed with the job parameters
-5. This step creates new work chunks. For each work chunk it creates, it json serializes the work chunk data, stores it in the database, and publishes a new message to the Batch Notification Message Channel to notify worker threads that there are new work chunks waiting to be processed.
-6. If the step succeeded, the work chunk status is changed from IN_PROGRESS to COMPLETED, and the data it contained is deleted.
-7. If the step failed, the work chunk status is changed from IN_PROGRESS to either ERRORED or FAILED depending on the severity of the error.
+\* An exception is the final reduction step, whose work chunks are not published to the Batch Notification Message Channel but instead processed inline.
 
-### Job Processing - Middle steps
+### Batch Notification Message Handler
 
-Middle Steps in the job definition are executed in the same way, except instead of only using the Job Parameters as input, they use both the Job Parameters and the Work Chunk data produced from the previous step.
+HAPI-FHIR Batch Jobs run based on job notification messages of the Batch Notification Message Channel (named `batch2-work-notification`).
 
-### Job Processing - Final Step
+When a notification message arrives, the handler does the following:
+
+1. Change the work chunk status from `QUEUED` to `IN_PROGRESS`
+1. Change the Job Instance status from `QUEUED` to `IN_PROGRESS`
+1. If the Job Instance is cancelled, change the status to `CANCELLED` and abort processing
+1. If the step creates new work chunks, each work chunk will be created in either the `GATE_WAITING` state (for gated jobs) or `READY` state (for non-gated jobs) and will be handled in the next maintenance job pass.
+1. If the step succeeds, the work chunk status is changed from `IN_PROGRESS` to `COMPLETED`, and the data it contained is deleted.
+1. If the step throws a `RetryChunkLaterException`, the work chunk status is changed from `IN_PROGRESS` to `POLL_WAITING`, and a `nextPollTime` value will be set.
+1. If the step fails, the work chunk status is changed from `IN_PROGRESS` to either `ERRORED` or `FAILED`, depending on the severity of the error.
+
+### First Step
+
+The first step in a job definition is executed with just the job parameters.
+
+### Middle steps
+
+Middle Steps in the job definition are executed using the initial Job Parameters and the Work Chunk data produced from the previous step.
+
+### Final Step
 
 The final step operates the same way as the middle steps, except it does not produce any new work chunks.
 
 ### Gated Execution
 
-If a Job Definition is set to having Gated Execution, then all work chunks for one step must be COMPLETED before any work chunks for the next step may begin.
+If a Job Definition is set to having Gated Execution, then all work chunks for a step must be `COMPLETED` before any work chunks for the next step may begin.
 
 ### Job Instance Completion
 
-A Batch Job Maintenance Service runs every minute to monitor the status of all Job Instances and the Job Instance is transitioned to either COMPLETED, ERRORED or FAILED according to the status of all outstanding work chunks for that job instance. If the job instance is still IN_PROGRESS this maintenance service also estimates the time remaining to complete the job.
+A Batch Job Maintenance Service runs every minute to monitor the status of all Job Instances and the Job Instance is transitioned to either `COMPLETED`, `ERRORED` or `FAILED` according to the status of all outstanding work chunks for that job instance. If the job instance is still `IN_PROGRESS` this maintenance service also estimates the time remaining to complete the job.
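For illustration, a sketch of how a gated job definition might be registered. The builder methods shown here reflect the existing batch2 `JobDefinition` API as best understood; the step ids, descriptions, parameter/output types, and worker beans are placeholders:

```java
// Sketch only: gatedExecution() makes the maintenance job hold each step's
// chunks in GATE_WAITING until the previous step's chunks are all COMPLETED.
JobDefinition<MyParams> definition = JobDefinition.newBuilder()
		.setJobDefinitionId("example-gated-job") // placeholder id
		.setJobDefinitionVersion(1)
		.setParametersType(MyParams.class) // placeholder parameters type
		.gatedExecution()
		.addFirstStep("fetch", "Fetch candidate ids", MyStepOutput.class, myFetchWorker)
		.addLastStep("report", "Write the final report", myReportWorker)
		.build();
```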
@@ -11,7 +11,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -4,7 +4,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
@@ -37,6 +37,17 @@ public interface IHapiScheduler {
 
 	void logStatusForUnitTest();
 
+	/**
+	 * Pauses this scheduler (and thus all scheduled jobs).
+	 * To restart call {@link #unpause()}
+	 */
+	void pause();
+
+	/**
+	 * Restarts this scheduler after {@link #pause()}
+	 */
+	void unpause();
+
 	void scheduleJob(long theIntervalMillis, ScheduledJobDefinition theJobDefinition);
 
 	Set<JobKey> getJobKeysForUnitTest() throws SchedulerException;
@@ -32,6 +32,20 @@ public interface ISchedulerService {
 
 	void logStatusForUnitTest();
 
+	/**
+	 * Pauses the scheduler so no new jobs will run.
+	 * Useful in tests when cleanup needs to happen but scheduled jobs may
+	 * be running
+	 */
+	@VisibleForTesting
+	void pause();
+
+	/**
+	 * Restarts the scheduler after a previous call to {@link #pause()}.
+	 */
+	@VisibleForTesting
+	void unpause();
+
 	/**
 	 * This task will execute locally (and should execute on all nodes of the cluster if there is a cluster)
 	 * @param theIntervalMillis How many milliseconds between passes should this job run

@@ -52,6 +66,9 @@ public interface ISchedulerService {
 	@VisibleForTesting
 	Set<JobKey> getClusteredJobKeysForUnitTest() throws SchedulerException;
 
+	@VisibleForTesting
+	boolean isSchedulingDisabled();
+
 	boolean isStopping();
 
 	/**
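A sketch of the intended test-time usage; `mySchedulerService` stands in for an injected `ISchedulerService` and the cleanup helper is hypothetical:

```java
@Test
public void cleanupWithoutSchedulerInterference() {
	mySchedulerService.pause(); // stop new scheduled passes from starting
	try {
		deleteAllTestData(); // hypothetical cleanup that must not race the maintenance job
	} finally {
		mySchedulerService.unpause(); // restart the schedulers for later tests
	}
}
```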
@@ -29,6 +29,7 @@ import com.google.common.collect.Sets;
 import jakarta.annotation.Nonnull;
 import org.apache.commons.lang3.Validate;
 import org.quartz.JobDataMap;
+import org.quartz.JobExecutionContext;
 import org.quartz.JobKey;
 import org.quartz.ScheduleBuilder;
 import org.quartz.Scheduler;

@@ -44,11 +45,14 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.scheduling.quartz.SchedulerFactoryBean;
 
+import java.util.List;
 import java.util.Properties;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.stream.Collectors;
 
+import static org.apache.commons.lang3.StringUtils.isNotBlank;
+
 public abstract class BaseHapiScheduler implements IHapiScheduler {
 	private static final Logger ourLog = LoggerFactory.getLogger(BaseHapiScheduler.class);

@@ -151,6 +155,42 @@ public abstract class BaseHapiScheduler implements IHapiScheduler {
 		}
 	}
 
+	public void pause() {
+		int delay = 100;
+		String errorMsg = null;
+		Throwable ex = null;
+		try {
+			int count = 0;
+			myScheduler.standby();
+			while (count < 3) {
+				if (!hasRunningJobs()) {
+					break;
+				}
+				Thread.sleep(delay);
+				count++;
+			}
+			if (count >= 3) {
+				errorMsg = "Scheduler on standby. But after " + (count + 1) * delay
+						+ " ms there are still jobs running. Execution will continue, but may cause bugs.";
+			}
+		} catch (Exception x) {
+			ex = x;
+			errorMsg = "Failed to set to standby. Execution will continue, but may cause bugs.";
+		}
+
+		if (isNotBlank(errorMsg)) {
+			if (ex != null) {
+				ourLog.warn(errorMsg, ex);
+			} else {
+				ourLog.warn(errorMsg);
+			}
+		}
+	}
+
+	public void unpause() {
+		start();
+	}
+
 	@Override
 	public void clear() throws SchedulerException {
 		myScheduler.clear();

@@ -168,6 +208,16 @@ public abstract class BaseHapiScheduler implements IHapiScheduler {
 		}
 	}
 
+	private boolean hasRunningJobs() {
+		try {
+			List<JobExecutionContext> currentlyExecutingJobs = myScheduler.getCurrentlyExecutingJobs();
+			ourLog.info("Checking for running jobs. Found {} running.", currentlyExecutingJobs);
+			return !currentlyExecutingJobs.isEmpty();
+		} catch (SchedulerException ex) {
+			throw new RuntimeException(Msg.code(2521) + " Failed during check for scheduled jobs", ex);
+		}
+	}
+
 	@Override
 	public void scheduleJob(long theIntervalMillis, ScheduledJobDefinition theJobDefinition) {
 		Validate.isTrue(theIntervalMillis >= 100);
@@ -136,7 +136,7 @@ public abstract class BaseSchedulerServiceImpl implements ISchedulerService {
 		return retval;
 	}
 
-	private boolean isSchedulingDisabled() {
+	public boolean isSchedulingDisabled() {
 		return !isLocalSchedulingEnabled() || isSchedulingDisabledForUnitTests();
 	}

@@ -198,6 +198,18 @@ public abstract class BaseSchedulerServiceImpl implements ISchedulerService {
 		myClusteredScheduler.logStatusForUnitTest();
 	}
 
+	@Override
+	public void pause() {
+		myLocalScheduler.pause();
+		myClusteredScheduler.pause();
+	}
+
+	@Override
+	public void unpause() {
+		myLocalScheduler.unpause();
+		myClusteredScheduler.unpause();
+	}
+
 	@Override
 	public void scheduleLocalJob(long theIntervalMillis, ScheduledJobDefinition theJobDefinition) {
 		scheduleJob("local", myLocalScheduler, theIntervalMillis, theJobDefinition);
@@ -53,6 +53,16 @@ public class HapiNullScheduler implements IHapiScheduler {
 	@Override
 	public void logStatusForUnitTest() {}
 
+	@Override
+	public void pause() {
+		// nothing to do
+	}
+
+	@Override
+	public void unpause() {
+		// nothing to do
+	}
+
 	@Override
 	public void scheduleJob(long theIntervalMillis, ScheduledJobDefinition theJobDefinition) {
 		ourLog.debug("Skipping scheduling job {} since scheduling is disabled", theJobDefinition.getId());
@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
@@ -123,6 +123,8 @@ class JobInstanceUtil {
 		retVal.setErrorMessage(theEntity.getErrorMessage());
 		retVal.setErrorCount(theEntity.getErrorCount());
 		retVal.setRecordsProcessed(theEntity.getRecordsProcessed());
+		retVal.setNextPollTime(theEntity.getNextPollTime());
+		retVal.setPollAttempts(theEntity.getPollAttempts());
 		// note: may be null out if queried NoData
 		retVal.setData(theEntity.getSerializedData());
 		retVal.setWarningMessage(theEntity.getWarningMessage());
@@ -24,6 +24,7 @@ import ca.uhn.fhir.batch2.config.BaseBatch2Config;
 import ca.uhn.fhir.interceptor.api.IInterceptorBroadcaster;
 import ca.uhn.fhir.jpa.bulk.export.job.BulkExportJobConfig;
 import ca.uhn.fhir.jpa.dao.data.IBatch2JobInstanceRepository;
+import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkMetadataViewRepository;
 import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkRepository;
 import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;
 import jakarta.persistence.EntityManager;

@@ -39,12 +40,14 @@ public class JpaBatch2Config extends BaseBatch2Config {
 	public IJobPersistence batch2JobInstancePersister(
 			IBatch2JobInstanceRepository theJobInstanceRepository,
 			IBatch2WorkChunkRepository theWorkChunkRepository,
+			IBatch2WorkChunkMetadataViewRepository theWorkChunkMetadataViewRepo,
 			IHapiTransactionService theTransactionService,
 			EntityManager theEntityManager,
 			IInterceptorBroadcaster theInterceptorBroadcaster) {
 		return new JpaJobPersistenceImpl(
 				theJobInstanceRepository,
 				theWorkChunkRepository,
+				theWorkChunkMetadataViewRepo,
 				theTransactionService,
 				theEntityManager,
 				theInterceptorBroadcaster);
@@ -28,16 +28,19 @@ import ca.uhn.fhir.batch2.model.WorkChunk;
 import ca.uhn.fhir.batch2.model.WorkChunkCompletionEvent;
 import ca.uhn.fhir.batch2.model.WorkChunkCreateEvent;
 import ca.uhn.fhir.batch2.model.WorkChunkErrorEvent;
+import ca.uhn.fhir.batch2.model.WorkChunkMetadata;
 import ca.uhn.fhir.batch2.model.WorkChunkStatusEnum;
 import ca.uhn.fhir.batch2.models.JobInstanceFetchRequest;
 import ca.uhn.fhir.interceptor.api.HookParams;
 import ca.uhn.fhir.interceptor.api.IInterceptorBroadcaster;
 import ca.uhn.fhir.interceptor.api.Pointcut;
 import ca.uhn.fhir.jpa.dao.data.IBatch2JobInstanceRepository;
+import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkMetadataViewRepository;
 import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkRepository;
 import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;
 import ca.uhn.fhir.jpa.entity.Batch2JobInstanceEntity;
 import ca.uhn.fhir.jpa.entity.Batch2WorkChunkEntity;
+import ca.uhn.fhir.jpa.entity.Batch2WorkChunkMetadataView;
 import ca.uhn.fhir.model.api.PagingIterator;
 import ca.uhn.fhir.rest.api.server.RequestDetails;
 import ca.uhn.fhir.rest.api.server.SystemRequestDetails;

@@ -64,7 +67,10 @@ import org.springframework.transaction.annotation.Propagation;
 import org.springframework.transaction.annotation.Transactional;
 import org.springframework.transaction.support.TransactionSynchronizationManager;
 
+import java.time.Instant;
+import java.util.Collections;
 import java.util.Date;
+import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Objects;

@@ -85,6 +91,7 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 
 	private final IBatch2JobInstanceRepository myJobInstanceRepository;
 	private final IBatch2WorkChunkRepository myWorkChunkRepository;
+	private final IBatch2WorkChunkMetadataViewRepository myWorkChunkMetadataViewRepo;
 	private final EntityManager myEntityManager;
 	private final IHapiTransactionService myTransactionService;
 	private final IInterceptorBroadcaster myInterceptorBroadcaster;

@@ -95,13 +102,15 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 	public JpaJobPersistenceImpl(
 			IBatch2JobInstanceRepository theJobInstanceRepository,
 			IBatch2WorkChunkRepository theWorkChunkRepository,
+			IBatch2WorkChunkMetadataViewRepository theWorkChunkMetadataViewRepo,
 			IHapiTransactionService theTransactionService,
 			EntityManager theEntityManager,
 			IInterceptorBroadcaster theInterceptorBroadcaster) {
-		Validate.notNull(theJobInstanceRepository);
-		Validate.notNull(theWorkChunkRepository);
+		Validate.notNull(theJobInstanceRepository, "theJobInstanceRepository");
+		Validate.notNull(theWorkChunkRepository, "theWorkChunkRepository");
 		myJobInstanceRepository = theJobInstanceRepository;
 		myWorkChunkRepository = theWorkChunkRepository;
+		myWorkChunkMetadataViewRepo = theWorkChunkMetadataViewRepo;
 		myTransactionService = theTransactionService;
 		myEntityManager = theEntityManager;
 		myInterceptorBroadcaster = theInterceptorBroadcaster;

@@ -120,23 +129,46 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 		entity.setSerializedData(theBatchWorkChunk.serializedData);
 		entity.setCreateTime(new Date());
 		entity.setStartTime(new Date());
-		entity.setStatus(WorkChunkStatusEnum.QUEUED);
+		entity.setStatus(getOnCreateStatus(theBatchWorkChunk));
 
 		ourLog.debug("Create work chunk {}/{}/{}", entity.getInstanceId(), entity.getId(), entity.getTargetStepId());
 		ourLog.trace(
 				"Create work chunk data {}/{}: {}", entity.getInstanceId(), entity.getId(), entity.getSerializedData());
 		myTransactionService.withSystemRequestOnDefaultPartition().execute(() -> myWorkChunkRepository.save(entity));
 
 		return entity.getId();
 	}
 
+	/**
+	 * Gets the initial onCreate state for the given workchunk.
+	 * Gated job chunks start in GATE_WAITING; they will be transitioned to READY during the maintenance pass when all
+	 * chunks in the previous step are COMPLETED.
+	 * Non-gated job chunks start in READY.
+	 */
+	private static WorkChunkStatusEnum getOnCreateStatus(WorkChunkCreateEvent theBatchWorkChunk) {
+		if (theBatchWorkChunk.isGatedExecution) {
+			return WorkChunkStatusEnum.GATE_WAITING;
+		} else {
+			return WorkChunkStatusEnum.READY;
+		}
+	}
+
 	@Override
 	@Transactional(propagation = Propagation.REQUIRED)
 	public Optional<WorkChunk> onWorkChunkDequeue(String theChunkId) {
+		// take a lock on the chunk id to ensure that the maintenance run isn't doing anything.
+		Batch2WorkChunkEntity chunkLock =
+				myEntityManager.find(Batch2WorkChunkEntity.class, theChunkId, LockModeType.PESSIMISTIC_WRITE);
+		// remove from the current state to avoid stale data.
+		myEntityManager.detach(chunkLock);
+
 		// NOTE: Ideally, IN_PROGRESS wouldn't be allowed here. On chunk failure, we probably shouldn't be allowed.
 		// But how does re-run happen if k8s kills a processor mid run?
 		List<WorkChunkStatusEnum> priorStates =
 				List.of(WorkChunkStatusEnum.QUEUED, WorkChunkStatusEnum.ERRORED, WorkChunkStatusEnum.IN_PROGRESS);
 		int rowsModified = myWorkChunkRepository.updateChunkStatusForStart(
 				theChunkId, new Date(), WorkChunkStatusEnum.IN_PROGRESS, priorStates);
 
 		if (rowsModified == 0) {
 			ourLog.info("Attempting to start chunk {} but it was already started.", theChunkId);
 			return Optional.empty();

@@ -288,6 +320,22 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 				.collect(Collectors.toList()));
 	}
 
+	@Override
+	public void enqueueWorkChunkForProcessing(String theChunkId, Consumer<Integer> theCallback) {
+		int updated = myWorkChunkRepository.updateChunkStatus(
+				theChunkId, WorkChunkStatusEnum.READY, WorkChunkStatusEnum.QUEUED);
+		theCallback.accept(updated);
+	}
+
+	@Override
+	public int updatePollWaitingChunksForJobIfReady(String theInstanceId) {
+		return myWorkChunkRepository.updateWorkChunksForPollWaiting(
+				theInstanceId,
+				Date.from(Instant.now()),
+				Set.of(WorkChunkStatusEnum.POLL_WAITING),
+				WorkChunkStatusEnum.READY);
+	}
+
 	@Override
 	@Transactional(propagation = Propagation.REQUIRES_NEW)
 	public List<JobInstance> fetchRecentInstances(int thePageSize, int thePageIndex) {

@@ -333,6 +381,16 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 		});
 	}
 
+	@Override
+	public void onWorkChunkPollDelay(String theChunkId, Date theDeadline) {
+		int updated = myWorkChunkRepository.updateWorkChunkNextPollTime(
+				theChunkId, WorkChunkStatusEnum.POLL_WAITING, Set.of(WorkChunkStatusEnum.IN_PROGRESS), theDeadline);
+
+		if (updated != 1) {
+			ourLog.warn("Expected to update 1 work chunk's poll delay; but found {}", updated);
+		}
+	}
+
 	@Override
 	public void onWorkChunkFailed(String theChunkId, String theErrorMessage) {
 		ourLog.info("Marking chunk {} as failed with message: {}", theChunkId, theErrorMessage);

@@ -383,24 +441,23 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 	}
 
 	@Override
-	@Transactional(propagation = Propagation.REQUIRES_NEW)
-	public boolean canAdvanceInstanceToNextStep(String theInstanceId, String theCurrentStepId) {
+	public Set<WorkChunkStatusEnum> getDistinctWorkChunkStatesForJobAndStep(
+			String theInstanceId, String theCurrentStepId) {
+		if (getRunningJob(theInstanceId) == null) {
+			return Collections.unmodifiableSet(new HashSet<>());
+		}
+		return myWorkChunkRepository.getDistinctStatusesForStep(theInstanceId, theCurrentStepId);
+	}
+
+	private Batch2JobInstanceEntity getRunningJob(String theInstanceId) {
 		Optional<Batch2JobInstanceEntity> instance = myJobInstanceRepository.findById(theInstanceId);
 		if (instance.isEmpty()) {
-			return false;
+			return null;
 		}
 		if (instance.get().getStatus().isEnded()) {
-			return false;
+			return null;
 		}
-		Set<WorkChunkStatusEnum> statusesForStep =
-				myWorkChunkRepository.getDistinctStatusesForStep(theInstanceId, theCurrentStepId);
-
-		ourLog.debug(
-				"Checking whether gated job can advanced to next step. [instanceId={}, currentStepId={}, statusesForStep={}]",
-				theInstanceId,
-				theCurrentStepId,
-				statusesForStep);
-		return statusesForStep.isEmpty() || statusesForStep.equals(Set.of(WorkChunkStatusEnum.COMPLETED));
+		return instance.get();
 	}
 
 	private void fetchChunks(

@@ -428,18 +485,16 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 	}
 
 	@Override
-	public List<String> fetchAllChunkIdsForStepWithStatus(
-			String theInstanceId, String theStepId, WorkChunkStatusEnum theStatusEnum) {
-		return myTransactionService
-				.withSystemRequest()
-				.withPropagation(Propagation.REQUIRES_NEW)
-				.execute(() -> myWorkChunkRepository.fetchAllChunkIdsForStepWithStatus(
-						theInstanceId, theStepId, theStatusEnum));
-	}
-
-	@Override
 	public void updateInstanceUpdateTime(String theInstanceId) {
 		myJobInstanceRepository.updateInstanceUpdateTime(theInstanceId, new Date());
 	}
 
+	@Override
+	public WorkChunk createWorkChunk(WorkChunk theWorkChunk) {
+		if (theWorkChunk.getId() == null) {
+			theWorkChunk.setId(UUID.randomUUID().toString());
+		}
+		return toChunk(myWorkChunkRepository.save(Batch2WorkChunkEntity.fromWorkChunk(theWorkChunk)));
+	}
+
 	/**

@@ -458,6 +513,15 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 				.map(this::toChunk);
 	}
 
+	@Override
+	public Page<WorkChunkMetadata> fetchAllWorkChunkMetadataForJobInStates(
+			Pageable thePageable, String theInstanceId, Set<WorkChunkStatusEnum> theStates) {
+		Page<Batch2WorkChunkMetadataView> page =
+				myWorkChunkMetadataViewRepo.fetchWorkChunkMetadataForJobInStates(thePageable, theInstanceId, theStates);
+
+		return page.map(Batch2WorkChunkMetadataView::toChunkMetadata);
+	}
+
 	@Override
 	public boolean updateInstance(String theInstanceId, JobInstanceUpdateCallback theModifier) {
 		Batch2JobInstanceEntity instanceEntity =

@@ -542,4 +606,45 @@ public class JpaJobPersistenceImpl implements IJobPersistence {
 		myInterceptorBroadcaster.callHooks(Pointcut.STORAGE_PRESTORAGE_BATCH_JOB_CREATE, params);
 	}
 
+	@Override
+	@Transactional(propagation = Propagation.REQUIRES_NEW)
+	public boolean advanceJobStepAndUpdateChunkStatus(
+			String theJobInstanceId, String theNextStepId, boolean theIsReductionStep) {
+		boolean changed = updateInstance(theJobInstanceId, instance -> {
+			if (instance.getCurrentGatedStepId().equals(theNextStepId)) {
+				// someone else beat us here. No changes
+				return false;
+			}
+			ourLog.debug("Moving gated instance {} to the next step {}.", theJobInstanceId, theNextStepId);
+			instance.setCurrentGatedStepId(theNextStepId);
+			return true;
+		});
+
+		if (changed) {
+			ourLog.debug(
+					"Updating chunk status from GATE_WAITING to READY for gated instance {} in step {}.",
+					theJobInstanceId,
+					theNextStepId);
+			WorkChunkStatusEnum nextStep =
+					theIsReductionStep ? WorkChunkStatusEnum.REDUCTION_READY : WorkChunkStatusEnum.READY;
+			// when we reach here, the current step id is equal to theNextStepId
+			// Up to 7.1, gated jobs' work chunks are created in status QUEUED but not actually queued for the
+			// workers.
+			// In order to keep them compatible, turn QUEUED chunks into READY, too.
+			// TODO: 'QUEUED' from the IN clause will be removed after 7.6.0.
+			int numChanged = myWorkChunkRepository.updateAllChunksForStepWithStatus(
+					theJobInstanceId,
+					theNextStepId,
+					List.of(WorkChunkStatusEnum.GATE_WAITING, WorkChunkStatusEnum.QUEUED),
+					nextStep);
+			ourLog.debug(
+					"Updated {} chunks of gated instance {} for step {} from fake QUEUED to READY.",
+					numChanged,
+					theJobInstanceId,
+					theNextStepId);
+		}
+
+		return changed;
+	}
 }
@ -0,0 +1,21 @@
|
||||||
|
package ca.uhn.fhir.jpa.dao.data;
|
||||||
|
|
||||||
|
import ca.uhn.fhir.batch2.model.WorkChunkStatusEnum;
|
||||||
|
import ca.uhn.fhir.jpa.entity.Batch2WorkChunkMetadataView;
|
||||||
|
import org.springframework.data.domain.Page;
|
||||||
|
import org.springframework.data.domain.Pageable;
|
||||||
|
import org.springframework.data.jpa.repository.JpaRepository;
|
||||||
|
import org.springframework.data.jpa.repository.Query;
|
||||||
|
import org.springframework.data.repository.query.Param;
|
||||||
|
|
||||||
|
import java.util.Collection;
|
||||||
|
|
||||||
|
public interface IBatch2WorkChunkMetadataViewRepository extends JpaRepository<Batch2WorkChunkMetadataView, String> {
|
||||||
|
|
||||||
|
@Query("SELECT v FROM Batch2WorkChunkMetadataView v WHERE v.myInstanceId = :instanceId AND v.myStatus IN :states "
|
||||||
|
+ " ORDER BY v.myInstanceId, v.myTargetStepId, v.myStatus, v.mySequence, v.myId ASC")
|
||||||
|
Page<Batch2WorkChunkMetadataView> fetchWorkChunkMetadataForJobInStates(
|
||||||
|
Pageable thePageRequest,
|
||||||
|
@Param("instanceId") String theInstanceId,
|
||||||
|
@Param("states") Collection<WorkChunkStatusEnum> theStates);
|
||||||
|
}
|
|
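A usage sketch for the new repository method with standard Spring Data paging (instance id and page size are illustrative assumptions; imports elided):

	// Fetch only the lightweight metadata for READY chunks, one page at a time.
	Pageable pageRequest = PageRequest.of(0, 100);
	Page<Batch2WorkChunkMetadataView> page = myWorkChunkMetadataViewRepo.fetchWorkChunkMetadataForJobInStates(
			pageRequest,
			"instance-123",                      // hypothetical instance id
			Set.of(WorkChunkStatusEnum.READY));  // states filter
	page.map(Batch2WorkChunkMetadataView::toChunkMetadata)
			.forEach(chunk -> ourLog.debug("chunk {} in state {}", chunk.getId(), chunk.getStatus()));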
@@ -49,7 +49,8 @@ public interface IBatch2WorkChunkRepository
	@Query("SELECT new Batch2WorkChunkEntity("
			+ "e.myId, e.mySequence, e.myJobDefinitionId, e.myJobDefinitionVersion, e.myInstanceId, e.myTargetStepId, e.myStatus,"
			+ "e.myCreateTime, e.myStartTime, e.myUpdateTime, e.myEndTime,"
			+ "e.myErrorMessage, e.myErrorCount, e.myRecordsProcessed, e.myWarningMessage,"
			+ "e.myNextPollTime, e.myPollAttempts"
			+ ") FROM Batch2WorkChunkEntity e WHERE e.myInstanceId = :instanceId ORDER BY e.mySequence ASC, e.myId ASC")
	List<Batch2WorkChunkEntity> fetchChunksNoData(Pageable thePageRequest, @Param("instanceId") String theInstanceId);
@@ -75,6 +76,24 @@ public interface IBatch2WorkChunkRepository
			@Param("status") WorkChunkStatusEnum theInProgress,
			@Param("warningMessage") String theWarningMessage);

	@Modifying
	@Query(
			"UPDATE Batch2WorkChunkEntity e SET e.myStatus = :status, e.myNextPollTime = :nextPollTime, e.myPollAttempts = e.myPollAttempts + 1 WHERE e.myId = :id AND e.myStatus IN(:states)")
	int updateWorkChunkNextPollTime(
			@Param("id") String theChunkId,
			@Param("status") WorkChunkStatusEnum theStatus,
			@Param("states") Set<WorkChunkStatusEnum> theInitialStates,
			@Param("nextPollTime") Date theNextPollTime);

	@Modifying
	@Query(
			"UPDATE Batch2WorkChunkEntity e SET e.myStatus = :status, e.myNextPollTime = null WHERE e.myInstanceId = :instanceId AND e.myStatus IN(:states) AND e.myNextPollTime <= :pollTime")
	int updateWorkChunksForPollWaiting(
			@Param("instanceId") String theInstanceId,
			@Param("pollTime") Date theTime,
			@Param("states") Set<WorkChunkStatusEnum> theInitialStates,
			@Param("status") WorkChunkStatusEnum theNewStatus);

	@Modifying
	@Query(
			"UPDATE Batch2WorkChunkEntity e SET e.myStatus = :status, e.myEndTime = :et, e.mySerializedData = null, e.mySerializedDataVc = null, e.myErrorMessage = :em WHERE e.myId IN(:ids)")
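Taken together, these two queries form the poll-waiting round trip. A sketch of the flow (the bean name, ids, and the exact state sets are assumptions inferred from this diff, not verbatim framework code):

	// 1) Park a chunk whose step asked to be retried later:
	//    status -> POLL_WAITING, next poll time recorded, poll attempts incremented.
	Date nextPollTime = Date.from(Instant.now().plusMillis(200));
	myWorkChunkRepository.updateWorkChunkNextPollTime(
			"chunk-1",                               // hypothetical chunk id
			WorkChunkStatusEnum.POLL_WAITING,
			Set.of(WorkChunkStatusEnum.IN_PROGRESS), // only park chunks currently being worked
			nextPollTime);

	// 2) On a later maintenance pass, wake every chunk whose poll time has elapsed:
	int woken = myWorkChunkRepository.updateWorkChunksForPollWaiting(
			"instance-123",                          // hypothetical instance id
			new Date(),                              // wakes chunks with myNextPollTime <= now
			Set.of(WorkChunkStatusEnum.POLL_WAITING),
			WorkChunkStatusEnum.READY);              // back to READY for re-dispatch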
@@ -102,6 +121,22 @@ public interface IBatch2WorkChunkRepository
			@Param("status") WorkChunkStatusEnum theInProgress,
			@Param("startStatuses") Collection<WorkChunkStatusEnum> theStartStatuses);

	@Modifying
	@Query("UPDATE Batch2WorkChunkEntity e SET e.myStatus = :newStatus WHERE e.myId = :id AND e.myStatus = :oldStatus")
	int updateChunkStatus(
			@Param("id") String theChunkId,
			@Param("oldStatus") WorkChunkStatusEnum theOldStatus,
			@Param("newStatus") WorkChunkStatusEnum theNewStatus);

	@Modifying
	@Query(
			"UPDATE Batch2WorkChunkEntity e SET e.myStatus = :newStatus WHERE e.myInstanceId = :instanceId AND e.myTargetStepId = :stepId AND e.myStatus IN ( :oldStatuses )")
	int updateAllChunksForStepWithStatus(
			@Param("instanceId") String theInstanceId,
			@Param("stepId") String theStepId,
			@Param("oldStatuses") List<WorkChunkStatusEnum> theOldStatuses,
			@Param("newStatus") WorkChunkStatusEnum theNewStatus);

	@Modifying
	@Query("DELETE FROM Batch2WorkChunkEntity e WHERE e.myInstanceId = :instanceId")
	int deleteAllForInstance(@Param("instanceId") String theInstanceId);
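For context, updateChunkStatus enables a compare-and-set style claim on a single chunk; a sketch under assumed ids (transaction handling elided):

	// Only one caller can win the QUEUED -> IN_PROGRESS transition, because the
	// UPDATE is guarded by the expected old status.
	int claimed = myWorkChunkRepository.updateChunkStatus(
			"chunk-1", WorkChunkStatusEnum.QUEUED, WorkChunkStatusEnum.IN_PROGRESS);
	if (claimed == 0) {
		// another consumer already claimed this chunk; skip it
	}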
@@ -19,6 +19,7 @@
 */
package ca.uhn.fhir.jpa.entity;

import ca.uhn.fhir.batch2.model.WorkChunk;
import ca.uhn.fhir.batch2.model.WorkChunkStatusEnum;
import jakarta.persistence.Basic;
import jakarta.persistence.Column;

@@ -50,7 +51,10 @@ import static org.apache.commons.lang3.StringUtils.left;
@Entity
@Table(
		name = "BT2_WORK_CHUNK",
		indexes = {
			@Index(name = "IDX_BT2WC_II_SEQ", columnList = "INSTANCE_ID,SEQ"),
			@Index(name = "IDX_BT2WC_II_SI_S_SEQ_ID", columnList = "INSTANCE_ID,TGT_STEP_ID,STAT,SEQ,ID")
		})
public class Batch2WorkChunkEntity implements Serializable {

	public static final int ERROR_MSG_MAX_LENGTH = 500;
@@ -125,6 +129,19 @@ public class Batch2WorkChunkEntity implements Serializable {
	@Column(name = "WARNING_MSG", length = WARNING_MSG_MAX_LENGTH, nullable = true)
	private String myWarningMessage;

	/**
	 * The next time the work chunk can attempt to rerun its work step.
	 */
	@Column(name = "NEXT_POLL_TIME", nullable = true)
	@Temporal(TemporalType.TIMESTAMP)
	private Date myNextPollTime;

	/**
	 * The number of times the work chunk has had its state set back to POLL_WAITING.
	 */
	@Column(name = "POLL_ATTEMPTS", nullable = true)
	private int myPollAttempts;

	/**
	 * Default constructor for Hibernate.
	 */
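These two columns are fed by the poll-retry path. A sketch of the worker-side trigger, modeled on the RetryChunkLaterException usage in the Batch2CoordinatorIT changes later in this diff (the step body itself is invented):

	// If the step cannot proceed yet, it throws RetryChunkLaterException; the
	// framework then sets the chunk to POLL_WAITING, records NEXT_POLL_TIME,
	// and increments POLL_ATTEMPTS before a maintenance pass retries it.
	IJobStepWorker<TestJobParameters, FirstStepOutput, SecondStepOutput> step = (details, sink) -> {
		if (!remoteSystemReady()) { // hypothetical readiness check
			throw new RetryChunkLaterException(Duration.of(200, ChronoUnit.MILLIS));
		}
		sink.accept(new SecondStepOutput());
		return RunOutcome.SUCCESS;
	};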
@@ -148,7 +165,9 @@ public class Batch2WorkChunkEntity implements Serializable {
			String theErrorMessage,
			int theErrorCount,
			Integer theRecordsProcessed,
			String theWarningMessage,
			Date theNextPollTime,
			Integer thePollAttempts) {
		myId = theId;
		mySequence = theSequence;
		myJobDefinitionId = theJobDefinitionId;

@@ -164,6 +183,32 @@ public class Batch2WorkChunkEntity implements Serializable {
		myErrorCount = theErrorCount;
		myRecordsProcessed = theRecordsProcessed;
		myWarningMessage = theWarningMessage;
		myNextPollTime = theNextPollTime;
		myPollAttempts = thePollAttempts;
	}

	public static Batch2WorkChunkEntity fromWorkChunk(WorkChunk theWorkChunk) {
		Batch2WorkChunkEntity entity = new Batch2WorkChunkEntity(
				theWorkChunk.getId(),
				theWorkChunk.getSequence(),
				theWorkChunk.getJobDefinitionId(),
				theWorkChunk.getJobDefinitionVersion(),
				theWorkChunk.getInstanceId(),
				theWorkChunk.getTargetStepId(),
				theWorkChunk.getStatus(),
				theWorkChunk.getCreateTime(),
				theWorkChunk.getStartTime(),
				theWorkChunk.getUpdateTime(),
				theWorkChunk.getEndTime(),
				theWorkChunk.getErrorMessage(),
				theWorkChunk.getErrorCount(),
				theWorkChunk.getRecordsProcessed(),
				theWorkChunk.getWarningMessage(),
				theWorkChunk.getNextPollTime(),
				theWorkChunk.getPollAttempts());
		entity.setSerializedData(theWorkChunk.getData());

		return entity;
	}

	public int getErrorCount() {
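A one-line usage sketch of the new factory (the WorkChunk argument is assumed to come from the batch2 model layer):

	// Maps an API-level WorkChunk to its JPA entity, including the two new
	// poll-related fields; the serialized payload is copied via setSerializedData.
	Batch2WorkChunkEntity entity = Batch2WorkChunkEntity.fromWorkChunk(theWorkChunk);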
@@ -299,6 +344,22 @@ public class Batch2WorkChunkEntity implements Serializable {
		myInstanceId = theInstanceId;
	}

	public Date getNextPollTime() {
		return myNextPollTime;
	}

	public void setNextPollTime(Date theNextPollTime) {
		myNextPollTime = theNextPollTime;
	}

	public int getPollAttempts() {
		return myPollAttempts;
	}

	public void setPollAttempts(int thePollAttempts) {
		myPollAttempts = thePollAttempts;
	}

	@Override
	public String toString() {
		return new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE)

@@ -318,6 +379,8 @@ public class Batch2WorkChunkEntity implements Serializable {
				.append("status", myStatus)
				.append("errorMessage", myErrorMessage)
				.append("warningMessage", myWarningMessage)
				.append("nextPollTime", myNextPollTime)
				.append("pollAttempts", myPollAttempts)
				.toString();
	}
}
@@ -0,0 +1,123 @@
package ca.uhn.fhir.jpa.entity;

import ca.uhn.fhir.batch2.model.WorkChunkMetadata;
import ca.uhn.fhir.batch2.model.WorkChunkStatusEnum;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.EnumType;
import jakarta.persistence.Enumerated;
import jakarta.persistence.Id;
import org.hibernate.annotations.Immutable;
import org.hibernate.annotations.Subselect;

import java.io.Serializable;

import static ca.uhn.fhir.batch2.model.JobDefinition.ID_MAX_LENGTH;

/**
 * A view for a Work Chunk that contains only the most necessary information
 * to satisfy the no-data path.
 */
@Entity
@Immutable
@Subselect("SELECT e.id as id, "
		+ " e.seq as seq, "
		+ " e.stat as state, "
		+ " e.instance_id as instance_id, "
		+ " e.definition_id as job_definition_id, "
		+ " e.definition_ver as job_definition_version, "
		+ " e.tgt_step_id as target_step_id "
		+ "FROM BT2_WORK_CHUNK e")
public class Batch2WorkChunkMetadataView implements Serializable {

	@Id
	@Column(name = "ID", length = ID_MAX_LENGTH)
	private String myId;

	@Column(name = "SEQ", nullable = false)
	private int mySequence;

	@Column(name = "STATE", length = ID_MAX_LENGTH, nullable = false)
	@Enumerated(EnumType.STRING)
	private WorkChunkStatusEnum myStatus;

	@Column(name = "INSTANCE_ID", length = ID_MAX_LENGTH, nullable = false)
	private String myInstanceId;

	@Column(name = "JOB_DEFINITION_ID", length = ID_MAX_LENGTH, nullable = false)
	private String myJobDefinitionId;

	@Column(name = "JOB_DEFINITION_VERSION", nullable = false)
	private int myJobDefinitionVersion;

	@Column(name = "TARGET_STEP_ID", length = ID_MAX_LENGTH, nullable = false)
	private String myTargetStepId;

	public String getId() {
		return myId;
	}

	public void setId(String theId) {
		myId = theId;
	}

	public int getSequence() {
		return mySequence;
	}

	public void setSequence(int theSequence) {
		mySequence = theSequence;
	}

	public WorkChunkStatusEnum getStatus() {
		return myStatus;
	}

	public void setStatus(WorkChunkStatusEnum theStatus) {
		myStatus = theStatus;
	}

	public String getInstanceId() {
		return myInstanceId;
	}

	public void setInstanceId(String theInstanceId) {
		myInstanceId = theInstanceId;
	}

	public String getJobDefinitionId() {
		return myJobDefinitionId;
	}

	public void setJobDefinitionId(String theJobDefinitionId) {
		myJobDefinitionId = theJobDefinitionId;
	}

	public int getJobDefinitionVersion() {
		return myJobDefinitionVersion;
	}

	public void setJobDefinitionVersion(int theJobDefinitionVersion) {
		myJobDefinitionVersion = theJobDefinitionVersion;
	}

	public String getTargetStepId() {
		return myTargetStepId;
	}

	public void setTargetStepId(String theTargetStepId) {
		myTargetStepId = theTargetStepId;
	}

	public WorkChunkMetadata toChunkMetadata() {
		WorkChunkMetadata metadata = new WorkChunkMetadata();
		metadata.setId(getId());
		metadata.setInstanceId(getInstanceId());
		metadata.setSequence(getSequence());
		metadata.setStatus(getStatus());
		metadata.setJobDefinitionId(getJobDefinitionId());
		metadata.setJobDefinitionVersion(getJobDefinitionVersion());
		metadata.setTargetStepId(getTargetStepId());
		return metadata;
	}
}
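Because the view is @Immutable and backed by a @Subselect, Hibernate treats the subquery as a read-only virtual table; a consumption sketch (the EntityManager handle and chunk id are assumptions):

	// Load and map, never persist: writes to @Immutable entities are rejected.
	Batch2WorkChunkMetadataView view = myEntityManager.find(Batch2WorkChunkMetadataView.class, "chunk-1");
	WorkChunkMetadata metadata = view.toChunkMetadata(); // no serialized data involved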
@@ -293,6 +293,23 @@ public class HapiFhirJpaMigrationTasks extends BaseMigrationTasks<VersionEnum> {

		// This fix will work for MSSQL or Oracle.
		version.addTask(new ForceIdMigrationFixTask(version.getRelease(), "20231222.1"));

		// add index to Batch2WorkChunkEntity
		Builder.BuilderWithTableName workChunkTable = version.onTable("BT2_WORK_CHUNK");

		workChunkTable
				.addIndex("20240321.1", "IDX_BT2WC_II_SI_S_SEQ_ID")
				.unique(false)
				.withColumns("INSTANCE_ID", "TGT_STEP_ID", "STAT", "SEQ", "ID");

		// add columns to Batch2WorkChunkEntity
		Builder.BuilderWithTableName batch2WorkChunkTable = version.onTable("BT2_WORK_CHUNK");

		batch2WorkChunkTable
				.addColumn("20240322.1", "NEXT_POLL_TIME")
				.nullable()
				.type(ColumnTypeEnum.DATE_TIMESTAMP);
		batch2WorkChunkTable.addColumn("20240322.2", "POLL_ATTEMPTS").nullable().type(ColumnTypeEnum.INT);
	}

	private void init680_Part2() {
@@ -4,6 +4,7 @@ import ca.uhn.fhir.batch2.api.JobOperationResultJson;
import ca.uhn.fhir.batch2.model.FetchJobInstancesRequest;
import ca.uhn.fhir.batch2.model.JobInstance;
import ca.uhn.fhir.batch2.model.StatusEnum;
import ca.uhn.fhir.batch2.model.WorkChunkStatusEnum;
import ca.uhn.fhir.jpa.dao.data.IBatch2JobInstanceRepository;
import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkRepository;
import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;

@@ -31,6 +32,7 @@ import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoInteractions;
import static org.mockito.Mockito.when;

@ExtendWith(MockitoExtension.class)
@@ -30,6 +30,8 @@ import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

import jakarta.annotation.Nonnull;

import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.temporal.ChronoUnit;

@@ -43,6 +45,7 @@ import java.util.stream.IntStream;
import static org.exparity.hamcrest.date.DateMatchers.within;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.ArgumentMatchers.eq;

@@ -97,7 +100,17 @@ public class BulkDataExportJobSchedulingHelperImplTest {
		verify(myJpaJobPersistence, never()).deleteInstanceAndChunks(anyString());

		final Date cutoffDate = myCutoffCaptor.getValue();
		Date expectedCutoff = computeDateFromConfig(expectedRetentionHours);
		verifyDatesWithinSeconds(expectedCutoff, cutoffDate, 2);
	}

	private void verifyDatesWithinSeconds(Date theExpected, Date theActual, int theSeconds) {
		Instant expectedInstant = theExpected.toInstant();
		Instant actualInstant = theActual.toInstant();

		String msg = String.format("Expected time not within %d s", theSeconds);
		assertTrue(expectedInstant.plus(theSeconds, ChronoUnit.SECONDS).isAfter(actualInstant), msg);
		assertTrue(expectedInstant.minus(theSeconds, ChronoUnit.SECONDS).isBefore(actualInstant), msg);
	}

	@Test
@@ -6,7 +6,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>

@@ -3,7 +3,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>

@@ -3,7 +3,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>

@@ -6,7 +6,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>

@@ -5,7 +5,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>
@@ -31,6 +31,7 @@ public abstract class BaseTag extends BasePartitionable implements Serializable

	private static final long serialVersionUID = 1L;

	// many baseTags -> one tag definition
	@ManyToOne(cascade = {})
	@JoinColumn(name = "TAG_ID", nullable = false)
	private TagDefinition myTag;

@@ -67,12 +67,14 @@ public class TagDefinition implements Serializable {
	@Column(name = "TAG_ID")
	private Long myId;

	// one tag definition -> many resource tags
	@OneToMany(
			cascade = {},
			fetch = FetchType.LAZY,
			mappedBy = "myTag")
	private Collection<ResourceTag> myResources;

	// one tag definition -> many history
	@OneToMany(
			cascade = {},
			fetch = FetchType.LAZY,
@@ -5,7 +5,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>

@@ -5,7 +5,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>
@@ -505,7 +505,7 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri

		subscriber.matchActiveSubscriptionsAndDeliver(message);

		verify(myCanonicalSubscription).getSendDeleteMessages();
	}

	@Test
@@ -6,7 +6,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>
@@ -2209,28 +2209,28 @@ public class FhirResourceDaoDstu2Test extends BaseJpaDstu2Test {
		p.addName().addFamily(methodName);
		IIdType id1 = myPatientDao.create(p, mySrd).getId().toUnqualifiedVersionless();

		sleepUntilTimeChange();

		p = new Patient();
		p.addIdentifier().setSystem("urn:system2").setValue(methodName);
		p.addName().addFamily(methodName);
		IIdType id2 = myPatientDao.create(p, mySrd).getId().toUnqualifiedVersionless();

		sleepUntilTimeChange();

		p = new Patient();
		p.addIdentifier().setSystem("urn:system3").setValue(methodName);
		p.addName().addFamily(methodName);
		IIdType id3 = myPatientDao.create(p, mySrd).getId().toUnqualifiedVersionless();

		sleepUntilTimeChange();

		p = new Patient();
		p.addIdentifier().setSystem("urn:system4").setValue(methodName);
		p.addName().addFamily(methodName);
		IIdType id4 = myPatientDao.create(p, mySrd).getId().toUnqualifiedVersionless();

		sleepUntilTimeChange();

		SearchParameterMap pm;
		List<IIdType> actual;
@@ -6,7 +6,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>

@@ -6,7 +6,7 @@
	<parent>
		<groupId>ca.uhn.hapi.fhir</groupId>
		<artifactId>hapi-deployable-pom</artifactId>
		<version>7.3.1-SNAPSHOT</version>
		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
	</parent>
@@ -10,6 +10,7 @@ import ca.uhn.fhir.batch2.api.IJobStepWorker;
import ca.uhn.fhir.batch2.api.ILastJobStepWorker;
import ca.uhn.fhir.batch2.api.IReductionStepWorker;
import ca.uhn.fhir.batch2.api.JobExecutionFailedException;
import ca.uhn.fhir.batch2.api.RetryChunkLaterException;
import ca.uhn.fhir.batch2.api.RunOutcome;
import ca.uhn.fhir.batch2.api.StepExecutionDetails;
import ca.uhn.fhir.batch2.api.VoidModel;

@@ -27,15 +28,20 @@ import ca.uhn.fhir.jpa.subscription.channel.api.IChannelFactory;
import ca.uhn.fhir.jpa.subscription.channel.impl.LinkedBlockingChannel;
import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
import ca.uhn.fhir.jpa.test.Batch2JobHelper;
import ca.uhn.fhir.jpa.test.config.Batch2FastSchedulerConfig;
import ca.uhn.fhir.jpa.test.config.TestR4Config;
import ca.uhn.fhir.model.api.IModelJson;
import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
import ca.uhn.fhir.test.utilities.UnregisterScheduledProcessor;
import ca.uhn.fhir.util.JsonUtil;
import ca.uhn.test.concurrency.PointcutLatch;
import ca.uhn.test.util.LogbackCaptureTestExtension;
import com.fasterxml.jackson.annotation.JsonProperty;
import jakarta.annotation.Nonnull;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import org.slf4j.Logger;

@@ -43,11 +49,21 @@ import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Sort;
import org.springframework.messaging.MessageHandler;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestPropertySource;
import org.testcontainers.shaded.org.awaitility.Awaitility;

import java.time.Duration;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

@@ -60,6 +76,13 @@ import static org.junit.jupiter.api.Assertions.assertSame;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.fail;

@ContextConfiguration(classes = {
	Batch2FastSchedulerConfig.class
})
@TestPropertySource(properties = {
	// These tests require scheduling to work
	UnregisterScheduledProcessor.SCHEDULING_DISABLED_EQUALS_FALSE
})
public class Batch2CoordinatorIT extends BaseJpaR4Test {
	private static final Logger ourLog = LoggerFactory.getLogger(Batch2CoordinatorIT.class);
@@ -81,6 +104,9 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
	@Autowired
	IJobPersistence myJobPersistence;

	@RegisterExtension
	LogbackCaptureTestExtension myLogbackCaptureTestExtension = new LogbackCaptureTestExtension();

	private final PointcutLatch myFirstStepLatch = new PointcutLatch("First Step");
	private final PointcutLatch myLastStepLatch = new PointcutLatch("Last Step");
	private IJobCompletionHandler<TestJobParameters> myCompletionHandler;

@@ -91,6 +117,10 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		return RunOutcome.SUCCESS;
	}

	static {
		TestR4Config.ourMaxThreads = 100;
	}

	@Override
	@BeforeEach
	public void before() throws Exception {
@@ -117,7 +147,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		// final step
		ILastJobStepWorker<TestJobParameters, FirstStepOutput> last = (step, sink) -> RunOutcome.SUCCESS;
		// job definition
		String jobId = getMethodNameForJobId();
		JobDefinition<? extends IModelJson> jd = JobDefinition.newBuilder()
			.setJobDefinitionId(jobId)
			.setJobDescription("test job")

@@ -183,7 +213,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> firstStep = (step, sink) -> callLatch(myFirstStepLatch, step);
		IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> lastStep = (step, sink) -> fail();

		String jobId = getMethodNameForJobId();
		JobDefinition<? extends IModelJson> definition = buildGatedJobDefinition(jobId, firstStep, lastStep);

		myJobDefinitionRegistry.addJobDefinition(definition);

@@ -192,6 +222,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {

		myFirstStepLatch.setExpectedCount(1);
		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
		myBatch2JobHelper.runMaintenancePass();
		myFirstStepLatch.awaitExpected();

		myBatch2JobHelper.awaitJobCompletion(startResponse.getInstanceId());

@@ -216,11 +247,10 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		myFirstStepLatch.setExpectedCount(1);
		myLastStepLatch.setExpectedCount(1);
		String batchJobId = myJobCoordinator.startInstance(new SystemRequestDetails(), request).getInstanceId();
		myBatch2JobHelper.runMaintenancePass();
		myFirstStepLatch.awaitExpected();

		myBatch2JobHelper.assertFastTracking(batchJobId);

		myBatch2JobHelper.awaitJobCompletion(batchJobId);
		myLastStepLatch.awaitExpected();
@@ -234,10 +264,92 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		assertEquals(1.0, jobInstance.getProgress());
	}

	/**
	 * This test verifies that if we have workchunks being processed by the queue
	 * while the maintenance job kicks in, the maintenance job won't necessarily advance the steps.
	 */
	@Test
	public void gatedJob_whenMaintenanceRunHappensDuringMsgProcessing_doesNotAdvance() throws InterruptedException {
		// setup
		// we disable the scheduler because multiple schedulers running simultaneously
		// might cause database collisions we do not expect (not what we're testing)
		myBatch2JobHelper.enableMaintenanceRunner(false);
		String jobId = getMethodNameForJobId();
		int chunksToMake = 5;
		AtomicInteger secondGateCounter = new AtomicInteger();
		AtomicBoolean reductionCheck = new AtomicBoolean(false);
		// we will listen in on the message queue so we can force actions on it
		MessageHandler handler = message -> {
			/*
			 * We will force a run of the maintenance job
			 * to simulate the situation in which a chunk is
			 * still being processed by the WorkChunkMessageHandler
			 * (and thus, not available yet).
			 */
			myBatch2JobHelper.forceRunMaintenancePass();
		};

		buildAndDefine3StepReductionJob(jobId, new IReductionStepHandler() {

			@Override
			public void firstStep(StepExecutionDetails<TestJobParameters, VoidModel> theStep, IJobDataSink<FirstStepOutput> theDataSink) {
				for (int i = 0; i < chunksToMake; i++) {
					theDataSink.accept(new FirstStepOutput());
				}
			}

			@Override
			public void secondStep(StepExecutionDetails<TestJobParameters, FirstStepOutput> theStep, IJobDataSink<SecondStepOutput> theDataSink) {
				// no new chunks
				SecondStepOutput output = new SecondStepOutput();
				theDataSink.accept(output);
			}

			@Override
			public void reductionStepConsume(ChunkExecutionDetails<TestJobParameters, SecondStepOutput> theChunkDetails, IJobDataSink<ReductionStepOutput> theDataSink) {
				// we expect to consume one chunk per second-step invocation here
				secondGateCounter.incrementAndGet();
			}

			@Override
			public void reductionStepRun(StepExecutionDetails<TestJobParameters, SecondStepOutput> theStepExecutionDetails, IJobDataSink<ReductionStepOutput> theDataSink) {
				reductionCheck.set(true);
				theDataSink.accept(new ReductionStepOutput(new ArrayList<>()));
			}
		});

		try {
			myWorkChannel.subscribe(handler);

			// test
			JobInstanceStartRequest request = buildRequest(jobId);
			myFirstStepLatch.setExpectedCount(1);
			Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);

			String instanceId = startResponse.getInstanceId();

			// wait
			myBatch2JobHelper.awaitJobCompletion(instanceId);

			// verify
			Optional<JobInstance> instanceOp = myJobPersistence.fetchInstance(instanceId);
			assertTrue(instanceOp.isPresent());
			JobInstance jobInstance = instanceOp.get();
			assertTrue(reductionCheck.get());
			assertEquals(chunksToMake, secondGateCounter.get());

			assertEquals(StatusEnum.COMPLETED, jobInstance.getStatus());
			assertEquals(1.0, jobInstance.getProgress());
		} finally {
			myWorkChannel.unsubscribe(handler);
			myBatch2JobHelper.enableMaintenanceRunner(true);
		}
	}

	@Test
	public void reductionStepFailing_willFailJob() {
		// setup
		String jobId = getMethodNameForJobId();
		int totalChunks = 3;
		AtomicInteger chunkCounter = new AtomicInteger();
		String error = "this is an error";
@@ -292,22 +404,17 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
	@Test
	public void testJobWithReductionStepFiresCompletionHandler() throws InterruptedException {
		// setup
		String jobId = getMethodNameForJobId();
		String testInfo = "test";
		int totalCalls = 2;
		AtomicInteger secondStepInt = new AtomicInteger();
		AtomicBoolean completionBool = new AtomicBoolean();

		myCompletionHandler = (params) -> {
			// ensure our completion handler gets the right status
			assertEquals(StatusEnum.COMPLETED, params.getInstance().getStatus());
			completionBool.getAndSet(true);
		};

		buildAndDefine3StepReductionJob(jobId, new IReductionStepHandler() {
@@ -351,10 +458,11 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);

		String instanceId = startResponse.getInstanceId();
		myBatch2JobHelper.runMaintenancePass();
		myFirstStepLatch.awaitExpected();
		assertNotNull(instanceId);

		myBatch2JobHelper.awaitGatedStepId(SECOND_STEP_ID, instanceId);

		// wait for last step to finish
		ourLog.info("Setting last step latch");

@@ -362,17 +470,16 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {

		// waiting
		myBatch2JobHelper.awaitJobCompletion(instanceId);
		ourLog.info("awaited the last step");
		myLastStepLatch.awaitExpected();

		// verify
		Optional<JobInstance> instanceOp = myJobPersistence.fetchInstance(instanceId);
		assertTrue(instanceOp.isPresent());
		JobInstance jobInstance = instanceOp.get();

		// ensure our completion handler fired
		assertTrue(completionBool.get());

		assertEquals(StatusEnum.COMPLETED, jobInstance.getStatus());
		assertEquals(1.0, jobInstance.getProgress());
@@ -382,7 +489,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
	@ValueSource(booleans = {true, false})
	public void testJobDefinitionWithReductionStepIT(boolean theDelayReductionStepBool) throws InterruptedException {
		// setup
		String jobId = getMethodNameForJobId() + "_" + theDelayReductionStepBool;
		String testInfo = "test";
		AtomicInteger secondStepInt = new AtomicInteger();
@@ -441,12 +548,12 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		JobInstanceStartRequest request = buildRequest(jobId);
		myFirstStepLatch.setExpectedCount(1);
		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);

		String instanceId = startResponse.getInstanceId();
		myBatch2JobHelper.runMaintenancePass();
		myFirstStepLatch.awaitExpected();
		assertNotNull(instanceId);

		myBatch2JobHelper.awaitGatedStepId(SECOND_STEP_ID, instanceId);

		// wait for last step to finish
		ourLog.info("Setting last step latch");
@@ -482,6 +589,95 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		assertEquals(1.0, jobInstance.getProgress());
	}

	@Test
	public void testJobWithLongPollingStep() throws InterruptedException {
		// create job definition
		int callsToMake = 3;
		int chunksToAwait = 2;
		String jobId = getMethodNameForJobId();

		ConcurrentHashMap<String, AtomicInteger> chunkToCounter = new ConcurrentHashMap<>();
		HashMap<String, Integer> chunkToCallsToMake = new HashMap<>();
		IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> first = (step, sink) -> {
			for (int i = 0; i < chunksToAwait; i++) {
				String cv = "chunk" + i;
				chunkToCallsToMake.put(cv, callsToMake);
				sink.accept(new FirstStepOutput().setValue(cv));
			}
			return RunOutcome.SUCCESS;
		};

		// step 2
		IJobStepWorker<TestJobParameters, FirstStepOutput, SecondStepOutput> second = (step, sink) -> {
			// simulate a call
			Awaitility.await().atMost(100, TimeUnit.MICROSECONDS);

			// we use Batch2FastSchedulerConfig, so we have a fast scheduler
			// that should catch and call repeatedly pretty quickly
			String chunkValue = step.getData().myTestValue;
			AtomicInteger pollCounter = chunkToCounter.computeIfAbsent(chunkValue, (key) -> new AtomicInteger());
			int count = pollCounter.getAndIncrement();

			if (chunkToCallsToMake.get(chunkValue) <= count) {
				sink.accept(new SecondStepOutput());
				return RunOutcome.SUCCESS;
			}
			throw new RetryChunkLaterException(Duration.of(200, ChronoUnit.MILLIS));
		};

		// step 3
		ILastJobStepWorker<TestJobParameters, SecondStepOutput> last = (step, sink) -> {
			myLastStepLatch.call(1);
			return RunOutcome.SUCCESS;
		};

		JobDefinition<? extends IModelJson> jd = JobDefinition.newBuilder()
			.setJobDefinitionId(jobId)
			.setJobDescription("test job")
			.setJobDefinitionVersion(TEST_JOB_VERSION)
			.setParametersType(TestJobParameters.class)
			.gatedExecution()
			.addFirstStep(
				FIRST_STEP_ID,
				"First step",
				FirstStepOutput.class,
				first
			)
			.addIntermediateStep(SECOND_STEP_ID,
				"Second step",
				SecondStepOutput.class,
				second)
			.addLastStep(
				LAST_STEP_ID,
				"Final step",
				last
			)
			.completionHandler(myCompletionHandler)
			.build();
		myJobDefinitionRegistry.addJobDefinition(jd);

		// test
		JobInstanceStartRequest request = buildRequest(jobId);
		myLastStepLatch.setExpectedCount(chunksToAwait);
		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
		String instanceId = startResponse.getInstanceId();

		// waiting for the job
		myBatch2JobHelper.awaitJobCompletion(startResponse);
		// ensure the final step fired
		myLastStepLatch.awaitExpected();

		// verify
		assertEquals(chunksToAwait, chunkToCounter.size());
		for (Map.Entry<String, AtomicInteger> set : chunkToCounter.entrySet()) {
			// each chunk fails callsToMake times (poll counts 0 .. callsToMake - 1)
			// and then succeeds once, so it is invoked callsToMake + 1 times in total
			assertEquals(callsToMake + 1, set.getValue().get());
		}
	}
	@Test
	public void testFirstStepToSecondStep_doubleChunk_doesNotFastTrack() throws InterruptedException {
		IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> firstStep = (step, sink) -> {

@@ -491,7 +687,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
		};
		IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> lastStep = (step, sink) -> callLatch(myLastStepLatch, step);

		String jobDefId = getMethodNameForJobId();
		JobDefinition<? extends IModelJson> definition = buildGatedJobDefinition(jobDefId, firstStep, lastStep);

		myJobDefinitionRegistry.addJobDefinition(definition);
@ -501,6 +697,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
|
||||||
myFirstStepLatch.setExpectedCount(1);
|
myFirstStepLatch.setExpectedCount(1);
|
||||||
Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
|
Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
|
||||||
String instanceId = startResponse.getInstanceId();
|
String instanceId = startResponse.getInstanceId();
|
||||||
|
myBatch2JobHelper.runMaintenancePass();
|
||||||
myFirstStepLatch.awaitExpected();
|
myFirstStepLatch.awaitExpected();
|
||||||
|
|
||||||
myLastStepLatch.setExpectedCount(2);
|
myLastStepLatch.setExpectedCount(2);
|
||||||

@@ -513,14 +710,14 @@

     @Test
-    public void JobExecutionFailedException_CausesInstanceFailure() {
+    public void jobExecutionFailedException_CausesInstanceFailure() {
         // setup
         IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> firstStep = (step, sink) -> {
             throw new JobExecutionFailedException("Expected Test Exception");
         };
         IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> lastStep = (step, sink) -> fail();

-        String jobDefId = new Exception().getStackTrace()[0].getMethodName();
+        String jobDefId = getMethodNameForJobId();
         JobDefinition<? extends IModelJson> definition = buildGatedJobDefinition(jobDefId, firstStep, lastStep);

         myJobDefinitionRegistry.addJobDefinition(definition);

@@ -538,36 +735,47 @@
     @Test
     public void testUnknownException_KeepsInProgress_CanCancelManually() throws InterruptedException {
         // setup
-        IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> firstStep = (step, sink) -> {
-            callLatch(myFirstStepLatch, step);
-            throw new RuntimeException("Expected Test Exception");
-        };
-        IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> lastStep = (step, sink) -> fail();
-
-        String jobDefId = new Exception().getStackTrace()[0].getMethodName();
-        JobDefinition<? extends IModelJson> definition = buildGatedJobDefinition(jobDefId, firstStep, lastStep);
-
-        myJobDefinitionRegistry.addJobDefinition(definition);
-
-        JobInstanceStartRequest request = buildRequest(jobDefId);
-
-        // execute
-        ourLog.info("Starting job");
-        myFirstStepLatch.setExpectedCount(1);
-        Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
-        String instanceId = startResponse.getInstanceId();
-        myFirstStepLatch.awaitExpected();
-
-        // validate
-        myBatch2JobHelper.awaitJobInProgress(instanceId);
-
-        // execute
-        ourLog.info("Cancel job {}", instanceId);
-        myJobCoordinator.cancelInstance(instanceId);
-        ourLog.info("Cancel job {} done", instanceId);
-
-        // validate
-        myBatch2JobHelper.awaitJobCancelled(instanceId);
+        // we want to control the maintenance runner ourselves in this case
+        // to prevent intermittent test failures
+        myJobMaintenanceService.enableMaintenancePass(false);
+
+        try {
+            IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> firstStep = (step, sink) -> {
+                callLatch(myFirstStepLatch, step);
+                throw new RuntimeException("Expected Test Exception");
+            };
+            IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> lastStep = (step, sink) -> fail();
+
+            String jobDefId = getMethodNameForJobId();
+            JobDefinition<? extends IModelJson> definition = buildGatedJobDefinition(jobDefId, firstStep, lastStep);
+
+            myJobDefinitionRegistry.addJobDefinition(definition);
+
+            JobInstanceStartRequest request = buildRequest(jobDefId);
+
+            // execute
+            ourLog.info("Starting job");
+            myFirstStepLatch.setExpectedCount(1);
+            Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
+            String instanceId = startResponse.getInstanceId();
+            myBatch2JobHelper.forceRunMaintenancePass();
+            myFirstStepLatch.awaitExpected();
+
+            // validate
+            myBatch2JobHelper.awaitJobHasStatusWithForcedMaintenanceRuns(instanceId, StatusEnum.IN_PROGRESS);
+
+            // execute
+            ourLog.info("Cancel job {}", instanceId);
+            myJobCoordinator.cancelInstance(instanceId);
+            ourLog.info("Cancel job {} done", instanceId);
+
+            // validate
+            myBatch2JobHelper.awaitJobHasStatusWithForcedMaintenanceRuns(instanceId,
+                StatusEnum.CANCELLED);
+        } finally {
+            myJobMaintenanceService.enableMaintenancePass(true);
+        }
     }
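The restructuring above hinges on one idea: a test that asserts on intermediate job states cannot have the scheduled maintenance pass racing it, so the runner is switched off up front and restored in a finally block. A minimal standalone sketch of that guard (the helper class is illustrative, not part of the PR; enableMaintenancePass is the same toggle the diff uses):

import ca.uhn.fhir.batch2.api.IJobMaintenanceService;

final class MaintenancePassGuard {
    // Disable the scheduled batch2 maintenance pass for the duration of a
    // test body, and always restore it, even when the body throws —
    // otherwise later tests would inherit a disabled scheduler.
    static void runWithoutScheduledMaintenance(IJobMaintenanceService theMaintenanceService, Runnable theTestBody) {
        theMaintenanceService.enableMaintenancePass(false);
        try {
            theTestBody.run();
        } finally {
            theMaintenanceService.enableMaintenancePass(true);
        }
    }
}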

     @Test

@@ -586,7 +794,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
             return RunOutcome.SUCCESS;
         };
         // job definition
-        String jobDefId = new Exception().getStackTrace()[0].getMethodName();
+        String jobDefId = getMethodNameForJobId();
         JobDefinition<? extends IModelJson> jd = JobDefinition.newBuilder()
             .setJobDefinitionId(jobDefId)
             .setJobDescription("test job")

@@ -629,6 +837,15 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
         return request;
     }

+    /**
+     * Returns the method name of the calling method, for a unique job id.
+     * It is best if this is called from the test method directly itself, and
+     * never delegated to a separate child method.
+     */
+    private String getMethodNameForJobId() {
+        return new Exception().getStackTrace()[1].getMethodName();
+    }
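The new getMethodNameForJobId() helper works because frame [0] of a freshly created exception's stack trace is the helper itself, and frame [1] is its direct caller — which is exactly why the javadoc warns against calling it through an intermediate method. A standalone sketch (hypothetical demo class, not PR code) showing both the correct and the broken call path:

public class StackFrameDemo {
    static String methodNameForJobId() {
        // frame [0] is methodNameForJobId itself; frame [1] is the caller
        return new Exception().getStackTrace()[1].getMethodName();
    }

    static String viaChildMethod() {
        // BAD: frame [1] is now viaChildMethod, not the original test method
        return methodNameForJobId();
    }

    public static void main(String[] args) {
        System.out.println(methodNameForJobId()); // prints "main"
        System.out.println(viaChildMethod());     // prints "viaChildMethod"
    }
}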

     @Nonnull
     private JobDefinition<? extends IModelJson> buildGatedJobDefinition(String theJobId, IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> theFirstStep, IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> theLastStep) {
         return JobDefinition.newBuilder()

@@ -723,6 +940,7 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
             )
             .completionHandler(myCompletionHandler)
             .build();
+        myJobDefinitionRegistry.removeJobDefinition(theJobId, 1);
         myJobDefinitionRegistry.addJobDefinition(jd);
     }

@@ -732,8 +950,16 @@ public class Batch2CoordinatorIT extends BaseJpaR4Test {
     }

     static class FirstStepOutput implements IModelJson {
+        @JsonProperty("test")
+        private String myTestValue;
+
         FirstStepOutput() {
         }

+        public FirstStepOutput setValue(String theV) {
+            myTestValue = theV;
+            return this;
+        }
     }

     static class SecondStepOutput implements IModelJson {

@@ -1,12 +1,10 @@
 package ca.uhn.fhir.jpa.batch2;

 import ca.uhn.fhir.batch2.model.StatusEnum;
-import ca.uhn.fhir.jpa.dao.data.IBatch2JobInstanceRepository;
 import ca.uhn.fhir.jpa.entity.Batch2JobInstanceEntity;
 import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
 import org.junit.jupiter.params.ParameterizedTest;
 import org.junit.jupiter.params.provider.CsvSource;
-import org.springframework.beans.factory.annotation.Autowired;

 import java.util.Arrays;
 import java.util.Date;

@@ -18,9 +16,6 @@ import static org.junit.jupiter.api.Assertions.assertEquals;

 public class Batch2JobInstanceRepositoryTest extends BaseJpaR4Test {

-    @Autowired
-    IBatch2JobInstanceRepository myBatch2JobInstanceRepository;
-
     @ParameterizedTest
     @CsvSource({
         "QUEUED, FAILED, QUEUED, true, normal transition",

@@ -38,16 +33,16 @@ public class Batch2JobInstanceRepositoryTest extends BaseJpaR4Test {
         entity.setStatus(theCurrentState);
         entity.setCreateTime(new Date());
         entity.setDefinitionId("definition_id");
-        myBatch2JobInstanceRepository.save(entity);
+        myJobInstanceRepository.save(entity);

         // when
         int changeCount =
             runInTransaction(()->
-                myBatch2JobInstanceRepository.updateInstanceStatusIfIn(jobId, theTargetState, theAllowedPriorStates));
+                myJobInstanceRepository.updateInstanceStatusIfIn(jobId, theTargetState, theAllowedPriorStates));

         // then
         Batch2JobInstanceEntity readBack = runInTransaction(() ->
-            myBatch2JobInstanceRepository.findById(jobId).orElseThrow());
+            myJobInstanceRepository.findById(jobId).orElseThrow());
         if (theExpectedSuccessFlag) {
             assertEquals(1, changeCount, "The change happened");
             assertEquals(theTargetState, readBack.getStatus());
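The test above drives updateInstanceStatusIfIn, which performs a guarded transition: the UPDATE only applies when the row's current status is in the allowed prior set, and the returned row count tells the caller whether the transition actually happened. A sketch of how such a method could be declared with Spring Data JPA — the entity field names (myId, myStatus) are assumptions, and this is not the actual HAPI repository source:

import ca.uhn.fhir.batch2.model.StatusEnum;
import ca.uhn.fhir.jpa.entity.Batch2JobInstanceEntity;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import java.util.Set;

public interface JobInstanceRepositorySketch extends JpaRepository<Batch2JobInstanceEntity, String> {

    // Returns the number of rows changed: 1 when the guarded transition
    // fired, 0 when the instance was not in an allowed prior state.
    @Modifying
    @Query("UPDATE Batch2JobInstanceEntity e SET e.myStatus = :newStatus "
        + "WHERE e.myId = :id AND e.myStatus IN (:allowedStatuses)")
    int updateInstanceStatusIfIn(
        @Param("id") String theId,
        @Param("newStatus") StatusEnum theNewStatus,
        @Param("allowedStatuses") Set<StatusEnum> theAllowedStatuses);
}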

@@ -27,6 +27,7 @@ import ca.uhn.fhir.model.api.IModelJson;
 import ca.uhn.fhir.util.JsonUtil;
 import ca.uhn.test.concurrency.IPointcutLatch;
 import ca.uhn.test.concurrency.PointcutLatch;
+import jakarta.annotation.Nonnull;
 import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Disabled;

@@ -39,7 +40,6 @@ import org.springframework.messaging.MessageChannel;
 import org.springframework.messaging.support.ChannelInterceptor;
 import org.springframework.transaction.support.TransactionTemplate;

-import jakarta.annotation.Nonnull;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;

@@ -358,10 +358,10 @@ public class Batch2JobMaintenanceDatabaseIT extends BaseJpaR4Test {

         WorkChunkExpectation expectation = new WorkChunkExpectation(
             """
             chunk1, FIRST, COMPLETED
             chunk2, SECOND, QUEUED
             chunk3, LAST, QUEUED
             """,
             ""
         );

@@ -11,17 +11,26 @@ import ca.uhn.fhir.batch2.api.VoidModel;
 import ca.uhn.fhir.batch2.coordinator.JobDefinitionRegistry;
 import ca.uhn.fhir.batch2.maintenance.JobMaintenanceServiceImpl;
 import ca.uhn.fhir.batch2.model.JobDefinition;
+import ca.uhn.fhir.batch2.model.JobInstance;
 import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
 import ca.uhn.fhir.batch2.model.JobWorkNotificationJsonMessage;
+import ca.uhn.fhir.batch2.model.StatusEnum;
 import ca.uhn.fhir.jpa.subscription.channel.api.ChannelConsumerSettings;
 import ca.uhn.fhir.jpa.subscription.channel.api.IChannelFactory;
 import ca.uhn.fhir.jpa.subscription.channel.impl.LinkedBlockingChannel;
 import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
 import ca.uhn.fhir.jpa.test.Batch2JobHelper;
+import ca.uhn.fhir.jpa.test.config.Batch2FastSchedulerConfig;
 import ca.uhn.fhir.model.api.IModelJson;
+import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
 import ca.uhn.fhir.test.utilities.UnregisterScheduledProcessor;
+import ca.uhn.fhir.testjob.TestJobDefinitionUtils;
+import ca.uhn.fhir.testjob.models.FirstStepOutput;
+import ca.uhn.fhir.testjob.models.ReductionStepOutput;
+import ca.uhn.fhir.testjob.models.TestJobParameters;
 import ca.uhn.test.concurrency.PointcutLatch;
 import com.fasterxml.jackson.annotation.JsonProperty;
+import jakarta.annotation.Nonnull;
 import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;

@@ -31,8 +40,6 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.test.context.ContextConfiguration;
 import org.springframework.test.context.TestPropertySource;

-import jakarta.annotation.Nonnull;
-import jakarta.annotation.PostConstruct;
 import java.util.ArrayList;
 import java.util.List;

@@ -41,9 +48,9 @@ import static org.junit.jupiter.api.Assertions.assertTrue;

 /**
  * The on-enter actions are defined in
- * {@link ca.uhn.fhir.batch2.progress.JobInstanceStatusUpdater#handleStatusChange}
+ * {@link ca.uhn.fhir.batch2.progress.JobInstanceStatusUpdater#handleStatusChange(JobInstance)}
  * {@link ca.uhn.fhir.batch2.progress.InstanceProgress#updateStatus(JobInstance)}
- * {@link JobInstanceProcessor#cleanupInstance()}
+ * {@link ca.uhn.fhir.batch2.maintenance.JobInstanceProcessor#cleanupInstance()}
 *
 * For chunks:
 * {@link ca.uhn.fhir.jpa.batch2.JpaJobPersistenceImpl#onWorkChunkCreate}

@@ -53,13 +60,10 @@ import static org.junit.jupiter.api.Assertions.assertTrue;
 @TestPropertySource(properties = {
     UnregisterScheduledProcessor.SCHEDULING_DISABLED_EQUALS_FALSE
 })
-@ContextConfiguration(classes = {Batch2JobMaintenanceIT.SpringConfig.class})
+@ContextConfiguration(classes = {Batch2FastSchedulerConfig.class})
 public class Batch2JobMaintenanceIT extends BaseJpaR4Test {
     private static final Logger ourLog = LoggerFactory.getLogger(Batch2JobMaintenanceIT.class);

-    public static final int TEST_JOB_VERSION = 1;
-    public static final String FIRST_STEP_ID = "first-step";
-    public static final String LAST_STEP_ID = "last-step";
     @Autowired
     JobDefinitionRegistry myJobDefinitionRegistry;
     @Autowired

@@ -87,6 +91,7 @@ public class Batch2JobMaintenanceIT extends BaseJpaR4Test {

     @BeforeEach
     public void before() {
+        myStorageSettings.setJobFastTrackingEnabled(true);
         myCompletionHandler = details -> {};
         myWorkChannel = (LinkedBlockingChannel) myChannelFactory.getOrCreateReceiver(CHANNEL_NAME, JobWorkNotificationJsonMessage.class, new ChannelConsumerSettings());
         JobMaintenanceServiceImpl jobMaintenanceService = (JobMaintenanceServiceImpl) myJobMaintenanceService;

@@ -99,7 +104,6 @@ public class Batch2JobMaintenanceIT extends BaseJpaR4Test {
     @AfterEach
     public void after() {
         myWorkChannel.clearInterceptorsForUnitTest();
-        myStorageSettings.setJobFastTrackingEnabled(true);
         JobMaintenanceServiceImpl jobMaintenanceService = (JobMaintenanceServiceImpl) myJobMaintenanceService;
         jobMaintenanceService.setMaintenanceJobStartedCallback(() -> {});
     }

@@ -122,7 +126,8 @@ public class Batch2JobMaintenanceIT extends BaseJpaR4Test {

         myFirstStepLatch.setExpectedCount(1);
         myLastStepLatch.setExpectedCount(1);
-        String batchJobId = myJobCoordinator.startInstance(request).getInstanceId();
+        String batchJobId = myJobCoordinator.startInstance(new SystemRequestDetails(), request).getInstanceId();

         myFirstStepLatch.awaitExpected();

         myBatch2JobHelper.assertFastTracking(batchJobId);

@@ -156,12 +161,12 @@ public class Batch2JobMaintenanceIT extends BaseJpaR4Test {
     public void testFirstStepToSecondStepFasttrackingDisabled_singleChunkDoesNotFasttrack() throws InterruptedException {
         myStorageSettings.setJobFastTrackingEnabled(false);

-        IJobStepWorker<Batch2JobMaintenanceIT.TestJobParameters, VoidModel, Batch2JobMaintenanceIT.FirstStepOutput> firstStep = (step, sink) -> {
-            sink.accept(new Batch2JobMaintenanceIT.FirstStepOutput());
+        IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> firstStep = (step, sink) -> {
+            sink.accept(new FirstStepOutput());
             callLatch(myFirstStepLatch, step);
             return RunOutcome.SUCCESS;
         };
-        IJobStepWorker<Batch2JobMaintenanceIT.TestJobParameters, Batch2JobMaintenanceIT.FirstStepOutput, VoidModel> lastStep = (step, sink) -> callLatch(myLastStepLatch, step);
+        IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> lastStep = (step, sink) -> callLatch(myLastStepLatch, step);

         String jobDefId = new Exception().getStackTrace()[0].getMethodName();

@@ -173,7 +178,7 @@ public class Batch2JobMaintenanceIT extends BaseJpaR4Test {

         myFirstStepLatch.setExpectedCount(1);
         myLastStepLatch.setExpectedCount(1);
-        String batchJobId = myJobCoordinator.startInstance(request).getInstanceId();
+        String batchJobId = myJobCoordinator.startInstance(new SystemRequestDetails(), request).getInstanceId();
         myFirstStepLatch.awaitExpected();

         myBatch2JobHelper.assertFastTracking(batchJobId);

@@ -200,65 +205,20 @@ public class Batch2JobMaintenanceIT extends BaseJpaR4Test {

     @Nonnull
     private JobDefinition<? extends IModelJson> buildGatedJobDefinition(String theJobId, IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> theFirstStep, IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> theLastStep) {
-        return JobDefinition.newBuilder()
-            .setJobDefinitionId(theJobId)
-            .setJobDescription("test job")
-            .setJobDefinitionVersion(TEST_JOB_VERSION)
-            .setParametersType(TestJobParameters.class)
-            .gatedExecution()
-            .addFirstStep(
-                FIRST_STEP_ID,
-                "Test first step",
-                FirstStepOutput.class,
-                theFirstStep
-            )
-            .addLastStep(
-                LAST_STEP_ID,
-                "Test last step",
-                theLastStep
-            )
-            .completionHandler(myCompletionHandler)
-            .build();
+        return TestJobDefinitionUtils.buildGatedJobDefinition(
+            theJobId,
+            theFirstStep,
+            theLastStep,
+            myCompletionHandler
+        );
     }

-    static class TestJobParameters implements IModelJson {
-        TestJobParameters() {
-        }
-    }
-
-    static class FirstStepOutput implements IModelJson {
-        FirstStepOutput() {
-        }
-    }
-
-    static class SecondStepOutput implements IModelJson {
-        @JsonProperty("test")
-        private String myTestValue;
-
-        SecondStepOutput() {
-        }
-
-        public void setValue(String theV) {
-            myTestValue = theV;
-        }
-    }
-
-    static class ReductionStepOutput implements IModelJson {
+    static class OurReductionStepOutput extends ReductionStepOutput {
         @JsonProperty("result")
         private List<?> myResult;

-        ReductionStepOutput(List<?> theResult) {
+        OurReductionStepOutput(List<?> theResult) {
             myResult = theResult;
         }
     }

-    static class SpringConfig {
-        @Autowired
-        IJobMaintenanceService myJobMaintenanceService;
-
-        @PostConstruct
-        void fastScheduler() {
-            ((JobMaintenanceServiceImpl)myJobMaintenanceService).setScheduledJobFrequencyMillis(200);
-        }
-    }
 }
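The inner SpringConfig deleted above is replaced by the shared Batch2FastSchedulerConfig. A sketch of what such a config plausibly looks like, modeled directly on the deleted code — the class actually shipped in hapi-fhir may differ in details:

import ca.uhn.fhir.batch2.api.IJobMaintenanceService;
import ca.uhn.fhir.batch2.maintenance.JobMaintenanceServiceImpl;
import jakarta.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FastSchedulerConfigSketch {
    @Autowired
    IJobMaintenanceService myJobMaintenanceService;

    // Run the batch2 maintenance pass every 200ms so integration tests can
    // observe state transitions quickly instead of waiting on the
    // production schedule.
    @PostConstruct
    void fastScheduler() {
        ((JobMaintenanceServiceImpl) myJobMaintenanceService).setScheduledJobFrequencyMillis(200);
    }
}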
@@ -1,7 +1,6 @@
 package ca.uhn.fhir.jpa.batch2;

 import ca.uhn.fhir.batch2.api.IJobCoordinator;
-import ca.uhn.fhir.rest.api.server.bulk.BulkExportJobParameters;
 import ca.uhn.fhir.batch2.model.JobInstance;
 import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
 import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;

@@ -10,10 +9,13 @@ import ca.uhn.fhir.jpa.batch.models.Batch2JobStartResponse;
 import ca.uhn.fhir.jpa.provider.BaseResourceProviderR4Test;
 import ca.uhn.fhir.jpa.test.config.TestR4Config;
 import ca.uhn.fhir.rest.api.Constants;
+import ca.uhn.fhir.rest.api.server.bulk.BulkExportJobParameters;
 import ca.uhn.fhir.rest.server.exceptions.InternalErrorException;
 import ca.uhn.fhir.util.Batch2JobDefinitionConstants;
 import ca.uhn.fhir.util.JsonUtil;
 import com.google.common.collect.Sets;
+import org.hl7.fhir.instance.model.api.IBaseResource;
+import org.hl7.fhir.instance.model.api.IIdType;
 import org.hl7.fhir.r4.model.Binary;
 import org.hl7.fhir.r4.model.Enumerations;
 import org.hl7.fhir.r4.model.Group;

@@ -36,13 +38,16 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.CompletionService;
 import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorCompletionService;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Future;
 import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.ThreadPoolExecutor;
 import java.util.concurrent.TimeUnit;

+import static org.awaitility.Awaitility.await;
 import static org.hamcrest.MatcherAssert.assertThat;
 import static org.hamcrest.Matchers.emptyOrNullString;
 import static org.hamcrest.Matchers.equalTo;

@@ -64,6 +69,7 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {

     @BeforeEach
     void beforeEach() {
+        ourLog.info("BulkDataErrorAbuseTest.beforeEach");
         afterPurgeDatabase();
     }

@@ -93,7 +99,7 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {
         duAbuseTest(Integer.MAX_VALUE);
     }

-    private void duAbuseTest(int taskExecutions) throws InterruptedException, ExecutionException {
+    private void duAbuseTest(int taskExecutions) {
         // Create some resources
         Patient patient = new Patient();
         patient.setId("PING1");

@@ -133,18 +139,19 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {
         ExecutorService executorService = new ThreadPoolExecutor(workerCount, workerCount,
             0L, TimeUnit.MILLISECONDS,
             workQueue);
+        CompletionService<Boolean> completionService = new ExecutorCompletionService<>(executorService);

         ourLog.info("Starting task creation");

-        List<Future<Boolean>> futures = new ArrayList<>();
+        int maxFuturesToProcess = 500;
         for (int i = 0; i < taskExecutions; i++) {
-            futures.add(executorService.submit(() -> {
+            completionService.submit(() -> {
                 String instanceId = null;
                 try {
                     instanceId = startJob(options);

                     // Run a scheduled pass to build the export
-                    myBatch2JobHelper.awaitJobCompletion(instanceId, 60);
+                    myBatch2JobHelper.awaitJobCompletion(instanceId, 10);

                     verifyBulkExportResults(instanceId, List.of("Patient/PING1", "Patient/PING2"), Collections.singletonList("Patient/PNING3"));

@@ -153,14 +160,11 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {
                     ourLog.error("Caught an error during processing instance {}", instanceId, theError);
                     throw new InternalErrorException("Caught an error during processing instance " + instanceId, theError);
                 }
-            }));
+            });

             // Don't let the list of futures grow so big we run out of memory
-            if (futures.size() > 1000) {
-                while (futures.size() > 500) {
-                    // This should always return true, but it'll throw an exception if we failed
-                    assertTrue(futures.remove(0).get());
-                }
+            if (i != 0 && i % maxFuturesToProcess == 0) {
+                executeFutures(completionService, maxFuturesToProcess);
             }
         }

@@ -168,18 +172,53 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {

         // wait for completion to avoid stranding background tasks.
         executorService.shutdown();
-        assertTrue(executorService.awaitTermination(60, TimeUnit.SECONDS), "Finished before timeout");
+        await()
+            .atMost(60, TimeUnit.SECONDS)
+            .until(() -> {
+                return executorService.isTerminated() && executorService.isShutdown();
+            });

         // verify that all requests succeeded
         ourLog.info("All tasks complete. Verify results.");
-        for (var next : futures) {
-            // This should always return true, but it'll throw an exception if we failed
-            assertTrue(next.get());
-        }
+        executeFutures(completionService, taskExecutions % maxFuturesToProcess);
+
+        executorService.shutdown();
+        await()
+            .atMost(60, TimeUnit.SECONDS)
+            .until(() -> {
+                return executorService.isTerminated() && executorService.isShutdown();
+            });

         ourLog.info("Finished task execution");
     }

+    private void executeFutures(CompletionService<Boolean> theCompletionService, int theTotal) {
+        List<String> errors = new ArrayList<>();
+        int count = 0;
+
+        while (count + errors.size() < theTotal) {
+            try {
+                Future<Boolean> future = theCompletionService.take();
+                boolean r = future.get();
+                assertTrue(r);
+                count++;
+            } catch (Exception ex) {
+                // we will run all the threads to completion, even if we have errors;
+                // this is so we don't have background threads kicking around with
+                // partial changes.
+                // we either do this, or shutdown the completion service in an
+                // "inelegant" manner, dropping all threads (which we aren't doing)
+                ourLog.error("Failed after checking " + count + " futures");
+                errors.add(ex.getMessage());
+            }
+        }
+
+        if (!errors.isEmpty()) {
+            fail(String.format("Failed to execute futures. Found %d errors :\n", errors.size())
+                + String.join(", ", errors));
+        }
+    }
+

     private void verifyBulkExportResults(String theInstanceId, List<String> theContainedList, List<String> theExcludedList) {
         // Iterate over the files

@@ -196,7 +235,6 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {
             String resourceType = file.getKey();
             List<String> binaryIds = file.getValue();
             for (var nextBinaryId : binaryIds) {

                 Binary binary = myBinaryDao.read(new IdType(nextBinaryId), mySrd);
                 assertEquals(Constants.CT_FHIR_NDJSON, binary.getContentType());

@@ -207,18 +245,17 @@ public class BulkDataErrorAbuseTest extends BaseResourceProviderR4Test {
                 .lines().toList();
             ourLog.debug("Export job {} file {} line-count: {}", theInstanceId, nextBinaryId, lines.size());

-            lines.stream()
-                .map(line -> myFhirContext.newJsonParser().parseResource(line))
-                .map(r -> r.getIdElement().toUnqualifiedVersionless())
-                .forEach(nextId -> {
-                    if (!resourceType.equals(nextId.getResourceType())) {
-                        fail("Found resource of type " + nextId.getResourceType() + " in file for type " + resourceType);
-                    } else {
-                        if (!foundIds.add(nextId.getValue())) {
-                            fail("Found duplicate ID: " + nextId.getValue());
-                        }
-                    }
-                });
+            for (String line : lines) {
+                IBaseResource resource = myFhirContext.newJsonParser().parseResource(line);
+                IIdType nextId = resource.getIdElement().toUnqualifiedVersionless();
+                if (!resourceType.equals(nextId.getResourceType())) {
+                    fail("Found resource of type " + nextId.getResourceType() + " in file for type " + resourceType);
+                } else {
+                    if (!foundIds.add(nextId.getValue())) {
+                        fail("Found duplicate ID: " + nextId.getValue());
+                    }
+                }
+            }
         }
     }

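The rewrite above replaces an unbounded List<Future> with an ExecutorCompletionService that is drained every maxFuturesToProcess submissions, bounding memory while still failing fast on task errors. A standalone sketch of the pattern with demo values (this is not hapi-fhir code):

import java.util.concurrent.*;

public class CompletionServiceBatchingDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<Boolean> completionService = new ExecutorCompletionService<>(pool);

        int taskExecutions = 1800;
        int batchSize = 500;

        for (int i = 0; i < taskExecutions; i++) {
            completionService.submit(() -> Boolean.TRUE); // stand-in for one export run
            if (i != 0 && i % batchSize == 0) {
                drain(completionService, batchSize); // bound memory: reap a full batch
            }
        }
        // reap the tail that never filled a complete batch (1800 % 500 = 300)
        drain(completionService, taskExecutions % batchSize);

        pool.shutdown();
        pool.awaitTermination(60, TimeUnit.SECONDS);
    }

    static void drain(CompletionService<Boolean> cs, int n) throws Exception {
        for (int i = 0; i < n; i++) {
            // take() blocks until some submitted task finishes, in completion order
            if (!cs.take().get()) {
                throw new AssertionError("task failed");
            }
        }
    }
}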

@@ -4,7 +4,6 @@ import ca.uhn.fhir.batch2.api.IJobPersistence;
 import ca.uhn.fhir.batch2.model.FetchJobInstancesRequest;
 import ca.uhn.fhir.batch2.model.JobInstance;
 import ca.uhn.fhir.batch2.model.StatusEnum;
-import ca.uhn.fhir.jpa.dao.data.IBatch2JobInstanceRepository;
 import ca.uhn.fhir.jpa.entity.Batch2JobInstanceEntity;
 import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
 import org.junit.jupiter.api.AfterEach;

@@ -23,8 +22,6 @@ import static org.hamcrest.Matchers.hasSize;

 public class JobInstanceRepositoryTest extends BaseJpaR4Test {

-    @Autowired
-    private IBatch2JobInstanceRepository myJobInstanceRepository;
     @Autowired
     private IJobPersistence myJobPersistenceSvc;
     private static final String PARAMS = "{\"param1\":\"value1\"}";

@@ -1,9 +1,15 @@
 package ca.uhn.fhir.jpa.batch2;

+import ca.uhn.fhir.batch2.api.IJobMaintenanceService;
 import ca.uhn.fhir.batch2.api.IJobPersistence;
 import ca.uhn.fhir.batch2.api.JobOperationResultJson;
+import ca.uhn.fhir.batch2.api.RunOutcome;
+import ca.uhn.fhir.batch2.channel.BatchJobSender;
+import ca.uhn.fhir.batch2.coordinator.JobDefinitionRegistry;
 import ca.uhn.fhir.batch2.jobs.imprt.NdJsonFileJson;
+import ca.uhn.fhir.batch2.model.JobDefinition;
 import ca.uhn.fhir.batch2.model.JobInstance;
+import ca.uhn.fhir.batch2.model.JobWorkNotification;
 import ca.uhn.fhir.batch2.model.StatusEnum;
 import ca.uhn.fhir.batch2.model.WorkChunk;
 import ca.uhn.fhir.batch2.model.WorkChunkCompletionEvent;

@@ -18,26 +24,34 @@ import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkRepository;
 import ca.uhn.fhir.jpa.entity.Batch2JobInstanceEntity;
 import ca.uhn.fhir.jpa.entity.Batch2WorkChunkEntity;
 import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
+import ca.uhn.fhir.jpa.test.Batch2JobHelper;
+import ca.uhn.fhir.jpa.test.config.Batch2FastSchedulerConfig;
+import ca.uhn.fhir.testjob.TestJobDefinitionUtils;
+import ca.uhn.fhir.testjob.models.FirstStepOutput;
 import ca.uhn.fhir.util.JsonUtil;
 import ca.uhn.hapi.fhir.batch2.test.AbstractIJobPersistenceSpecificationTest;
 import ca.uhn.hapi.fhir.batch2.test.configs.SpyOverrideConfig;
+import ca.uhn.test.concurrency.PointcutLatch;
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.Iterators;
+import jakarta.annotation.Nonnull;
+import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.MethodOrderer;
 import org.junit.jupiter.api.Nested;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.TestMethodOrder;
 import org.junit.jupiter.params.ParameterizedTest;
 import org.junit.jupiter.params.provider.Arguments;
+import org.junit.jupiter.params.provider.CsvSource;
 import org.junit.jupiter.params.provider.MethodSource;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.context.annotation.Import;
 import org.springframework.data.domain.Page;
 import org.springframework.data.domain.PageRequest;
 import org.springframework.data.domain.Sort;
+import org.springframework.test.context.ContextConfiguration;
 import org.springframework.transaction.PlatformTransactionManager;

-import jakarta.annotation.Nonnull;
 import java.time.Instant;
 import java.time.LocalDateTime;
 import java.time.ZoneId;

@@ -60,15 +74,25 @@ import static org.junit.jupiter.api.Assertions.assertNotEquals;
 import static org.junit.jupiter.api.Assertions.assertNotNull;
 import static org.junit.jupiter.api.Assertions.assertNull;
 import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.clearInvocations;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;

 @TestMethodOrder(MethodOrderer.MethodName.class)
+@ContextConfiguration(classes = {
+    Batch2FastSchedulerConfig.class
+})
 @Import(SpyOverrideConfig.class)
 public class JpaJobPersistenceImplTest extends BaseJpaR4Test {

     public static final String JOB_DEFINITION_ID = "definition-id";
-    public static final String TARGET_STEP_ID = "step-id";
+    public static final String FIRST_STEP_ID = TestJobDefinitionUtils.FIRST_STEP_ID;
+    public static final String LAST_STEP_ID = TestJobDefinitionUtils.LAST_STEP_ID;
     public static final String DEF_CHUNK_ID = "definition-chunkId";
-    public static final String STEP_CHUNK_ID = "step-chunkId";
+    public static final String STEP_CHUNK_ID = TestJobDefinitionUtils.FIRST_STEP_ID;
     public static final int JOB_DEF_VER = 1;
     public static final int SEQUENCE_NUMBER = 1;
     public static final String CHUNK_DATA = "{\"key\":\"value\"}";

@@ -80,6 +104,25 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
     @Autowired
     private IBatch2JobInstanceRepository myJobInstanceRepository;

+    @Autowired
+    public Batch2JobHelper myBatch2JobHelper;
+
+    // this is our spy
+    @Autowired
+    private BatchJobSender myBatchSender;
+
+    @Autowired
+    private IJobMaintenanceService myMaintenanceService;
+
+    @Autowired
+    public JobDefinitionRegistry myJobDefinitionRegistry;
+
+    @AfterEach
+    public void after() {
+        myJobDefinitionRegistry.removeJobDefinition(JOB_DEFINITION_ID, JOB_DEF_VER);
+        myMaintenanceService.enableMaintenancePass(true);
+    }
+
     @Test
     public void testDeleteInstance() {
         // Setup

@@ -87,7 +130,7 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
         JobInstance instance = createInstance();
         String instanceId = mySvc.storeNewInstance(instance);
         for (int i = 0; i < 10; i++) {
-            storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, i, JsonUtil.serialize(new NdJsonFileJson().setNdJsonText("{}")));
+            storeWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, i, JsonUtil.serialize(new NdJsonFileJson().setNdJsonText("{}")), false);
         }

         // Execute

@@ -102,8 +145,13 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
         });
     }

-    private String storeWorkChunk(String theJobDefinitionId, String theTargetStepId, String theInstanceId, int theSequence, String theSerializedData) {
-        WorkChunkCreateEvent batchWorkChunk = new WorkChunkCreateEvent(theJobDefinitionId, JOB_DEF_VER, theTargetStepId, theInstanceId, theSequence, theSerializedData);
+    private String storeWorkChunk(String theJobDefinitionId, String theTargetStepId, String theInstanceId, int theSequence, String theSerializedData, boolean theGatedExecution) {
+        WorkChunkCreateEvent batchWorkChunk = new WorkChunkCreateEvent(theJobDefinitionId, TestJobDefinitionUtils.TEST_JOB_VERSION, theTargetStepId, theInstanceId, theSequence, theSerializedData, theGatedExecution);
+        return mySvc.onWorkChunkCreate(batchWorkChunk);
+    }
+
+    private String storeFirstWorkChunk(String theJobDefinitionId, String theTargetStepId, String theInstanceId, int theSequence, String theSerializedData) {
+        WorkChunkCreateEvent batchWorkChunk = new WorkChunkCreateEvent(theJobDefinitionId, TestJobDefinitionUtils.TEST_JOB_VERSION, theTargetStepId, theInstanceId, theSequence, theSerializedData, false);
         return mySvc.onWorkChunkCreate(batchWorkChunk);
     }

@@ -113,7 +161,7 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
         String instanceId = mySvc.storeNewInstance(instance);

         runInTransaction(() -> {
-            Batch2JobInstanceEntity instanceEntity = myJobInstanceRepository.findById(instanceId).orElseThrow(IllegalStateException::new);
+            Batch2JobInstanceEntity instanceEntity = findInstanceByIdOrThrow(instanceId);
             assertEquals(StatusEnum.QUEUED, instanceEntity.getStatus());
         });

@@ -126,7 +174,7 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
         assertEquals(instance.getReport(), foundInstance.getReport());

         runInTransaction(() -> {
-            Batch2JobInstanceEntity instanceEntity = myJobInstanceRepository.findById(instanceId).orElseThrow(IllegalStateException::new);
+            Batch2JobInstanceEntity instanceEntity = findInstanceByIdOrThrow(instanceId);
             assertEquals(StatusEnum.QUEUED, instanceEntity.getStatus());
         });
     }

@@ -213,12 +261,14 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {

     @ParameterizedTest
     @MethodSource("provideStatuses")
-    public void testStartChunkOnlyWorksOnValidChunks(WorkChunkStatusEnum theStatus, boolean theShouldBeStartedByConsumer) {
+    public void testStartChunkOnlyWorksOnValidChunks(WorkChunkStatusEnum theStatus, boolean theShouldBeStartedByConsumer) throws InterruptedException {
         // Setup
         JobInstance instance = createInstance();
+        myMaintenanceService.enableMaintenancePass(false);
         String instanceId = mySvc.storeNewInstance(instance);
-        storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, 0, CHUNK_DATA);
-        WorkChunkCreateEvent batchWorkChunk = new WorkChunkCreateEvent(JOB_DEFINITION_ID, JOB_DEF_VER, TARGET_STEP_ID, instanceId, 0, CHUNK_DATA);
+        storeWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 0, CHUNK_DATA, false);
+        WorkChunkCreateEvent batchWorkChunk = new WorkChunkCreateEvent(JOB_DEFINITION_ID, JOB_DEF_VER, FIRST_STEP_ID, instanceId, 0, CHUNK_DATA, false);
         String chunkId = mySvc.onWorkChunkCreate(batchWorkChunk);
         Optional<Batch2WorkChunkEntity> byId = myWorkChunkRepository.findById(chunkId);
         Batch2WorkChunkEntity entity = byId.get();

@@ -230,7 +280,9 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {

         // Verify
         boolean chunkStarted = workChunk.isPresent();
-        assertEquals(chunkStarted, theShouldBeStartedByConsumer);
+        assertEquals(theShouldBeStartedByConsumer, chunkStarted);
+        verify(myBatchSender, never())
+            .sendWorkChannelMessage(any());
     }

     @Test
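The tests in the following hunk stub the spied BatchJobSender with doAnswer so a latch fires on every outgoing work notification, letting the test block until the asynchronous send has actually happened instead of sleeping. A standalone sketch of the same spy-plus-latch idea using plain Mockito and a CountDownLatch (hypothetical Sender class; the PR's version uses HAPI's PointcutLatch and stubs the send out entirely):

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SpyLatchDemo {
    static class Sender {
        void send(String message) {
            System.out.println("sent: " + message);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        Sender sender = spy(new Sender());

        // Let the real send() run, but trip the latch so the test thread can
        // wait for the asynchronous call deterministically.
        doAnswer(invocation -> {
            invocation.callRealMethod();
            latch.countDown();
            return null;
        }).when(sender).send(any());

        new Thread(() -> sender.send("work-notification")).start();

        if (!latch.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("send was never called");
        }
        verify(sender).send(any());
    }
}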

@@ -344,46 +396,185 @@
     @Test
     public void testUpdateTime() {
         // Setup
-        JobInstance instance = createInstance();
+        boolean isGatedExecution = false;
+        JobInstance instance = createInstance(true, isGatedExecution);
         String instanceId = mySvc.storeNewInstance(instance);

-        Date updateTime = runInTransaction(() -> new Date(myJobInstanceRepository.findById(instanceId).orElseThrow().getUpdateTime().getTime()));
+        Date updateTime = runInTransaction(() -> new Date(findInstanceByIdOrThrow(instanceId).getUpdateTime().getTime()));

-        sleepUntilTimeChanges();
+        sleepUntilTimeChange();

         // Test
         runInTransaction(() -> mySvc.updateInstanceUpdateTime(instanceId));

         // Verify
-        Date updateTime2 = runInTransaction(() -> new Date(myJobInstanceRepository.findById(instanceId).orElseThrow().getUpdateTime().getTime()));
+        Date updateTime2 = runInTransaction(() -> new Date(findInstanceByIdOrThrow(instanceId).getUpdateTime().getTime()));
         assertNotEquals(updateTime, updateTime2);
     }

+    @Test
+    public void advanceJobStepAndUpdateChunkStatus_forGatedJobWithoutReduction_updatesCurrentStepAndChunkStatus() {
+        // setup
+        boolean isGatedExecution = true;
+        JobInstance instance = createInstance(true, isGatedExecution);
+        String instanceId = mySvc.storeNewInstance(instance);
+        String chunkIdSecondStep1 = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, null, isGatedExecution);
+        String chunkIdSecondStep2 = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, null, isGatedExecution);
+
+        runInTransaction(() -> assertEquals(FIRST_STEP_ID, findInstanceByIdOrThrow(instanceId).getCurrentGatedStepId()));
+
+        // execute
+        runInTransaction(() -> {
+            boolean changed = mySvc.advanceJobStepAndUpdateChunkStatus(instanceId, LAST_STEP_ID, false);
+            assertTrue(changed);
+        });
+
+        // verify
+        runInTransaction(() -> {
+            assertEquals(WorkChunkStatusEnum.READY, findChunkByIdOrThrow(chunkIdSecondStep1).getStatus());
+            assertEquals(WorkChunkStatusEnum.READY, findChunkByIdOrThrow(chunkIdSecondStep2).getStatus());
+            assertEquals(LAST_STEP_ID, findInstanceByIdOrThrow(instanceId).getCurrentGatedStepId());
+        });
+    }
+
+    @Test
+    public void advanceJobStepAndUpdateChunkStatus_whenAlreadyInTargetStep_DoesNotUpdateStepOrChunks() {
+        // setup
+        boolean isGatedExecution = true;
+        JobInstance instance = createInstance(true, isGatedExecution);
+        String instanceId = mySvc.storeNewInstance(instance);
+        String chunkIdSecondStep1 = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, null, isGatedExecution);
+        String chunkIdSecondStep2 = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, null, isGatedExecution);
+
+        runInTransaction(() -> assertEquals(FIRST_STEP_ID, findInstanceByIdOrThrow(instanceId).getCurrentGatedStepId()));
+
+        // execute
+        runInTransaction(() -> {
+            boolean changed = mySvc.advanceJobStepAndUpdateChunkStatus(instanceId, FIRST_STEP_ID, false);
+            assertFalse(changed);
+        });
+
+        // verify
+        runInTransaction(() -> {
+            assertEquals(WorkChunkStatusEnum.GATE_WAITING, findChunkByIdOrThrow(chunkIdSecondStep1).getStatus());
+            assertEquals(WorkChunkStatusEnum.GATE_WAITING, findChunkByIdOrThrow(chunkIdSecondStep2).getStatus());
+            assertEquals(FIRST_STEP_ID, findInstanceByIdOrThrow(instanceId).getCurrentGatedStepId());
+        });
+    }
+
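The two tests above pin down the gated-execution semantics introduced by this PR: second-step chunks are created in GATE_WAITING, advancing the gate flips them to READY, and only the maintenance pass moves READY chunks to QUEUED. An illustrative sketch of that happy-path chunk lifecycle (the real enum is ca.uhn.fhir.batch2.model.WorkChunkStatusEnum; error and retry states are omitted here):

import java.util.EnumSet;
import java.util.Set;

enum ChunkState {
    GATE_WAITING, // initial state for chunks of a gated job's later steps
    READY,        // eligible for the maintenance pass to pick up
    QUEUED,       // notification sent to the work channel
    IN_PROGRESS,  // dequeued by a consumer
    COMPLETED;

    Set<ChunkState> next() {
        switch (this) {
            case GATE_WAITING: return EnumSet.of(READY);       // gate opens when the prior step completes
            case READY:        return EnumSet.of(QUEUED);      // maintenance pass enqueues it
            case QUEUED:       return EnumSet.of(IN_PROGRESS); // consumer dequeues it
            case IN_PROGRESS:  return EnumSet.of(COMPLETED);   // happy path only
            default:           return EnumSet.noneOf(ChunkState.class);
        }
    }
}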
 	@Test
 	public void testFetchUnknownWork() {
 		assertFalse(myWorkChunkRepository.findById("FOO").isPresent());
 	}
 
-	@Test
-	public void testStoreAndFetchWorkChunk_NoData() {
-		JobInstance instance = createInstance();
+	@ParameterizedTest
+	@CsvSource({
+		"false, READY, QUEUED",
+		"true, GATE_WAITING, QUEUED"
+	})
+	public void testStoreAndFetchWorkChunk_withOrWithoutGatedExecutionNoData_createdAndTransitionToExpectedStatus(boolean theGatedExecution, WorkChunkStatusEnum theExpectedStatusOnCreate, WorkChunkStatusEnum theExpectedStatusAfterTransition) throws InterruptedException {
+		// setup
+		JobInstance instance = createInstance(true, theGatedExecution);
+
+		// when
+		PointcutLatch latch = new PointcutLatch("senderlatch");
+		doAnswer(a -> {
+			latch.call(1);
+			return Void.class;
+		}).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));
+		latch.setExpectedCount(1);
+		myMaintenanceService.enableMaintenancePass(false);
 		String instanceId = mySvc.storeNewInstance(instance);
 
-		String id = storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, 0, null);
+		// execute & verify
+		String firstChunkId = storeFirstWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 0, null);
+		// mark the first chunk as COMPLETED to allow step advance
+		runInTransaction(() -> myWorkChunkRepository.updateChunkStatus(firstChunkId, WorkChunkStatusEnum.READY, WorkChunkStatusEnum.COMPLETED));
+
+		String id = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, null, theGatedExecution);
+		runInTransaction(() -> assertEquals(theExpectedStatusOnCreate, findChunkByIdOrThrow(id).getStatus()));
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> assertEquals(theExpectedStatusAfterTransition, findChunkByIdOrThrow(id).getStatus()));
+
 		WorkChunk chunk = mySvc.onWorkChunkDequeue(id).orElseThrow(IllegalArgumentException::new);
+		// assert null since we did not input any data when creating the chunks
 		assertNull(chunk.getData());
 
+		latch.awaitExpected();
+		verify(myBatchSender).sendWorkChannelMessage(any());
+		clearInvocations(myBatchSender);
+	}
 
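A recurring fixture in the reworked tests is worth a note: because enqueueing now happens asynchronously via the maintenance pass, the tests stub the channel sender with a PointcutLatch so they can block until the send has actually happened before verifying. Condensed below (not part of the diff; myBatchSender is assumed to be a Mockito mock or spy):

    // The latch/stub idiom used throughout these tests.
    PointcutLatch latch = new PointcutLatch("senderlatch");
    latch.setExpectedCount(1);
    doAnswer(invocation -> {
        latch.call(1); // tick the latch each time a notification goes out
        return Void.class;
    }).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));

    // ... drive the code under test, then:
    latch.awaitExpected();                               // block until the send happened
    verify(myBatchSender).sendWorkChannelMessage(any()); // now safe to verify
    clearInvocations(myBatchSender);                     // keep later tests isolated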
+	@Test
+	public void testStoreAndFetchWorkChunk_withGatedJobMultipleChunk_correctTransitions() throws InterruptedException {
+		// setup
+		boolean isGatedExecution = true;
+		String expectedFirstChunkData = "IAmChunk1";
+		String expectedSecondChunkData = "IAmChunk2";
+		JobInstance instance = createInstance(true, isGatedExecution);
+		myMaintenanceService.enableMaintenancePass(false);
+		String instanceId = mySvc.storeNewInstance(instance);
+		PointcutLatch latch = new PointcutLatch("senderlatch");
+		doAnswer(a -> {
+			latch.call(1);
+			return Void.class;
+		}).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));
+		latch.setExpectedCount(2);
+
+		// execute & verify
+		String firstChunkId = storeFirstWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 0, expectedFirstChunkData);
+		String secondChunkId = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, expectedSecondChunkData, isGatedExecution);
+
+		runInTransaction(() -> {
+			// check chunks created in expected states
+			assertEquals(WorkChunkStatusEnum.READY, findChunkByIdOrThrow(firstChunkId).getStatus());
+			assertEquals(WorkChunkStatusEnum.GATE_WAITING, findChunkByIdOrThrow(secondChunkId).getStatus());
+		});
+
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> {
+			assertEquals(WorkChunkStatusEnum.QUEUED, findChunkByIdOrThrow(firstChunkId).getStatus());
+			// maintenance should not affect chunks in step 2
+			assertEquals(WorkChunkStatusEnum.GATE_WAITING, findChunkByIdOrThrow(secondChunkId).getStatus());
+		});
+
+		WorkChunk actualFirstChunkData = mySvc.onWorkChunkDequeue(firstChunkId).orElseThrow(IllegalArgumentException::new);
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.IN_PROGRESS, findChunkByIdOrThrow(firstChunkId).getStatus()));
+		assertEquals(expectedFirstChunkData, actualFirstChunkData.getData());
+
+		mySvc.onWorkChunkCompletion(new WorkChunkCompletionEvent(firstChunkId, 50, 0));
+		runInTransaction(() -> {
+			assertEquals(WorkChunkStatusEnum.COMPLETED, findChunkByIdOrThrow(firstChunkId).getStatus());
+			assertEquals(WorkChunkStatusEnum.GATE_WAITING, findChunkByIdOrThrow(secondChunkId).getStatus());
+		});
+
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> {
+			assertEquals(WorkChunkStatusEnum.COMPLETED, findChunkByIdOrThrow(firstChunkId).getStatus());
+			// now that all chunks for step 1 are COMPLETED, the chunks in step 2 should be enqueued
+			assertEquals(WorkChunkStatusEnum.QUEUED, findChunkByIdOrThrow(secondChunkId).getStatus());
+		});
+
+		WorkChunk actualSecondChunkData = mySvc.onWorkChunkDequeue(secondChunkId).orElseThrow(IllegalArgumentException::new);
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.IN_PROGRESS, findChunkByIdOrThrow(secondChunkId).getStatus()));
+		assertEquals(expectedSecondChunkData, actualSecondChunkData.getData());
+
+		latch.awaitExpected();
+		verify(myBatchSender, times(2))
+			.sendWorkChannelMessage(any());
+		clearInvocations(myBatchSender);
 	}
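The two maintenance passes in the test above carry the gating logic. Roughly, the behaviour they exercise can be sketched as follows; this is a loose illustration of the contract the assertions pin down, not the actual maintenance-service code, and every method name below is hypothetical:

    // Loose sketch of the gated-advance decision a maintenance pass makes.
    void runMaintenancePassForGatedInstance(JobInstance instance) {
        String gatedStep = instance.getCurrentGatedStepId();

        // 1. READY chunks of the current step are sent to the work channel.
        enqueueReadyChunks(instance, gatedStep); // READY -> QUEUED + JobWorkNotification

        // 2. Only once every chunk of the current step is COMPLETED does the
        //    gate move, releasing the next step's GATE_WAITING chunks.
        if (allChunksCompleted(instance, gatedStep)) {
            String nextStep = nextStepOf(instance, gatedStep);
            instance.setCurrentGatedStepId(nextStep);
            releaseGateWaitingChunks(instance, nextStep); // GATE_WAITING -> QUEUED
        }
    }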
 
 	@Test
 	void testStoreAndFetchChunksForInstance_NoData() {
 		// given
+		boolean isGatedExecution = false;
 		JobInstance instance = createInstance();
 		String instanceId = mySvc.storeNewInstance(instance);
 
-		String queuedId = storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, 0, "some data");
-		String erroredId = storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, 1, "some more data");
-		String completedId = storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, 2, "some more data");
+		String queuedId = storeWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 0, "some data", isGatedExecution);
+		String erroredId = storeWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 1, "some more data", isGatedExecution);
+		String completedId = storeWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 2, "some more data", isGatedExecution);
 
 		mySvc.onWorkChunkDequeue(erroredId);
 		WorkChunkErrorEvent parameters = new WorkChunkErrorEvent(erroredId, "Our error message");
@@ -407,9 +598,9 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		assertEquals(JOB_DEFINITION_ID, workChunk.getJobDefinitionId());
 		assertEquals(JOB_DEF_VER, workChunk.getJobDefinitionVersion());
 		assertEquals(instanceId, workChunk.getInstanceId());
-		assertEquals(TARGET_STEP_ID, workChunk.getTargetStepId());
+		assertEquals(FIRST_STEP_ID, workChunk.getTargetStepId());
 		assertEquals(0, workChunk.getSequence());
-		assertEquals(WorkChunkStatusEnum.QUEUED, workChunk.getStatus());
+		assertEquals(WorkChunkStatusEnum.READY, workChunk.getStatus());
 
 		assertNotNull(workChunk.getCreateTime());
@@ -418,7 +609,7 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		assertNull(workChunk.getEndTime());
 		assertNull(workChunk.getErrorMessage());
 		assertEquals(0, workChunk.getErrorCount());
-		assertEquals(null, workChunk.getRecordsProcessed());
+		assertNull(workChunk.getRecordsProcessed());
 	}
 
 	{
@@ -426,7 +617,7 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		assertEquals(WorkChunkStatusEnum.ERRORED, workChunk1.getStatus());
 		assertEquals("Our error message", workChunk1.getErrorMessage());
 		assertEquals(1, workChunk1.getErrorCount());
-		assertEquals(null, workChunk1.getRecordsProcessed());
+		assertNull(workChunk1.getRecordsProcessed());
 		assertNotNull(workChunk1.getEndTime());
 	}
 
@@ -438,18 +629,35 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		assertNull(workChunk2.getErrorMessage());
 		assertEquals(0, workChunk2.getErrorCount());
 	}
 
 	}
 
-	@Test
-	public void testStoreAndFetchWorkChunk_WithData() {
-		JobInstance instance = createInstance();
+	@ParameterizedTest
+	@CsvSource({
+		"false, READY, QUEUED",
+		"true, GATE_WAITING, QUEUED"
+	})
+	public void testStoreAndFetchWorkChunk_withOrWithoutGatedExecutionwithData_createdAndTransitionToExpectedStatus(boolean theGatedExecution, WorkChunkStatusEnum theExpectedCreatedStatus, WorkChunkStatusEnum theExpectedTransitionStatus) throws InterruptedException {
+		// setup
+		JobInstance instance = createInstance(true, theGatedExecution);
+		myMaintenanceService.enableMaintenancePass(false);
 		String instanceId = mySvc.storeNewInstance(instance);
+		PointcutLatch latch = new PointcutLatch("senderlatch");
+		doAnswer(a -> {
+			latch.call(1);
+			return Void.class;
+		}).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));
+		latch.setExpectedCount(1);
 
-		String id = storeWorkChunk(JOB_DEFINITION_ID, TARGET_STEP_ID, instanceId, 0, CHUNK_DATA);
+		// execute & verify
+		String firstChunkId = storeFirstWorkChunk(JOB_DEFINITION_ID, FIRST_STEP_ID, instanceId, 0, null);
+		// mark the first chunk as COMPLETED to allow step advance
+		runInTransaction(() -> myWorkChunkRepository.updateChunkStatus(firstChunkId, WorkChunkStatusEnum.READY, WorkChunkStatusEnum.COMPLETED));
+
+		String id = storeWorkChunk(JOB_DEFINITION_ID, LAST_STEP_ID, instanceId, 0, CHUNK_DATA, theGatedExecution);
 		assertNotNull(id);
-		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, myWorkChunkRepository.findById(id).orElseThrow(IllegalArgumentException::new).getStatus()));
+		runInTransaction(() -> assertEquals(theExpectedCreatedStatus, findChunkByIdOrThrow(id).getStatus()));
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> assertEquals(theExpectedTransitionStatus, findChunkByIdOrThrow(id).getStatus()));
+
 		WorkChunk chunk = mySvc.onWorkChunkDequeue(id).orElseThrow(IllegalArgumentException::new);
 		assertEquals(36, chunk.getInstanceId().length());
@@ -458,19 +666,30 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		assertEquals(WorkChunkStatusEnum.IN_PROGRESS, chunk.getStatus());
 		assertEquals(CHUNK_DATA, chunk.getData());
 
-		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.IN_PROGRESS, myWorkChunkRepository.findById(id).orElseThrow(IllegalArgumentException::new).getStatus()));
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.IN_PROGRESS, findChunkByIdOrThrow(id).getStatus()));
+		latch.awaitExpected();
+		verify(myBatchSender).sendWorkChannelMessage(any());
+		clearInvocations(myBatchSender);
 	}
 
 	@Test
-	public void testMarkChunkAsCompleted_Success() {
-		JobInstance instance = createInstance();
+	public void testMarkChunkAsCompleted_Success() throws InterruptedException {
+		boolean isGatedExecution = false;
+		myMaintenanceService.enableMaintenancePass(false);
+		JobInstance instance = createInstance(true, isGatedExecution);
 		String instanceId = mySvc.storeNewInstance(instance);
-		String chunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, CHUNK_DATA);
+		String chunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, CHUNK_DATA, isGatedExecution);
 		assertNotNull(chunkId);
+		PointcutLatch latch = new PointcutLatch("senderlatch");
+		doAnswer(a -> {
+			latch.call(1);
+			return Void.class;
+		}).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));
+		latch.setExpectedCount(1);
 
-		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new).getStatus()));
-		sleepUntilTimeChanges();
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.READY, findChunkByIdOrThrow(chunkId).getStatus()));
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, findChunkByIdOrThrow(chunkId).getStatus()));
 
 		WorkChunk chunk = mySvc.onWorkChunkDequeue(chunkId).orElseThrow(IllegalArgumentException::new);
 		assertEquals(SEQUENCE_NUMBER, chunk.getSequence());
@@ -480,13 +699,13 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		assertNull(chunk.getEndTime());
 		assertNull(chunk.getRecordsProcessed());
 		assertNotNull(chunk.getData());
-		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.IN_PROGRESS, myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new).getStatus()));
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.IN_PROGRESS, findChunkByIdOrThrow(chunkId).getStatus()));
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		mySvc.onWorkChunkCompletion(new WorkChunkCompletionEvent(chunkId, 50, 0));
 		runInTransaction(() -> {
-			Batch2WorkChunkEntity entity = myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new);
+			Batch2WorkChunkEntity entity = findChunkByIdOrThrow(chunkId);
 			assertEquals(WorkChunkStatusEnum.COMPLETED, entity.getStatus());
 			assertEquals(50, entity.getRecordsProcessed());
 			assertNotNull(entity.getCreateTime());
@@ -496,63 +715,41 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 			assertTrue(entity.getCreateTime().getTime() < entity.getStartTime().getTime());
 			assertTrue(entity.getStartTime().getTime() < entity.getEndTime().getTime());
 		});
-	}
-
-	@Test
-	public void testGatedAdvancementByStatus() {
-		// Setup
-		JobInstance instance = createInstance();
-		String instanceId = mySvc.storeNewInstance(instance);
-		String chunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, null);
-		mySvc.onWorkChunkCompletion(new WorkChunkCompletionEvent(chunkId, 0, 0));
-
-		boolean canAdvance = mySvc.canAdvanceInstanceToNextStep(instanceId, STEP_CHUNK_ID);
-		assertTrue(canAdvance);
-
-		//Storing a new chunk with QUEUED should prevent advancement.
-		String newChunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, null);
-
-		canAdvance = mySvc.canAdvanceInstanceToNextStep(instanceId, STEP_CHUNK_ID);
-		assertFalse(canAdvance);
-
-		//Toggle it to complete
-		mySvc.onWorkChunkCompletion(new WorkChunkCompletionEvent(newChunkId, 50, 0));
-		canAdvance = mySvc.canAdvanceInstanceToNextStep(instanceId, STEP_CHUNK_ID);
-		assertTrue(canAdvance);
-
-		//Create a new chunk and set it in progress.
-		String newerChunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, null);
-		mySvc.onWorkChunkDequeue(newerChunkId);
-		canAdvance = mySvc.canAdvanceInstanceToNextStep(instanceId, STEP_CHUNK_ID);
-		assertFalse(canAdvance);
-
-		//Toggle IN_PROGRESS to complete
-		mySvc.onWorkChunkCompletion(new WorkChunkCompletionEvent(newerChunkId, 50, 0));
-		canAdvance = mySvc.canAdvanceInstanceToNextStep(instanceId, STEP_CHUNK_ID);
-		assertTrue(canAdvance);
+		latch.awaitExpected();
+		verify(myBatchSender).sendWorkChannelMessage(any());
+		clearInvocations(myBatchSender);
 	}
 
 	@Test
 	public void testMarkChunkAsCompleted_Error() {
-		JobInstance instance = createInstance();
+		boolean isGatedExecution = false;
+		PointcutLatch latch = new PointcutLatch("senderlatch");
+		doAnswer(a -> {
+			latch.call(1);
+			return Void.class;
+		}).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));
+		latch.setExpectedCount(1);
+		myMaintenanceService.enableMaintenancePass(false);
+
+		JobInstance instance = createInstance(true, isGatedExecution);
 		String instanceId = mySvc.storeNewInstance(instance);
-		String chunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, null);
+		String chunkId = storeWorkChunk(JOB_DEFINITION_ID, TestJobDefinitionUtils.FIRST_STEP_ID, instanceId, SEQUENCE_NUMBER, null, isGatedExecution);
 		assertNotNull(chunkId);
 
-		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new).getStatus()));
-		sleepUntilTimeChanges();
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.READY, findChunkByIdOrThrow(chunkId).getStatus()));
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, findChunkByIdOrThrow(chunkId).getStatus()));
 
 		WorkChunk chunk = mySvc.onWorkChunkDequeue(chunkId).orElseThrow(IllegalArgumentException::new);
 		assertEquals(SEQUENCE_NUMBER, chunk.getSequence());
 		assertEquals(WorkChunkStatusEnum.IN_PROGRESS, chunk.getStatus());
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		WorkChunkErrorEvent request = new WorkChunkErrorEvent(chunkId).setErrorMsg("This is an error message");
 		mySvc.onWorkChunkError(request);
 		runInTransaction(() -> {
-			Batch2WorkChunkEntity entity = myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new);
+			Batch2WorkChunkEntity entity = findChunkByIdOrThrow(chunkId);
 			assertEquals(WorkChunkStatusEnum.ERRORED, entity.getStatus());
 			assertEquals("This is an error message", entity.getErrorMessage());
 			assertNotNull(entity.getCreateTime());
@@ -568,7 +765,7 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		WorkChunkErrorEvent request2 = new WorkChunkErrorEvent(chunkId).setErrorMsg("This is an error message 2");
 		mySvc.onWorkChunkError(request2);
 		runInTransaction(() -> {
-			Batch2WorkChunkEntity entity = myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new);
+			Batch2WorkChunkEntity entity = findChunkByIdOrThrow(chunkId);
 			assertEquals(WorkChunkStatusEnum.ERRORED, entity.getStatus());
 			assertEquals("This is an error message 2", entity.getErrorMessage());
 			assertNotNull(entity.getCreateTime());
@@ -582,28 +779,39 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 		List<WorkChunk> chunks = ImmutableList.copyOf(mySvc.fetchAllWorkChunksIterator(instanceId, true));
 		assertEquals(1, chunks.size());
 		assertEquals(2, chunks.get(0).getErrorCount());
 
+		verify(myBatchSender).sendWorkChannelMessage(any());
+		clearInvocations(myBatchSender);
 	}
 
 	@Test
-	public void testMarkChunkAsCompleted_Fail() {
-		JobInstance instance = createInstance();
+	public void testMarkChunkAsCompleted_Fail() throws InterruptedException {
+		boolean isGatedExecution = false;
+		myMaintenanceService.enableMaintenancePass(false);
+		JobInstance instance = createInstance(true, isGatedExecution);
 		String instanceId = mySvc.storeNewInstance(instance);
-		String chunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, null);
+		String chunkId = storeWorkChunk(DEF_CHUNK_ID, STEP_CHUNK_ID, instanceId, SEQUENCE_NUMBER, null, isGatedExecution);
 		assertNotNull(chunkId);
+		PointcutLatch latch = new PointcutLatch("senderlatch");
+		doAnswer(a -> {
+			latch.call(1);
+			return Void.class;
+		}).when(myBatchSender).sendWorkChannelMessage(any(JobWorkNotification.class));
+		latch.setExpectedCount(1);
 
-		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new).getStatus()));
-		sleepUntilTimeChanges();
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.READY, findChunkByIdOrThrow(chunkId).getStatus()));
+		myBatch2JobHelper.runMaintenancePass();
+		runInTransaction(() -> assertEquals(WorkChunkStatusEnum.QUEUED, findChunkByIdOrThrow(chunkId).getStatus()));
 
 		WorkChunk chunk = mySvc.onWorkChunkDequeue(chunkId).orElseThrow(IllegalArgumentException::new);
 		assertEquals(SEQUENCE_NUMBER, chunk.getSequence());
 		assertEquals(WorkChunkStatusEnum.IN_PROGRESS, chunk.getStatus());
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		mySvc.onWorkChunkFailed(chunkId, "This is an error message");
 		runInTransaction(() -> {
-			Batch2WorkChunkEntity entity = myWorkChunkRepository.findById(chunkId).orElseThrow(IllegalArgumentException::new);
+			Batch2WorkChunkEntity entity = findChunkByIdOrThrow(chunkId);
 			assertEquals(WorkChunkStatusEnum.FAILED, entity.getStatus());
 			assertEquals("This is an error message", entity.getErrorMessage());
 			assertNotNull(entity.getCreateTime());
@@ -612,6 +820,10 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 			assertTrue(entity.getCreateTime().getTime() < entity.getStartTime().getTime());
 			assertTrue(entity.getStartTime().getTime() < entity.getEndTime().getTime());
 		});
+
+		latch.awaitExpected();
+		verify(myBatchSender)
+			.sendWorkChannelMessage(any());
+		clearInvocations(myBatchSender);
 	}
 
 	@Test
@@ -626,7 +838,8 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 				"stepId",
 				instanceId,
 				0,
-				"{}"
+				"{}",
+				false
 			);
 			String id = mySvc.onWorkChunkCreate(chunk);
 			chunkIds.add(id);
@@ -674,15 +887,57 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 			.orElseThrow(IllegalArgumentException::new));
 	}
 
+	private JobInstance createInstance() {
+		return createInstance(false, false);
+	}
+
 	@Nonnull
-	private JobInstance createInstance() {
+	private JobInstance createInstance(boolean theCreateJobDefBool, boolean theCreateGatedJob) {
 		JobInstance instance = new JobInstance();
 		instance.setJobDefinitionId(JOB_DEFINITION_ID);
 		instance.setStatus(StatusEnum.QUEUED);
 		instance.setJobDefinitionVersion(JOB_DEF_VER);
 		instance.setParameters(CHUNK_DATA);
 		instance.setReport("TEST");
 
+		if (theCreateJobDefBool) {
+			JobDefinition<?> jobDef;
+
+			if (theCreateGatedJob) {
+				jobDef = TestJobDefinitionUtils.buildGatedJobDefinition(
+					JOB_DEFINITION_ID,
+					(step, sink) -> {
+						sink.accept(new FirstStepOutput());
+						return RunOutcome.SUCCESS;
+					},
+					(step, sink) -> {
+						return RunOutcome.SUCCESS;
+					},
+					theDetails -> {
+					}
+				);
+				instance.setCurrentGatedStepId(jobDef.getFirstStepId());
+			} else {
+				jobDef = TestJobDefinitionUtils.buildJobDefinition(
+					JOB_DEFINITION_ID,
+					(step, sink) -> {
+						sink.accept(new FirstStepOutput());
+						return RunOutcome.SUCCESS;
+					},
+					(step, sink) -> {
+						return RunOutcome.SUCCESS;
+					},
+					theDetails -> {
+					}
+				);
+			}
+			if (myJobDefinitionRegistry.getJobDefinition(jobDef.getJobDefinitionId(), jobDef.getJobDefinitionVersion()).isEmpty()) {
+				myJobDefinitionRegistry.addJobDefinition(jobDef);
+			}
+		}
+
 		return instance;
 	}
 
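A note on the helper change above: the zero-argument createInstance() survives as a thin wrapper, so untouched call sites keep compiling, while tests that need a registered (and optionally gated) job definition opt in explicitly. Usage, as the reworked tests do it (illustrative, not part of the diff):

    // Plain instance, no job definition registered (legacy behaviour):
    JobInstance plain = createInstance();

    // Instance backed by a registered two-step gated job definition; work
    // chunks stored for the second step are created in GATE_WAITING:
    JobInstance gated = createInstance(true, true);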
@@ -719,4 +974,12 @@ public class JpaJobPersistenceImplTest extends BaseJpaR4Test {
 			Arguments.of(WorkChunkStatusEnum.COMPLETED, false)
 		);
 	}
 
+	private Batch2JobInstanceEntity findInstanceByIdOrThrow(String instanceId) {
+		return myJobInstanceRepository.findById(instanceId).orElseThrow(IllegalStateException::new);
+	}
+
+	private Batch2WorkChunkEntity findChunkByIdOrThrow(String secondChunkId) {
+		return myWorkChunkRepository.findById(secondChunkId).orElseThrow(IllegalArgumentException::new);
+	}
 }
 
@@ -13,7 +13,6 @@ import ca.uhn.fhir.jpa.api.model.BulkExportJobResults;
 import ca.uhn.fhir.jpa.batch.models.Batch2JobStartResponse;
 import ca.uhn.fhir.jpa.batch2.JpaJobPersistenceImpl;
 import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkRepository;
-import ca.uhn.fhir.jpa.entity.Batch2WorkChunkEntity;
 import ca.uhn.fhir.jpa.model.util.JpaConstants;
 import ca.uhn.fhir.jpa.provider.BaseResourceProviderR4Test;
 import ca.uhn.fhir.rest.api.Constants;
@@ -31,7 +30,6 @@ import ca.uhn.fhir.util.JsonUtil;
 import com.google.common.collect.Sets;
 import jakarta.annotation.Nonnull;
 import org.apache.commons.io.LineIterator;
-import org.apache.commons.lang3.StringUtils;
 import org.apache.http.client.methods.CloseableHttpResponse;
 import org.apache.http.client.methods.HttpGet;
 import org.apache.http.client.methods.HttpPost;
@@ -72,7 +70,6 @@ import org.mockito.Spy;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.data.domain.PageRequest;
 
 import java.io.IOException;
 import java.io.StringReader;
@@ -85,10 +82,9 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.stream.Stream;
 
-import static ca.uhn.fhir.batch2.jobs.export.BulkExportAppCtx.CREATE_REPORT_STEP;
-import static ca.uhn.fhir.batch2.jobs.export.BulkExportAppCtx.WRITE_TO_BINARIES;
 import static ca.uhn.fhir.jpa.dao.r4.FhirResourceDaoR4TagsInlineTest.createSearchParameterForInlineSecurity;
 import static org.apache.commons.lang3.StringUtils.isNotBlank;
 import static org.awaitility.Awaitility.await;
@@ -477,7 +473,8 @@ public class BulkDataExportTest extends BaseResourceProviderR4Test {
 		verifyBulkExportResults(options, ids, new ArrayList<>());
 
 		assertFalse(valueSet.isEmpty());
-		assertEquals(ids.size(), valueSet.size());
+		assertEquals(ids.size(), valueSet.size(),
+			"Expected " + String.join(", ", ids) + ". Actual : " + String.join(", ", valueSet));
 		for (String id : valueSet) {
 			// should start with our value from the key-value pairs
 			assertTrue(id.startsWith(value));
@@ -898,6 +895,7 @@ public class BulkDataExportTest extends BaseResourceProviderR4Test {
 		options.setResourceTypes(Sets.newHashSet("Patient", "Observation", "CarePlan", "MedicationAdministration", "ServiceRequest"));
 		options.setExportStyle(BulkExportJobParameters.ExportStyle.PATIENT);
 		options.setOutputFormat(Constants.CT_FHIR_NDJSON);
 
 		verifyBulkExportResults(options, List.of("Patient/P1", carePlanId, medAdminId, sevReqId, obsSubId, obsPerId), Collections.emptyList());
 	}
 
@@ -1096,7 +1094,6 @@ public class BulkDataExportTest extends BaseResourceProviderR4Test {
 			String resourceType = file.getKey();
 			List<String> binaryIds = file.getValue();
 			for (var nextBinaryId : binaryIds) {
 
 				String nextBinaryIdPart = new IdType(nextBinaryId).getIdPart();
 				assertThat(nextBinaryIdPart, matchesPattern("[a-zA-Z0-9]{32}"));
 
@@ -1105,6 +1102,7 @@ public class BulkDataExportTest extends BaseResourceProviderR4Test {
 
 				String nextNdJsonFileContent = new String(binary.getContent(), Constants.CHARSET_UTF8);
 				try (var iter = new LineIterator(new StringReader(nextNdJsonFileContent))) {
+					AtomicBoolean gate = new AtomicBoolean(false);
 					iter.forEachRemaining(t -> {
 						if (isNotBlank(t)) {
 							IBaseResource next = myFhirContext.newJsonParser().parseResource(t);
@@ -1117,7 +1115,10 @@ public class BulkDataExportTest extends BaseResourceProviderR4Test {
 						}
 					}
 				}
+						gate.set(true);
 					});
+					await().atMost(400, TimeUnit.MILLISECONDS)
+						.until(gate::get);
 				} catch (IOException e) {
 					fail(e.toString());
 				}
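One small idiom in the NDJSON check above: forEachRemaining gives the caller nothing back, so the test flips an AtomicBoolean inside the lambda and then polls it with Awaitility, proving the iterator actually yielded (and finished processing) at least one line. Reduced to a fragment, with assumed placeholder names (someIterator is not from the diff):

    // The "gate" idiom from the test above, in isolation.
    AtomicBoolean gate = new AtomicBoolean(false);
    someIterator.forEachRemaining(line -> {
        // ... per-line assertions ...
        gate.set(true); // the lambda body ran to completion at least once
    });
    await().atMost(400, TimeUnit.MILLISECONDS).until(gate::get);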
@@ -93,7 +93,6 @@ import org.springframework.transaction.support.TransactionTemplate;
 import jakarta.annotation.Nonnull;
 import java.io.IOException;
 import java.io.InputStream;
-import java.io.UnsupportedEncodingException;
 import java.math.BigDecimal;
 import java.nio.charset.StandardCharsets;
 import java.util.Arrays;
@@ -582,13 +581,13 @@ public class FhirSystemDaoR4Test extends BaseJpaR4SystemTest {
 		p.addName().setFamily("family");
 		final IIdType id = myPatientDao.create(p, mySrd).getId().toUnqualified();
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		ValueSet vs = new ValueSet();
 		vs.setUrl("http://foo");
 		myValueSetDao.create(vs, mySrd);
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		ResourceTable entity = new TransactionTemplate(myTxManager).execute(t -> myEntityManager.find(ResourceTable.class, id.getIdPartAsLong()));
 		assertEquals(Long.valueOf(1), entity.getIndexStatus());
@@ -18,7 +18,10 @@ import ca.uhn.fhir.jpa.model.entity.ResourceTable;
 import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
 import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
 import ca.uhn.fhir.jpa.test.PatientReindexTestHelper;
+import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
 import ca.uhn.fhir.rest.server.exceptions.ResourceGoneException;
+import jakarta.annotation.PostConstruct;
+import jakarta.persistence.Query;
 import org.hl7.fhir.instance.model.api.IIdType;
 import org.hl7.fhir.r4.model.Observation;
 import org.hl7.fhir.r4.model.Patient;
@@ -30,8 +33,6 @@ import org.junit.jupiter.params.provider.Arguments;
 import org.junit.jupiter.params.provider.MethodSource;
 import org.springframework.beans.factory.annotation.Autowired;
 
-import jakarta.annotation.PostConstruct;
-import jakarta.persistence.Query;
 import java.util.Date;
 import java.util.List;
 import java.util.stream.Stream;
@@ -263,7 +264,7 @@ public class ReindexJobTest extends BaseJpaR4Test {
 			.setOptimizeStorage(ReindexParameters.OptimizeStorageModeEnum.CURRENT_VERSION)
 			.setReindexSearchParameters(ReindexParameters.ReindexSearchParametersEnum.NONE)
 		);
-		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(startRequest);
+		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), startRequest);
 		JobInstance outcome = myBatch2JobHelper.awaitJobCompletion(startResponse);
 		assertEquals(10, outcome.getCombinedRecordsProcessed());
 
@@ -358,7 +359,7 @@ public class ReindexJobTest extends BaseJpaR4Test {
 			myReindexTestHelper.createObservationWithAlleleExtension(Observation.ObservationStatus.FINAL);
 		}
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		myReindexTestHelper.createAlleleSearchParameter();
 		mySearchParamRegistry.forceRefresh();
@@ -390,7 +391,7 @@ public class ReindexJobTest extends BaseJpaR4Test {
 		JobInstanceStartRequest startRequest = new JobInstanceStartRequest();
 		startRequest.setJobDefinitionId(ReindexAppCtx.JOB_REINDEX);
 		startRequest.setParameters(new ReindexJobParameters());
-		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(startRequest);
+		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), startRequest);
 		JobInstance myJob = myBatch2JobHelper.awaitJobCompletion(startResponse);
 
 		assertEquals(StatusEnum.COMPLETED, myJob.getStatus());
@@ -445,7 +446,7 @@ public class ReindexJobTest extends BaseJpaR4Test {
 		JobInstanceStartRequest startRequest = new JobInstanceStartRequest();
 		startRequest.setJobDefinitionId(ReindexAppCtx.JOB_REINDEX);
 		startRequest.setParameters(new ReindexJobParameters());
-		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(startRequest);
+		Batch2JobStartResponse startResponse = myJobCoordinator.startInstance(new SystemRequestDetails(), startRequest);
 		JobInstance outcome = myBatch2JobHelper.awaitJobFailure(startResponse);
 
 		// Verify
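Across these reindex tests, job starts now pass an explicit request context. For anyone updating similar call sites, the change is mechanical:

    // Old (no request context):
    // myJobCoordinator.startInstance(startRequest);

    // New: the caller supplies a RequestDetails; tests use a system-level one.
    Batch2JobStartResponse response =
        myJobCoordinator.startInstance(new SystemRequestDetails(), startRequest);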
@@ -82,6 +82,7 @@ import static org.hamcrest.Matchers.hasSize;
 import static org.hamcrest.Matchers.startsWith;
 import static org.junit.jupiter.api.Assertions.assertEquals;
 import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
 import static org.junit.jupiter.api.Assertions.assertNull;
 import static org.junit.jupiter.api.Assertions.assertTrue;
 import static org.junit.jupiter.api.Assertions.fail;
@@ -440,8 +441,9 @@ public class AuthorizationInterceptorJpaR4Test extends BaseResourceProviderR4Tes
 		}.setValidationSupport(myValidationSupport));
 
 		// Should be ok
-		myClient.read().resource(Observation.class).withId("Observation/allowed").execute();
+		Observation result = myClient.read().resource(Observation.class).withId("Observation/allowed").execute();
+
+		assertNotNull(result);
 	}
 
 	@Test
@@ -463,8 +465,10 @@ public class AuthorizationInterceptorJpaR4Test extends BaseResourceProviderR4Tes
 		}.setValidationSupport(myValidationSupport));
 
 		// Should be ok
-		myClient.read().resource(Patient.class).withId("Patient/P").execute();
-		myClient.read().resource(Observation.class).withId("Observation/O").execute();
+		Patient pat = myClient.read().resource(Patient.class).withId("Patient/P").execute();
+		Observation obs = myClient.read().resource(Observation.class).withId("Observation/O").execute();
+		assertNotNull(pat);
+		assertNotNull(obs);
 	}
 
 	/**
@@ -244,12 +244,15 @@ public class ResourceProviderCustomSearchParamR4Test extends BaseResourceProvide
 		mySearchParameterDao.create(fooSp, mySrd);
 
 		runInTransaction(() -> {
+			myBatch2JobHelper.forceRunMaintenancePass();
+
 			List<JobInstance> allJobs = myBatch2JobHelper.findJobsByDefinition(ReindexAppCtx.JOB_REINDEX);
 			assertEquals(1, allJobs.size());
 			assertEquals(1, allJobs.get(0).getParameters(ReindexJobParameters.class).getPartitionedUrls().size());
 			assertEquals("Patient?", allJobs.get(0).getParameters(ReindexJobParameters.class).getPartitionedUrls().get(0).getUrl());
 		});
 
+		myBatch2JobHelper.awaitNoJobsRunning();
 	}
 
 	@Test
@@ -3,9 +3,11 @@ package ca.uhn.fhir.jpa.provider.r4;
 import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
 import ca.uhn.fhir.jpa.model.util.JpaConstants;
 import ca.uhn.fhir.jpa.provider.BaseResourceProviderR4Test;
+import ca.uhn.fhir.jpa.test.config.TestR4Config;
 import ca.uhn.fhir.rest.server.exceptions.NotImplementedOperationException;
 import com.google.common.base.Charsets;
 import org.apache.commons.io.IOUtils;
+import org.hl7.fhir.instance.model.api.IBaseResource;
 import org.hl7.fhir.instance.model.api.IIdType;
 import org.hl7.fhir.r4.model.Bundle;
 import org.hl7.fhir.r4.model.Bundle.BundleEntryComponent;
@@ -24,19 +26,32 @@ import org.junit.jupiter.api.Test;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.concurrent.CompletionService;
+import java.util.concurrent.ExecutorCompletionService;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
 
+import static org.awaitility.Awaitility.await;
 import static org.hamcrest.CoreMatchers.containsString;
 import static org.hamcrest.MatcherAssert.assertThat;
 import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertTrue;
 import static org.junit.jupiter.api.Assertions.fail;
 
 public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 
 	private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(ResourceProviderR4BundleTest.class);
 
+	private static final int DESIRED_MAX_THREADS = 5;
+
+	static {
+		if (TestR4Config.ourMaxThreads == null || TestR4Config.ourMaxThreads < DESIRED_MAX_THREADS) {
+			TestR4Config.ourMaxThreads = DESIRED_MAX_THREADS;
+		}
+	}
+
 	@BeforeEach
 	@Override
 	public void before() throws Exception {
@@ -52,6 +67,7 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 		myStorageSettings.setBundleBatchPoolSize(JpaStorageSettings.DEFAULT_BUNDLE_BATCH_POOL_SIZE);
 		myStorageSettings.setBundleBatchMaxPoolSize(JpaStorageSettings.DEFAULT_BUNDLE_BATCH_MAX_POOL_SIZE);
 	}
 
 	/**
 	 * See #401
 	 */
@@ -69,14 +85,13 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 
 		Bundle retBundle = myClient.read().resource(Bundle.class).withId(id).execute();
 
 		ourLog.debug(myFhirContext.newXmlParser().setPrettyPrint(true).encodeResourceToString(retBundle));
 
 		assertEquals("http://foo/", bundle.getEntry().get(0).getFullUrl());
 	}
 
 	@Test
 	public void testProcessMessage() {
 
 		Bundle bundle = new Bundle();
 		bundle.setType(BundleType.MESSAGE);
 
@@ -117,22 +132,41 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 
 	}
 
 	@Test
-	public void testHighConcurrencyWorks() throws IOException, InterruptedException {
+	public void testHighConcurrencyWorks() throws IOException {
 		List<Bundle> bundles = new ArrayList<>();
 		for (int i =0 ; i < 10; i ++) {
 			bundles.add(myFhirContext.newJsonParser().parseResource(Bundle.class, IOUtils.toString(getClass().getResourceAsStream("/r4/identical-tags-batch.json"), Charsets.UTF_8)));
 		}
 
-		ExecutorService tpe = Executors.newFixedThreadPool(4);
-		for (Bundle bundle :bundles) {
-			tpe.execute(() -> myClient.transaction().withBundle(bundle).execute());
-		}
-		tpe.shutdown();
-		tpe.awaitTermination(100, TimeUnit.SECONDS);
-	}
+		int desiredMaxThreads = DESIRED_MAX_THREADS - 1;
+		int maxThreads = TestR4Config.getMaxThreads();
+		// we want strictly > because we want at least 1 extra thread hanging around for
+		// any spun off processes needed internally during the transaction
+		assertTrue(maxThreads > desiredMaxThreads, String.format("Wanted > %d threads, but we only have %d available", desiredMaxThreads, maxThreads));
+		ExecutorService tpe = Executors.newFixedThreadPool(desiredMaxThreads);
+		CompletionService<Bundle> completionService = new ExecutorCompletionService<>(tpe);
+
+		for (Bundle bundle : bundles) {
+			completionService.submit(() -> myClient.transaction().withBundle(bundle).execute());
+		}
+
+		int count = 0;
+		int expected = bundles.size();
+		while (count < expected) {
+			try {
+				completionService.take();
+				count++;
+			} catch (Exception ex) {
+				ourLog.error(ex.getMessage());
+				fail(ex.getMessage());
+			}
+		}
+
+		tpe.shutdown();
+		await().atMost(100, TimeUnit.SECONDS)
+			.until(tpe::isShutdown);
+	}
 
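The rewritten concurrency test swaps fire-and-forget execute() calls for an ExecutorCompletionService, so each transaction's result (or exception) is consumed as it completes rather than trusting a fixed awaitTermination window to be long enough. The idiom in isolation, as a hedged sketch (runAll and its pool size are illustrative, not from the diff):

    import java.util.List;
    import java.util.concurrent.*;

    // Submit every task, then consume completions one by one; take().get()
    // rethrows any task's exception instead of letting it vanish silently.
    static <T> void runAll(List<Callable<T>> tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<T> cs = new ExecutorCompletionService<>(pool);
        tasks.forEach(cs::submit);
        for (int i = 0; i < tasks.size(); i++) {
            cs.take().get(); // blocks for the next finished task
        }
        pool.shutdown();
    }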
 	@Test
 	public void testBundleBatchWithSingleThread() {
@@ -144,8 +178,9 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 		Bundle input = new Bundle();
 		input.setType(BundleType.BATCH);
 
-		for (String id : ids)
+		for (String id : ids) {
 			input.addEntry().getRequest().setMethod(HTTPVerb.GET).setUrl(id);
+		}
 
 		Bundle output = myClient.transaction().withBundle(input).execute();
 
@@ -158,9 +193,8 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 		for (BundleEntryComponent bundleEntry : bundleEntries) {
 			assertEquals(ids.get(i++), bundleEntry.getResource().getIdElement().toUnqualifiedVersionless().getValueAsString());
 		}
 
 	}
 
 	@Test
 	public void testBundleBatchWithError() {
 		List<String> ids = createPatients(5);
@@ -351,7 +385,8 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {
 		bundle.getEntry().forEach(entry -> carePlans.add((CarePlan) entry.getResource()));
 
 		// Post CarePlans should not get: HAPI-2006: Unable to perform PUT, URL provided is invalid...
-		myClient.transaction().withResources(carePlans).execute();
+		List<IBaseResource> result = myClient.transaction().withResources(carePlans).execute();
+		assertFalse(result.isEmpty());
 	}
 
 }

@@ -6,6 +6,7 @@ import ca.uhn.fhir.jpa.model.entity.ResourceTable;
 import ca.uhn.fhir.jpa.model.util.JpaConstants;
 import ca.uhn.fhir.jpa.provider.BaseResourceProviderR4Test;
 import ca.uhn.fhir.jpa.term.TermTestUtil;
+import ca.uhn.fhir.jpa.term.api.ITermDeferredStorageSvc;
 import ca.uhn.fhir.rest.server.exceptions.InvalidRequestException;
 import ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException;
 import org.apache.commons.io.IOUtils;

@@ -26,10 +27,13 @@ import org.hl7.fhir.r4.model.UriType;
 import org.hl7.fhir.r4.model.codesystems.ConceptSubsumptionOutcome;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
+import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.transaction.annotation.Transactional;
 
 import java.io.IOException;
+import java.util.concurrent.TimeUnit;
 
+import static org.awaitility.Awaitility.await;
 import static org.junit.jupiter.api.Assertions.assertEquals;
 import static org.junit.jupiter.api.Assertions.assertFalse;
 import static org.junit.jupiter.api.Assertions.assertTrue;

@@ -37,12 +41,16 @@ import static org.junit.jupiter.api.Assertions.fail;
 
 public class ResourceProviderR4CodeSystemTest extends BaseResourceProviderR4Test {
 
 
 	private static final String SYSTEM_PARENTCHILD = "http://parentchild";
 	private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(ResourceProviderR4CodeSystemTest.class);
 	private static final String CS_ACME_URL = "http://acme.org";
 	private Long parentChildCsId;
 	private IIdType myCsId;
+
+	@Autowired
+	private ITermDeferredStorageSvc myITermDeferredStorageSvc;
+
 	@BeforeEach
 	@Transactional
 	public void before02() throws IOException {

@@ -63,6 +71,13 @@ public class ResourceProviderR4CodeSystemTest extends BaseResourceProviderR4Test
 		DaoMethodOutcome parentChildCsOutcome = myCodeSystemDao.create(parentChildCs);
 		parentChildCsId = ((ResourceTable) parentChildCsOutcome.getEntity()).getId();
 
+		// ensure all terms are loaded
+		await().atMost(5, TimeUnit.SECONDS)
+			.until(() -> {
+				myBatch2JobHelper.forceRunMaintenancePass();
+				myITermDeferredStorageSvc.saveDeferred();
+				return myITermDeferredStorageSvc.isStorageQueueEmpty(true);
+			});
 	}
 
 	@Test
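
The new setup block above is the standard Awaitility poll-then-check idiom: do a unit of work, then re-evaluate the exit condition until it holds or atMost() expires. A self-contained sketch of the same idiom, for reference only; the counter here is a hypothetical stand-in for the deferred-storage queue, not HAPI API:

	import static org.awaitility.Awaitility.await;

	import java.util.concurrent.TimeUnit;
	import java.util.concurrent.atomic.AtomicInteger;

	class AwaitilityIdiomSketch {
		// Hypothetical stand-in for the queue drained in the test above.
		private final AtomicInteger workQueue = new AtomicInteger(3);

		void drainQueue() {
			await().atMost(5, TimeUnit.SECONDS)
				.until(() -> {
					// each poll completes one unit of deferred work (stand-in for saveDeferred())
					workQueue.updateAndGet(n -> Math.max(0, n - 1));
					// exit condition (stand-in for isStorageQueueEmpty(true))
					return workQueue.get() == 0;
				});
		}
	}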

@@ -30,22 +30,22 @@ public class ResourceReindexSvcImplTest extends BaseJpaR4Test {
 		// Setup
 
 		createPatient(withActiveFalse());
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		Date start = new Date();
 
 		Long id0 = createPatient(withActiveFalse()).getIdPartAsLong();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Long id1 = createPatient(withActiveFalse()).getIdPartAsLong();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Date beforeLastInRange = new Date();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Long id2 = createObservation(withObservationCode("http://foo", "bar")).getIdPartAsLong();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		Date end = new Date();
 
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		createPatient(withActiveFalse());
 

@@ -103,26 +103,26 @@ public class ResourceReindexSvcImplTest extends BaseJpaR4Test {
 		// Setup
 
 		final Long patientId0 = createPatient(withActiveFalse()).getIdPartAsLong();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		// Start of resources within range
 		Date start = new Date();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Long patientId1 = createPatient(withActiveFalse()).getIdPartAsLong();
 		createObservation(withObservationCode("http://foo", "bar"));
 		createObservation(withObservationCode("http://foo", "bar"));
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Date beforeLastInRange = new Date();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Long patientId2 = createPatient(withActiveFalse()).getIdPartAsLong();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		Date end = new Date();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 		// End of resources within range
 
 		createObservation(withObservationCode("http://foo", "bar"));
 		final Long patientId3 = createPatient(withActiveFalse()).getIdPartAsLong();
-		sleepUntilTimeChanges();
+		sleepUntilTimeChange();
 
 		// Execute
 

@@ -31,6 +31,7 @@ import ca.uhn.fhir.jpa.term.ZipCollectionBuilder;
 import ca.uhn.fhir.jpa.term.models.TermCodeSystemDeleteJobParameters;
 import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
 import ca.uhn.fhir.jpa.test.Batch2JobHelper;
+import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
 import ca.uhn.fhir.rest.server.exceptions.InvalidRequestException;
 import ca.uhn.fhir.rest.server.servlet.ServletRequestDetails;
 import ca.uhn.fhir.util.JsonUtil;

@@ -127,7 +128,7 @@ public class TermCodeSystemDeleteJobTest extends BaseJpaR4Test {
 		JobInstanceStartRequest request = new JobInstanceStartRequest();
 		request.setJobDefinitionId(TERM_CODE_SYSTEM_DELETE_JOB_NAME);
 		request.setParameters(JsonUtil.serialize(parameters));
-		Batch2JobStartResponse response = myJobCoordinator.startInstance(request);
+		Batch2JobStartResponse response = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
 
 		myBatch2JobHelper.awaitJobCompletion(response);
 

@@ -147,7 +148,7 @@ public class TermCodeSystemDeleteJobTest extends BaseJpaR4Test {
 		request.setParameters(new TermCodeSystemDeleteJobParameters()); // no pid
 
 		InvalidRequestException exception = assertThrows(InvalidRequestException.class, () -> {
-			myJobCoordinator.startInstance(request);
+			myJobCoordinator.startInstance(new SystemRequestDetails(), request);
 		});
 		assertTrue(exception.getMessage().contains("Invalid Term Code System PID 0"), exception.getMessage());
 	}
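
Note the API change threaded through both tests: startInstance() now takes a RequestDetails as its first argument, so system-level callers pass a SystemRequestDetails. The calling convention, extracted from the hunks above (the job id is a placeholder):

	JobInstanceStartRequest request = new JobInstanceStartRequest();
	request.setJobDefinitionId("some-job-definition-id"); // placeholder id
	request.setParameters(JsonUtil.serialize(parameters));

	// System-level work that is not tied to a user request uses SystemRequestDetails.
	Batch2JobStartResponse response = myJobCoordinator.startInstance(new SystemRequestDetails(), request);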

@@ -0,0 +1,67 @@
+package ca.uhn.fhir.testjob;
+
+import ca.uhn.fhir.batch2.api.IJobCompletionHandler;
+import ca.uhn.fhir.batch2.api.IJobStepWorker;
+import ca.uhn.fhir.batch2.api.VoidModel;
+import ca.uhn.fhir.batch2.model.JobDefinition;
+import ca.uhn.fhir.model.api.IModelJson;
+import ca.uhn.fhir.testjob.models.FirstStepOutput;
+import ca.uhn.fhir.testjob.models.TestJobParameters;
+
+@SuppressWarnings({"unchecked", "rawtypes"})
+public class TestJobDefinitionUtils {
+
+	public static final int TEST_JOB_VERSION = 1;
+	public static final String FIRST_STEP_ID = "first-step";
+	public static final String LAST_STEP_ID = "last-step";
+
+	/**
+	 * Creates a test job definition.
+	 * This job will not be gated.
+	 */
+	public static JobDefinition<? extends IModelJson> buildJobDefinition(
+			String theJobId,
+			IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> theFirstStep,
+			IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> theLastStep,
+			IJobCompletionHandler<TestJobParameters> theCompletionHandler) {
+		return getJobBuilder(theJobId, theFirstStep, theLastStep, theCompletionHandler).build();
+	}
+
+	/**
+	 * Creates a test job definition.
+	 * This job will be gated.
+	 */
+	public static JobDefinition<? extends IModelJson> buildGatedJobDefinition(
+			String theJobId,
+			IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> theFirstStep,
+			IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> theLastStep,
+			IJobCompletionHandler<TestJobParameters> theCompletionHandler) {
+		return getJobBuilder(theJobId, theFirstStep, theLastStep, theCompletionHandler)
+			.gatedExecution().build();
+	}
+
+	private static JobDefinition.Builder getJobBuilder(
+			String theJobId,
+			IJobStepWorker<TestJobParameters, VoidModel, FirstStepOutput> theFirstStep,
+			IJobStepWorker<TestJobParameters, FirstStepOutput, VoidModel> theLastStep,
+			IJobCompletionHandler<TestJobParameters> theCompletionHandler
+	) {
+		return JobDefinition.newBuilder()
+			.setJobDefinitionId(theJobId)
+			.setJobDescription("test job")
+			.setJobDefinitionVersion(TEST_JOB_VERSION)
+			.setParametersType(TestJobParameters.class)
+			.addFirstStep(
+				FIRST_STEP_ID,
+				"Test first step",
+				FirstStepOutput.class,
+				theFirstStep
+			)
+			.addLastStep(
+				LAST_STEP_ID,
+				"Test last step",
+				theLastStep
+			)
+			.completionHandler(theCompletionHandler);
+	}
+}
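
For orientation, a sketch of how a test might consume this utility, with the two step workers supplied as lambdas. This is illustrative only: the job id is made up, and the RunOutcome/IJobDataSink usage follows the batch2 step API as used elsewhere in the codebase rather than anything added by this file:

	// Hedged usage sketch (not part of the PR): build a gated two-step test job
	// whose steps are lambdas, then hand it to whatever registry the test wires up.
	JobDefinition<? extends IModelJson> definition = TestJobDefinitionUtils.buildGatedJobDefinition(
		"my-test-job",                                   // hypothetical job id
		(theStepDetails, theSink) -> {
			theSink.accept(new FirstStepOutput());       // pass work along to the last step
			return RunOutcome.SUCCESS;
		},
		(theStepDetails, theSink) -> RunOutcome.SUCCESS, // last step: nothing to emit
		theDetails -> ourLog.info("test job complete")   // completion handler
	);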

@@ -0,0 +1,9 @@
+package ca.uhn.fhir.testjob.models;
+
+import ca.uhn.fhir.model.api.IModelJson;
+
+/**
+ * Sample first step output for test job definitions created in {@link ca.uhn.fhir.testjob.TestJobDefinitionUtils}
+ */
+public class FirstStepOutput implements IModelJson {
+}

@@ -0,0 +1,9 @@
+package ca.uhn.fhir.testjob.models;
+
+import ca.uhn.fhir.model.api.IModelJson;
+
+/**
+ * Sample output object for reduction steps for test jobs created in {@link ca.uhn.fhir.testjob.TestJobDefinitionUtils}
+ */
+public class ReductionStepOutput implements IModelJson {
+}

@@ -0,0 +1,9 @@
+package ca.uhn.fhir.testjob.models;
+
+import ca.uhn.fhir.model.api.IModelJson;
+
+/**
+ * Sample job parameters; these are used for jobs created in {@link ca.uhn.fhir.testjob.TestJobDefinitionUtils}
+ */
+public class TestJobParameters implements IModelJson {
+}

@@ -6,7 +6,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -6,7 +6,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -6,7 +6,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -19,6 +19,7 @@
 */
 package ca.uhn.fhir.jpa.test;
 
+import ca.uhn.fhir.batch2.api.IJobMaintenanceService;
 import ca.uhn.fhir.batch2.jobs.export.BulkDataExportProvider;
 import ca.uhn.fhir.context.FhirContext;
 import ca.uhn.fhir.context.support.IValidationSupport;

@@ -218,6 +219,7 @@ import static org.hamcrest.MatcherAssert.assertThat;
 import static org.hamcrest.Matchers.contains;
 import static org.hamcrest.Matchers.empty;
 import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
 import static org.junit.jupiter.api.Assertions.fail;
 
 @ExtendWith(SpringExtension.class)

@@ -247,7 +249,7 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 	@Autowired
 	protected ISearchDao mySearchEntityDao;
 	@Autowired
-	private IBatch2JobInstanceRepository myJobInstanceRepository;
+	protected IBatch2JobInstanceRepository myJobInstanceRepository;
 	@Autowired
 	private IBatch2WorkChunkRepository myWorkChunkRepository;
 

@@ -553,11 +555,18 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 	@Autowired
 	protected TestDaoSearch myTestDaoSearch;
 
+	@Autowired
+	protected IJobMaintenanceService myJobMaintenanceService;
+
 	@RegisterExtension
 	private final PreventDanglingInterceptorsExtension myPreventDanglingInterceptorsExtension = new PreventDanglingInterceptorsExtension(()-> myInterceptorRegistry);
 
 	@AfterEach()
+	@Order(0)
 	public void afterCleanupDao() {
+		// make sure there are no running jobs
+		assertFalse(myBatch2JobHelper.hasRunningJobs());
+
 		myStorageSettings.setExpireSearchResults(new JpaStorageSettings().isExpireSearchResults());
 		myStorageSettings.setEnforceReferentialIntegrityOnDelete(new JpaStorageSettings().isEnforceReferentialIntegrityOnDelete());
 		myStorageSettings.setExpireSearchResultsAfterMillis(new JpaStorageSettings().getExpireSearchResultsAfterMillis());

@@ -572,6 +581,7 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 		myPagingProvider.setMaximumPageSize(BasePagingProvider.DEFAULT_MAX_PAGE_SIZE);
 
 		myPartitionSettings.setPartitioningEnabled(false);
+		ourLog.info("1 - " + getClass().getSimpleName() + ".afterCleanupDao");
 	}
 
 	@Override

@@ -580,6 +590,8 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 	public void afterResetInterceptors() {
 		super.afterResetInterceptors();
 		myInterceptorRegistry.unregisterInterceptor(myPerformanceTracingLoggingInterceptor);
+
+		ourLog.info("2 - " + getClass().getSimpleName() + ".afterResetInterceptors");
 	}
 
 	@AfterEach

@@ -590,6 +602,8 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 		TermConceptMappingSvcImpl.clearOurLastResultsFromTranslationWithReverseCache();
 		TermDeferredStorageSvcImpl termDeferredStorageSvc = AopTestUtils.getTargetObject(myTerminologyDeferredStorageSvc);
 		termDeferredStorageSvc.clearDeferred();
+
+		ourLog.info("4 - " + getClass().getSimpleName() + ".afterClearTerminologyCaches");
 	}
 
 	@BeforeEach

@@ -613,6 +627,21 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 
 	@AfterEach
 	public void afterPurgeDatabase() {
+		/*
+		 * We have to stop all scheduled jobs or they will
+		 * interfere with the database cleanup!
+		 */
+		ourLog.info("Pausing Schedulers");
+		mySchedulerService.pause();
+
+		myTerminologyDeferredStorageSvc.logQueueForUnitTest();
+		if (!myTermDeferredStorageSvc.isStorageQueueEmpty(true)) {
+			ourLog.warn("There is deferred terminology storage stuff still in the queue. Please verify your tests clean up ok.");
+			if (myTermDeferredStorageSvc instanceof TermDeferredStorageSvcImpl t) {
+				t.clearDeferred();
+			}
+		}
+
 		boolean registeredStorageInterceptor = false;
 		if (myMdmStorageInterceptor != null && !myInterceptorService.getAllRegisteredInterceptors().contains(myMdmStorageInterceptor)) {
 			myInterceptorService.registerInterceptor(myMdmStorageInterceptor);

@@ -635,6 +664,11 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 			myInterceptorService.unregisterInterceptor(myMdmStorageInterceptor);
 		}
 	}
+
+		// restart the jobs
+		ourLog.info("Restarting the schedulers");
+		mySchedulerService.unpause();
+		ourLog.info("5 - " + getClass().getSimpleName() + ".afterPurgeDatabases");
 	}
 
 	@BeforeEach

@@ -819,6 +853,7 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
 	@AfterEach
 	public void afterEachClearCaches() {
 		myJpaValidationSupportChainR4.invalidateCaches();
+		ourLog.info("3 - " + getClass().getSimpleName() + ".afterEachClearCaches");
 	}
 
 	private static void flattenExpansionHierarchy(List<String> theFlattenedHierarchy, List<TermConcept> theCodes, String thePrefix) {
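
The cleanup hook above pauses every scheduler before purging and unpauses afterwards, so no maintenance pass can write while test state is being torn down. The shape of that pattern, reduced to a hedged sketch (purgeDatabase() here is a hypothetical stand-in for the cleanup body):

	// Sketch of the pause/clean/unpause pattern, assuming an injected ISchedulerService.
	@AfterEach
	public void cleanUpSafely() {
		mySchedulerService.pause();       // stop scheduled maintenance/jobs
		try {
			purgeDatabase();              // hypothetical cleanup step for illustration
		} finally {
			mySchedulerService.unpause(); // always restore the schedulers for later tests
		}
	}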

@@ -69,6 +69,7 @@ import ca.uhn.fhir.jpa.model.entity.ResourceIndexedSearchParamToken;
 import ca.uhn.fhir.jpa.model.entity.ResourceIndexedSearchParamUri;
 import ca.uhn.fhir.jpa.model.entity.ResourceLink;
 import ca.uhn.fhir.jpa.model.entity.ResourceTable;
+import ca.uhn.fhir.jpa.model.sched.ISchedulerService;
 import ca.uhn.fhir.jpa.model.util.JpaConstants;
 import ca.uhn.fhir.jpa.partition.IPartitionLookupSvc;
 import ca.uhn.fhir.jpa.search.DatabaseBackedPagingProvider;

@@ -77,6 +78,7 @@ import ca.uhn.fhir.jpa.search.cache.ISearchResultCacheSvc;
 import ca.uhn.fhir.jpa.search.reindex.IResourceReindexingSvc;
 import ca.uhn.fhir.jpa.subscription.match.registry.SubscriptionLoader;
 import ca.uhn.fhir.jpa.subscription.match.registry.SubscriptionRegistry;
+import ca.uhn.fhir.jpa.term.api.ITermDeferredStorageSvc;
 import ca.uhn.fhir.jpa.util.CircularQueueCaptureQueriesListener;
 import ca.uhn.fhir.jpa.util.MemoryCacheService;
 import ca.uhn.fhir.rest.api.server.IBundleProvider;

@@ -243,6 +245,8 @@ public abstract class BaseJpaTest extends BaseTest {
 	protected ITermConceptPropertyDao myTermConceptPropertyDao;
 	@Autowired
 	private MemoryCacheService myMemoryCacheService;
+	@Autowired
+	protected ISchedulerService mySchedulerService;
 	@Qualifier(JpaConfig.JPA_VALIDATION_SUPPORT)
 	@Autowired
 	private IValidationSupport myJpaPersistedValidationSupport;

@@ -256,6 +260,8 @@ public abstract class BaseJpaTest extends BaseTest {
 	private IResourceHistoryTableDao myResourceHistoryTableDao;
 	@Autowired
 	private DaoRegistry myDaoRegistry;
+	@Autowired
+	protected ITermDeferredStorageSvc myTermDeferredStorageSvc;
 	private final List<Object> myRegisteredInterceptors = new ArrayList<>(1);
 
 	@SuppressWarnings("BusyWait")

@@ -291,7 +297,7 @@ public abstract class BaseJpaTest extends BaseTest {
 	}
 
 	@SuppressWarnings("BusyWait")
-	protected static void purgeDatabase(JpaStorageSettings theStorageSettings, IFhirSystemDao<?, ?> theSystemDao, IResourceReindexingSvc theResourceReindexingSvc, ISearchCoordinatorSvc theSearchCoordinatorSvc, ISearchParamRegistry theSearchParamRegistry, IBulkDataExportJobSchedulingHelper theBulkDataJobActivator) {
+	public static void purgeDatabase(JpaStorageSettings theStorageSettings, IFhirSystemDao<?, ?> theSystemDao, IResourceReindexingSvc theResourceReindexingSvc, ISearchCoordinatorSvc theSearchCoordinatorSvc, ISearchParamRegistry theSearchParamRegistry, IBulkDataExportJobSchedulingHelper theBulkDataJobActivator) {
 		theSearchCoordinatorSvc.cancelAllActiveSearches();
 		theResourceReindexingSvc.cancelAndPurgeAllJobs();
 		theBulkDataJobActivator.cancelAndPurgeAllJobs();

@@ -303,6 +309,7 @@ public abstract class BaseJpaTest extends BaseTest {
 
 		for (int count = 0; ; count++) {
 			try {
+				ourLog.info("Calling Expunge count {}", count);
 				theSystemDao.expunge(new ExpungeOptions().setExpungeEverything(true), new SystemRequestDetails());
 				break;
 			} catch (Exception e) {

@@ -595,9 +602,9 @@ public abstract class BaseJpaTest extends BaseTest {
 	}
 
 	/**
-	 * Sleep until at least 1 ms has elapsed
+	 * Sleep until the clock time changes
 	 */
-	public void sleepUntilTimeChanges() {
+	public void sleepUntilTimeChange() {
 		StopWatch sw = new StopWatch();
 		await().until(() -> sw.getMillis() > 0);
 	}

@@ -24,6 +24,7 @@ import ca.uhn.fhir.batch2.api.IJobMaintenanceService;
 import ca.uhn.fhir.batch2.api.IJobPersistence;
 import ca.uhn.fhir.batch2.model.JobInstance;
 import ca.uhn.fhir.batch2.model.StatusEnum;
+import ca.uhn.fhir.batch2.models.JobInstanceFetchRequest;
 import ca.uhn.fhir.jpa.batch.models.Batch2JobStartResponse;
 import org.awaitility.Awaitility;
 import org.awaitility.core.ConditionTimeoutException;

@@ -32,10 +33,13 @@ import org.slf4j.LoggerFactory;
 import org.springframework.transaction.support.TransactionSynchronizationManager;
 import org.thymeleaf.util.ArrayUtils;
 
+import java.time.Duration;
+import java.time.temporal.ChronoUnit;
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
 import java.util.stream.Collectors;
 
 import static org.awaitility.Awaitility.await;

@@ -89,10 +93,12 @@ public class Batch2JobHelper {
 	public JobInstance awaitJobHasStatus(String theInstanceId, int theSecondsToWait, StatusEnum... theExpectedStatus) {
 		assert !TransactionSynchronizationManager.isActualTransactionActive();
 
+		AtomicInteger checkCount = new AtomicInteger();
 		try {
 			await()
 				.atMost(theSecondsToWait, TimeUnit.SECONDS)
 				.until(() -> {
+					checkCount.getAndIncrement();
 					boolean inFinalStatus = false;
 					if (ArrayUtils.contains(theExpectedStatus, StatusEnum.COMPLETED) && !ArrayUtils.contains(theExpectedStatus, StatusEnum.FAILED)) {
 						inFinalStatus = hasStatus(theInstanceId, StatusEnum.FAILED);

@@ -113,7 +119,9 @@ public class Batch2JobHelper {
 				.map(t -> t.getInstanceId() + " " + t.getJobDefinitionId() + "/" + t.getStatus().name())
 				.collect(Collectors.joining("\n"));
 			String currentStatus = myJobCoordinator.getInstance(theInstanceId).getStatus().name();
-			fail("Job " + theInstanceId + " still has status " + currentStatus + " - All statuses:\n" + statuses);
+			fail("Job " + theInstanceId + " still has status " + currentStatus
+				+ " after " + checkCount.get() + " checks in " + theSecondsToWait + " seconds."
+				+ " - All statuses:\n" + statuses);
 		}
 		return myJobCoordinator.getInstance(theInstanceId);
 	}

@@ -162,8 +170,39 @@ public class Batch2JobHelper {
 		return awaitJobHasStatus(theInstanceId, StatusEnum.ERRORED, StatusEnum.FAILED);
 	}
 
+	public void awaitJobHasStatusWithForcedMaintenanceRuns(String theInstanceId, StatusEnum theStatusEnum) {
+		AtomicInteger counter = new AtomicInteger();
+		try {
+			await()
+				.atMost(Duration.of(10, ChronoUnit.SECONDS))
+				.until(() -> {
+					counter.getAndIncrement();
+					forceRunMaintenancePass();
+					return hasStatus(theInstanceId, theStatusEnum);
+				});
+		} catch (ConditionTimeoutException ex) {
+			StatusEnum status = getStatus(theInstanceId);
+			String msg = String.format(
+				"Job %s has state %s after 10s timeout and %d checks",
+				theInstanceId,
+				status.name(),
+				counter.get()
+			);
+			fail(msg);
+		}
+	}
+
 	public void awaitJobInProgress(String theInstanceId) {
-		await().until(() -> checkStatusWithMaintenancePass(theInstanceId, StatusEnum.IN_PROGRESS));
+		try {
+			await()
+				.atMost(Duration.of(10, ChronoUnit.SECONDS))
+				.until(() -> checkStatusWithMaintenancePass(theInstanceId, StatusEnum.IN_PROGRESS));
+		} catch (ConditionTimeoutException ex) {
+			StatusEnum statusEnum = getStatus(theInstanceId);
+			String msg = String.format("Job %s still has status %s after 10 seconds.",
+				theInstanceId,
+				statusEnum.name());
+			fail(msg);
+		}
 	}
 
 	public void assertNotFastTracking(String theInstanceId) {

@@ -175,7 +214,21 @@ public class Batch2JobHelper {
 	}
 
 	public void awaitGatedStepId(String theExpectedGatedStepId, String theInstanceId) {
-		await().until(() -> theExpectedGatedStepId.equals(myJobCoordinator.getInstance(theInstanceId).getCurrentGatedStepId()));
+		try {
+			await().until(() -> {
+				String currentGatedStepId = myJobCoordinator.getInstance(theInstanceId).getCurrentGatedStepId();
+				return theExpectedGatedStepId.equals(currentGatedStepId);
+			});
+		} catch (ConditionTimeoutException ex) {
+			JobInstance instance = myJobCoordinator.getInstance(theInstanceId);
+			String msg = String.format("Instance %s of Job %s never got to step %s. Current step %s, current status %s.",
+				theInstanceId,
+				instance.getJobDefinitionId(),
+				theExpectedGatedStepId,
+				instance.getCurrentGatedStepId(),
+				instance.getStatus().name());
+			fail(msg);
+		}
 	}
 
 	public long getCombinedRecordsProcessed(String theInstanceId) {

@@ -223,6 +276,33 @@ public class Batch2JobHelper {
 		awaitNoJobsRunning(false);
 	}
 
+	public boolean hasRunningJobs() {
+		HashMap<String, String> map = new HashMap<>();
+		List<JobInstance> jobs = myJobCoordinator.getInstances(1000, 1);
+		// "All Jobs" assumes at least one job exists
+		if (jobs.isEmpty()) {
+			return false;
+		}
+
+		for (JobInstance job : jobs) {
+			if (job.getStatus().isIncomplete()) {
+				map.put(job.getInstanceId(), job.getJobDefinitionId() + " : " + job.getStatus().name());
+			}
+		}
+
+		if (!map.isEmpty()) {
+			ourLog.error(
+				"Found Running Jobs "
+				+ map.keySet().stream()
+					.map(k -> k + " : " + map.get(k))
+					.collect(Collectors.joining("\n"))
+			);
+			return true;
+		}
+		return false;
+	}
+
 	public void awaitNoJobsRunning(boolean theExpectAtLeastOneJobToExist) {
 		HashMap<String, String> map = new HashMap<>();
 		Awaitility.await().atMost(10, TimeUnit.SECONDS)

@@ -255,6 +335,10 @@ public class Batch2JobHelper {
 		myJobMaintenanceService.runMaintenancePass();
 	}
 
+	public void enableMaintenanceRunner(boolean theEnabled) {
+		myJobMaintenanceService.enableMaintenancePass(theEnabled);
+	}
+
 	/**
 	 * Forces a run of the maintenance pass without waiting for
 	 * the semaphore to release
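
Together these helpers let a test drive a job deterministically instead of waiting on the scheduled maintenance cadence. A hedged sketch of the intended flow in a test; it assumes Batch2JobStartResponse.getInstanceId() and a string overload of awaitJobCompletion(), which are illustrative here rather than shown in this diff:

	// Hedged usage sketch (not part of the PR): step a gated job through its states.
	myBatch2JobHelper.enableMaintenanceRunner(false);  // take the scheduler out of the loop

	Batch2JobStartResponse response = myJobCoordinator.startInstance(new SystemRequestDetails(), request);
	String instanceId = response.getInstanceId();

	myBatch2JobHelper.awaitJobHasStatusWithForcedMaintenanceRuns(instanceId, StatusEnum.IN_PROGRESS);
	myBatch2JobHelper.awaitGatedStepId(TestJobDefinitionUtils.LAST_STEP_ID, instanceId);
	myBatch2JobHelper.awaitJobCompletion(instanceId);

	myBatch2JobHelper.enableMaintenanceRunner(true);   // restore the runner for later tests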

@@ -0,0 +1,23 @@
+package ca.uhn.fhir.jpa.test.config;
+
+import ca.uhn.fhir.batch2.api.IJobMaintenanceService;
+import ca.uhn.fhir.batch2.maintenance.JobMaintenanceServiceImpl;
+import jakarta.annotation.PostConstruct;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.context.annotation.Configuration;
+
+/**
+ * A fast scheduler to use for Batch2 job Integration Tests.
+ * This scheduler will run every 200ms (instead of the default 1min)
+ * so that our ITs can complete in a sane amount of time.
+ */
+@Configuration
+public class Batch2FastSchedulerConfig {
+	@Autowired
+	IJobMaintenanceService myJobMaintenanceService;
+
+	@PostConstruct
+	void fastScheduler() {
+		((JobMaintenanceServiceImpl)myJobMaintenanceService).setScheduledJobFrequencyMillis(200);
+	}
+}
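
A test opts in to the fast cadence by adding this configuration class to its Spring test context. A minimal sketch; the test class name is illustrative, and @ContextConfiguration is the standard Spring Test annotation rather than something introduced by this PR:

	import org.springframework.test.context.ContextConfiguration;

	// Hedged sketch (not part of the PR): pull the 200ms maintenance cadence
	// into a batch2 integration test's application context.
	@ContextConfiguration(classes = {Batch2FastSchedulerConfig.class})
	public class MyBatch2IT extends BaseJpaR4Test {
		// test methods can now await job state changes without one-minute stalls
	}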

@@ -57,11 +57,7 @@ import java.sql.Connection;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collections;
-import java.util.Deque;
-import java.util.HashMap;
-import java.util.Iterator;
 import java.util.LinkedHashMap;
-import java.util.LinkedList;
 import java.util.Map;
 import java.util.Properties;
 import java.util.concurrent.TimeUnit;

@@ -85,6 +81,7 @@ import static org.junit.jupiter.api.Assertions.fail;
 public class TestR4Config {
 
 	private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(TestR4Config.class);
+
 	public static Integer ourMaxThreads;
 	private final AtomicInteger myBorrowedConnectionCount = new AtomicInteger(0);
 	private final AtomicInteger myReturnedConnectionCount = new AtomicInteger(0);

@@ -96,7 +93,7 @@ public class TestR4Config {
 	 * starvation
 	 */
 	if (ourMaxThreads == null) {
-		ourMaxThreads = (int) (Math.random() * 6.0) + 3;
+		ourMaxThreads = (int) (Math.random() * 6.0) + 4;
 
 		if (HapiTestSystemProperties.isSingleDbConnectionEnabled()) {
 			ourMaxThreads = 1;

@@ -108,7 +105,7 @@ public class TestR4Config {
 		ourLog.warn("ourMaxThreads={}", ourMaxThreads);
 	}
 
-	private Map<Connection, Exception> myConnectionRequestStackTraces = Collections.synchronizedMap(new LinkedHashMap<>());
+	private final Map<Connection, Exception> myConnectionRequestStackTraces = Collections.synchronizedMap(new LinkedHashMap<>());
 
 	@Autowired
 	TestHSearchAddInConfig.IHSearchConfigurer hibernateSearchConfigurer;

@@ -300,5 +297,4 @@ public class TestR4Config {
 	public static int getMaxThreads() {
 		return ourMaxThreads;
 	}
-
 }

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -7,7 +7,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -7,7 +7,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
 

@@ -7,7 +7,7 @@
 	<parent>
 		<artifactId>hapi-fhir-serviceloaders</artifactId>
 		<groupId>ca.uhn.hapi.fhir</groupId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -7,7 +7,7 @@
 	<parent>
 		<artifactId>hapi-fhir-serviceloaders</artifactId>
 		<groupId>ca.uhn.hapi.fhir</groupId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -21,7 +21,7 @@
 		<dependency>
 			<groupId>ca.uhn.hapi.fhir</groupId>
 			<artifactId>hapi-fhir-caching-api</artifactId>
-			<version>7.3.0-SNAPSHOT</version>
+			<version>7.3.1-SNAPSHOT</version>
 
 		</dependency>
 		<dependency>

@@ -7,7 +7,7 @@
 	<parent>
 		<artifactId>hapi-fhir-serviceloaders</artifactId>
 		<groupId>ca.uhn.hapi.fhir</groupId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -7,7 +7,7 @@
 	<parent>
 		<artifactId>hapi-fhir</artifactId>
 		<groupId>ca.uhn.hapi.fhir</groupId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../../pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<artifactId>hapi-deployable-pom</artifactId>
 		<groupId>ca.uhn.hapi.fhir</groupId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
 

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir-spring-boot-samples</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 	</parent>
 
 	<artifactId>hapi-fhir-spring-boot-sample-client-apache</artifactId>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir-spring-boot-samples</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 	</parent>
 

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir-spring-boot-samples</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 	</parent>
 

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir-spring-boot</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 	</parent>
 

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-fhir</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>

@@ -5,7 +5,7 @@
 	<parent>
 		<groupId>ca.uhn.hapi.fhir</groupId>
 		<artifactId>hapi-deployable-pom</artifactId>
-		<version>7.3.0-SNAPSHOT</version>
+		<version>7.3.1-SNAPSHOT</version>
 
 		<relativePath>../hapi-deployable-pom/pom.xml</relativePath>
 	</parent>
Some files were not shown because too many files have changed in this diff.