rel_7_4 Merge-back (#6212)

* use SearchParameter validator in package installer (#6112)

* Ensure ' ' is treated as '+' in timezones with offsets. (#6115)

* Use lockless mode when adding index on Azure Sql server (#6100)

* Use lockless mode when adding index on Azure Sql server

Use try-catch for Online add-index on Sql Server.
This avoids having to map out the entire matrix of Sql Server product names and ONLINE index support.
Warnings in docs, and cleanups
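
The fallback logic is roughly the following (a minimal JDBC sketch with illustrative names, not the actual HAPI FHIR migration-task API): try the ONLINE = ON form first, and if this SQL Server edition rejects it, retry with the blocking form.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class OnlineIndexAddSketch {

        // Try the lockless (online) build first; editions without ONLINE
        // index support raise an error, and we retry with the plain form
        // instead of mapping out the product-name/edition matrix up front.
        static void addIndex(Connection theConnection, String theTable, String theColumn) throws SQLException {
            String indexName = "IDX_" + theTable + "_" + theColumn;
            try (Statement statement = theConnection.createStatement()) {
                try {
                    statement.execute("CREATE INDEX " + indexName + " ON " + theTable
                            + " (" + theColumn + ") WITH (ONLINE = ON)");
                } catch (SQLException e) {
                    // Online build not supported here: fall back to offline
                    statement.execute("CREATE INDEX " + indexName + " ON " + theTable
                            + " (" + theColumn + ")");
                }
            }
        }
    }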

* Make consent service not call willSeeResource on children if parent resource is AUTHORIZED or REJECT (#6127)
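
A minimal sketch of that consent behaviour (real HAPI consent interfaces, but hypothetical glue code): once the parent resource's verdict is final, its children are never offered to willSeeResource.

    import ca.uhn.fhir.rest.api.server.RequestDetails;
    import ca.uhn.fhir.rest.server.interceptor.consent.ConsentOperationStatusEnum;
    import ca.uhn.fhir.rest.server.interceptor.consent.ConsentOutcome;
    import ca.uhn.fhir.rest.server.interceptor.consent.IConsentContextServices;
    import ca.uhn.fhir.rest.server.interceptor.consent.IConsentService;
    import org.hl7.fhir.instance.model.api.IBaseResource;

    import java.util.List;

    public class ConsentChildSkipSketch {

        // If the parent's verdict is already final (AUTHORIZED or REJECT),
        // its children inherit that verdict and willSeeResource is skipped.
        static void screenChildren(
                IConsentService theConsentService,
                RequestDetails theRequest,
                ConsentOutcome theParentOutcome,
                List<IBaseResource> theChildren,
                IConsentContextServices theContext) {
            ConsentOperationStatusEnum status = theParentOutcome.getStatus();
            if (status == ConsentOperationStatusEnum.AUTHORIZED
                    || status == ConsentOperationStatusEnum.REJECT) {
                return;
            }
            for (IBaseResource child : theChildren) {
                theConsentService.willSeeResource(theRequest, child, theContext);
            }
        }
    }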

* fix hfj search migration task (#6143)

* fix migration task

* changelog

* changelog

* code review

* spotless

---------

Co-authored-by: jdar <justin.dar@smiledigitalhealth.com>

* Enhance migration for MSSQL to change the collation for HFJ_RESOURCE.FHIR_ID to case sensitive (#6135)

* MSSQL:  Migrate HFJ_RESOURCE.FHIR_ID to new collation:  SQL_Latin1_General_CP1_CS_AS

* Spotless.

* Enhance test.  Fix case in ResourceSearchView to defend against future migration to case insensitive collation.

* Remove TODOs.  Add comment to ResourceSearchView explaining why all columns are uppercase.  Changelog.

* Update hapi-fhir-docs/src/main/resources/ca/uhn/hapi/fhir/changelog/7_4_0/6146-mssql-hfj-resource-fhir-id-colllation.yaml

Code reviewer suggestion

Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>

* Code review fixes: Make changes conditional on the collation including _CI_; otherwise, leave it alone.
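
In SQL Server terms, the conditional migration looks roughly like this (a JDBC sketch; the real migration task also has to deal with nullability and any indexes that depend on FHIR_ID, and varchar(64) is assumed here):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class FhirIdCollationMigrationSketch {

        static void migrateIfCaseInsensitive(Connection theConnection) throws SQLException {
            String collation = null;
            try (Statement statement = theConnection.createStatement();
                    ResultSet resultSet = statement.executeQuery(
                            "SELECT collation_name FROM sys.columns "
                                    + "WHERE object_id = OBJECT_ID('HFJ_RESOURCE') AND name = 'FHIR_ID'")) {
                if (resultSet.next()) {
                    collation = resultSet.getString(1);
                }
            }
            // Only touch case-insensitive collations; a custom or already
            // case-sensitive collation is left alone
            if (collation != null && collation.contains("_CI_")) {
                try (Statement statement = theConnection.createStatement()) {
                    statement.execute("ALTER TABLE HFJ_RESOURCE ALTER COLUMN FHIR_ID "
                            + "varchar(64) COLLATE SQL_Latin1_General_CP1_CS_AS");
                }
            }
        }
    }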

---------

Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>

* Common API for FHIR Data Access (#6141)

* Add initial interface for common FHIR API

* Fix formatting

* Update javadocs

* Address code review comments

* Add path value to _id search parameter and other missing search param… (#6128)

* Add path value to _id search parameter and other missing search parameters to IAnyResource.

* Adjust tests and remove now unnecessary addition of meta parameters which are now provided by IAnyResource

* Revert unneeded change

* _security param is not token but uri

* Add tests for new defined resource-level standard parameters

* Adjust test

---------

Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>

* update to online (#6157)

* SEARCH_UUID should be non-null (#6165)

Avoid using constants in migrations because it creates false history.
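
For example (illustrative MSSQL-flavored DDL, hypothetical helper): inlining the literals freezes the migration as it shipped, whereas referencing the entity's column-name or length constants would let a future rename or resize silently rewrite history.

    public class SearchUuidMigrationSketch {

        // The table/column names and width are spelled out as literals on
        // purpose: a migration is a historical record, and a shared constant
        // that later changes would rewrite what this migration appears to
        // have done. The width shown is illustrative.
        static String searchUuidNotNullDdl() {
            return "ALTER TABLE HFJ_SEARCH ALTER COLUMN SEARCH_UUID varchar(48) NOT NULL";
        }
    }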

* Handle 400 and 404 codes returned by remote terminology operation. (#6151)

* Handle 400 and 404 codes returned by remote terminology operation.

* Some simplification

* Adjust changelog

* Add a comment to explain alternate solution which can be reused.
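
The handling amounts to treating those statuses as an answer rather than a failure. A sketch using the public client API (hypothetical wiring; the real change lives in RemoteTerminologyServiceValidationSupport):

    import ca.uhn.fhir.context.support.IValidationSupport.LookupCodeResult;
    import ca.uhn.fhir.rest.client.api.IGenericClient;
    import ca.uhn.fhir.rest.server.exceptions.BaseServerResponseException;
    import org.hl7.fhir.r4.model.CodeSystem;
    import org.hl7.fhir.r4.model.CodeType;
    import org.hl7.fhir.r4.model.Parameters;
    import org.hl7.fhir.r4.model.UriType;

    public class RemoteLookupSketch {

        static LookupCodeResult lookupCode(IGenericClient theClient, String theSystem, String theCode) {
            LookupCodeResult result = new LookupCodeResult();
            result.setSearchedForSystem(theSystem);
            result.setSearchedForCode(theCode);
            try {
                Parameters response = theClient.operation()
                        .onType(CodeSystem.class)
                        .named("$lookup")
                        .withParameter(Parameters.class, "system", new UriType(theSystem))
                        .andParameter("code", new CodeType(theCode))
                        .execute();
                result.setFound(response != null);
            } catch (BaseServerResponseException e) {
                // A 400/404 is an answer ("this server does not know the
                // code"), not a server failure: surface it as an error message
                if (e.getStatusCode() == 400 || e.getStatusCode() == 404) {
                    result.setFound(false);
                    result.setErrorMessage("Unknown code \"" + theSystem + "#" + theCode
                            + "\". The Remote Terminology server returned HTTP " + e.getStatusCode());
                } else {
                    throw e;
                }
            }
            return result;
        }
    }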

* fix concepts with no display element for $apply-codesystem-delta-add and $apply-codesystem-delta-remove (#6164)

* allow transaction with update conditional urls (#6155)

* Revert "Add path value to _id search parameter and other missing search param…" (#6171)

This reverts commit 2275eba1a0.

* 7.2.2 mergeback (#6160)

* Enhance RuleBuilder code to support multiple instances (#5852)

* Overhaul bulk export permissions.

* Overhaul bulk export permissions.

* Small tweak to rule builder.

* Cleanup validation.

* Cleanup validation.

* Code review feedback.

* Postgres terminology service hard coded column names migration (#5866)

* updating parent pids column name

* updating name of the fullTextField Search

* updating name of the fullTextField Search

* fixing typo.

* failing test.

* - Moving FullTextField annotation from getter method and adding it to the newly added VC property of the entity;

- reverting the name of the FullTextField entity to its previous name of 'myParentPids';

- reverting the name of the lucene index to search on in the terminology service.

- updating the changelog;

* making spotless happy

---------

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* 5879 back porting fix for issue 5877 (attempting to update a tokenparam with a value greater than 200 characters raises an sqlexception) to release rel_7_2 (#5881)

* initial failing test.

* solution

* adding changelog

* spotless

* moving changelog from 7_4_0 to 7_2_0 and deleting 7_4_0 folder.

---------

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* Expose BaseRequestPartitionHelperSvc validateAndNormalize methods (#5811)

* Expose BaseRequestPartitionHelperSvc validate and normalize methods

* Compilation errors

* change mock test to jpa test

* change mock test to jpa test

* validateAndNormalizePartitionIds

* validateAndNormalizePartitionNames

* validateAndNormalizePartitionIds validation + bug fix

* validateAndNormalizePartitionNames validation

* fix test

* version bump

* Ensure a non-numeric FHIR ID doesn't result in a NumberFormatException when processing survivorship rules (#5883)

* Add failing test as well as commented out potential solution.

* Fix for NumberFormatException.

* Add conditional test for survivorship rules.

* Spotless.

* Add changelog.

* Code review feedback.

* updating documentation (#5889)

* Ensure temp file ends with "." and then suffix. (#5894)

* bugfix to https://github.com/hapifhir/hapi-fhir-jpaserver-starter/issues/675 (#5892)

Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>

* Enhance mdm interceptor (#5899)

* Add MDM Transaction Context for further downstream processing giving interceptors a better chance of figuring out what happened.

* Added javadoc

* Changelog

* spotless

---------

Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>

* Fix BaseHapiFhirResourceDao $meta method to use HapiTransactionService instead of @Transaction (#5896)

* Try making ResourceTable.myTags EAGER instead of LAZY and see if it breaks anything.

* Try making ResourceTable.myTags EAGER instead of LAZY and see if it breaks anything.

* Ensure BaseHapiFhirResourceDao#metaGetOperation uses HapiTransactionService instead of @Transactional in order to resolve megascale $meta bug.

* Add changelog.

* Update hapi-fhir-docs/src/main/resources/ca/uhn/hapi/fhir/changelog/7_2_0/5898-ld-megascale-meta-operation-fails-hapi-0389.yaml

Commit code reviewer suggestion.

Co-authored-by: Tadgh <garygrantgraham@gmail.com>

---------

Co-authored-by: Tadgh <garygrantgraham@gmail.com>

* Fix query chained on sort bug where we over-filter results (#5903)

* Failing test.

* Ensure test cleanup doesn't fail by deleting Patients before Practitioners.

* Implement fix.

* Spotless.

* Clean up unit test and add changelog.  Fix unit test.

* Fix changelog file.

* Apply suggestions from code review

Apply code review suggestions.

Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>

* Spotless

---------

Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>

* cve fix (#5906)

Co-authored-by: Long Ma <long@smilecdr.com>

* Fixing issues with postgres LOB migration. (#5895)

* Fixing issues with postgres LOB migration.

* addressing code review comments for audit/transaction logs.

* test and implementation for BinaryStorageEntity migration post code review.

* test and implementation for BinaryStorageEntity migration post code review.

* test and implementation for TermConcept
 migration post code review.

* applying spotless

* test and implementation for TermConceptProperty
 migration post code review.

* test and implementation for TermValueSetConcept
 migration post code review.

* fixing migration version

* fixing migration task

* changelog

* fixing changelog

* Minor renames

* addressing comments and suggestions from second code review.

* passing tests

* fixing more tests

---------

Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: Tadgh <garygrantgraham@gmail.com>

* 6051 bulk export security errors (#5915)

* refactor bulk export rule, add concept of appliesToAllPatients, fix tests

* spotless

* Changelog, tests

* more tests

* refactor style checks

---------

Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: dotasek <david.otasek@smilecdr.com>
Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>
Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>
Co-authored-by: longma1 <32119004+longma1@users.noreply.github.com>
Co-authored-by: Long Ma <long@smilecdr.com>

* Convert a few nulls to aggressive denies

* Change chain sort syntax for MS SQL (#5917)

* Change sort type on chains

* Change sort type on chains

* Test for MS SQL

* Comments

* Version bump

* Updating version to: 7.2.1 post release.

* Fix queries with chained sort with Lucene by checking supported SortSpecs (#5958)

* First commit with very rough solution.

* Solidify solutions for both requirements.  Add new tests.  Enhance others.

* Spotless.

* Add new chained sort spec algorithm.  Add new Msg.codes.  Finalize tests.  Update docs.  Add changelog.

* pom remove the snapshot

* Updating version to: 7.2.2 post release.

* cherry-picked pr 6051

* changelog fix

* cherry-picked 6027

* docs and changelog

* merge fix for issue with infinite cache refresh loop

* Use lockless mode when adding index on Azure Sql server (#6100) (#6129)

* Use lockless mode when adding index on Azure Sql server

Use try-catch for Online add-index on Sql Server.
This avoids having to map out the entire matrix of Sql Server product names and ONLINE index support.
Warnings in docs, and cleanups

* added fix for 6133

* failing Test

* Add fix

* spotless

* Remove useless file

* Fix cleaner

* cleanup

* Remove dead class

* Changelog

* test description

* Add test. Fix broken logic.

* fix quantity search parameter test to pass

* reverted test testDirectPathWholeResourceNotIndexedWorks in FhirResourceDaoR4SearchWithElasticSearchIT

* spotless

* cleanup mistake during merge

* added missing imports

* fix more mergeback oopsies

* bump to 7.3.13-snapshot

---------

Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: dotasek <david.otasek@smilecdr.com>
Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>
Co-authored-by: Tadgh <garygrantgraham@gmail.com>
Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>

* Patient validate operation with remote terminology service enabled returns 400 bad request (#6124)

* Patient $validate operation with Remote Terminology Service enabled returns 400 Bad Request - failing test

* Patient $validate operation with Remote Terminology Service enabled returns 400 Bad Request - implementation

* - Changing method accessibility from default to public to allow method overriding. (#6172)

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* applying Taha Attari's fix on branch merging to rel_7_4 (#6177)

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* Automated Migration Testing (HAPI-FHIR) V7_4_0 (#6170)

* Automated Migration Testing (HAPI-FHIR) - updated test migration scripts for 7_4_0

* Automated Migration Testing (HAPI-FHIR) - updated test migration scripts for 7_2_0

* Provide the target resource partitionId and partitionDate in the resourceLink (#6149)

* initial POC.

* addressing comments from first code review

* Adding tests

* adding changelog and spotless

* fixing tests

* spotless

---------

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* applying patch (#6190)

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* cve for 08 release (#6197)

Co-authored-by: Long Ma <long@smilecdr.com>

* Search param path missing for _id param (#6175)

* Add path to _id search param and definitions for _lastUpdated, _tag, _profile and _security

* Add tests and changelog

* Increase snapshot version

* Irrelevant change to force new build

---------

Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>

* Reverting to core fhir-test-cases 1.1.14; (#6194)

re-enabling FhirPatchCoreTest

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* Fix $reindex job with custom partition interceptor based on resource type. Update reindex job to always run with urls. (#6185)

* Refactor logic to bring together partition-related logic in batch2 jobs using IJobPartitionProvider. Update logic such that a reindex job without urls will attempt to create urls for all supported resource types (see the sketch after this list).

* Small changes and fix of pipeline error.

* Small change to enable mdm-submit to use PartitionedUrl in the job parameters

* Revert logback change. Fix dependency version generating errors in the pipeline.

* Spotless fix. Add test dependency back without version.

* Upgrade test dependency to another version

* Add javadoc for PartitionedUrl. Other small fixes and refactoring in tests.

* Spotless fix.

* Change to JobParameters to fix some of the tests.

* Small changes for code review in test

* Address code review comments.

* Revert change from bad merge.

* Address remaining code review comments
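
The url expansion itself is simple; a sketch of the idea (the same `t + "?"` mapping appears in the BaseHapiFhirResourceDao diff further down):

    import java.util.List;
    import java.util.stream.Collectors;

    public class ReindexUrlSketch {

        // Expand a url-less $reindex request to one "Type?" url per
        // supported resource type, so every batch2 chunk is tied to a
        // concrete url (and therefore to a resolvable partition).
        static List<String> urlsForAllTypes(List<String> theSupportedResourceTypes) {
            return theSupportedResourceTypes.stream()
                    .map(type -> type + "?")
                    .collect(Collectors.toList());
        }
    }

Called with ["Patient", "Observation"], this yields ["Patient?", "Observation?"].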

* 6188 subscription not marked as a cross partition subscription matches operation on resources in other partitions (#6191)

* initial failing test

* WIP

* fixing/adding tests

* added changelog

* spotless

* fixing tests

* Cleaning up tests

* addressing comments from first code review.

* no-op to get pipelines going

---------

Co-authored-by: peartree <etienne.poirier@smilecdr.com>

* Resolve 6173 - Log unhandled Exceptions in RestfulServer (#6176) (#6205)

* 6173 - Log unhandled Exceptions in RestfulServer.

* Use placeholder for failed streams.

* Starting test for server handling.

* Got test working.

* Fixed use of synchronized keyword.

* Applied mvn spotless.

---------

Co-authored-by: Kevin Dougan <72025369+KevinDougan@users.noreply.github.com>
Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>

* Partition aware transactions (#6167)

* Partition aware transactions

* Address review comments

* Test fixes

* Remove dead issue field

* Test fixes

---------

Co-authored-by: Tadgh <garygrantgraham@gmail.com>
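
A sketch of the single-partition rule this adds for FHIR transactions (hypothetical helper; the error text mirrors the new multiplePartitionAccesses message added to the messages file in this commit):

    import ca.uhn.fhir.interceptor.model.RequestPartitionId;

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class TransactionPartitionCheckSketch {

        // Resolve the single write partition for a transaction bundle,
        // failing when its entries would span conflicting partitions.
        static RequestPartitionId requireSinglePartition(List<RequestPartitionId> theEntryPartitions) {
            Set<RequestPartitionId> distinct = new HashSet<>(theEntryPartitions);
            if (distinct.size() > 1) {
                throw new IllegalArgumentException("Can not process transaction with " + theEntryPartitions.size()
                        + " entries: Entries require access to multiple/conflicting partitions");
            }
            return distinct.isEmpty() ? RequestPartitionId.allPartitions() : distinct.iterator().next();
        }
    }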

* Add license header

* rel_7_4 mergeback

---------

Co-authored-by: Emre Dincturk <74370953+mrdnctrk@users.noreply.github.com>
Co-authored-by: Luke deGruchy <luke.degruchy@smilecdr.com>
Co-authored-by: Michael Buckley <michaelabuckley@gmail.com>
Co-authored-by: jdar8 <69840459+jdar8@users.noreply.github.com>
Co-authored-by: jdar <justin.dar@smiledigitalhealth.com>
Co-authored-by: Luke deGruchy <luke.degruchy@smiledigitalhealth.com>
Co-authored-by: JP <jonathan.i.percival@gmail.com>
Co-authored-by: jmarchionatto <60409882+jmarchionatto@users.noreply.github.com>
Co-authored-by: juan.marchionatto <juan.marchionatto@smilecdr.com>
Co-authored-by: TipzCM <leif.stawnyczy@gmail.com>
Co-authored-by: Martha Mitran <marthamitran@gmail.com>
Co-authored-by: Etienne Poirier <33007955+epeartree@users.noreply.github.com>
Co-authored-by: longma1 <32119004+longma1@users.noreply.github.com>
Co-authored-by: peartree <etienne.poirier@smilecdr.com>
Co-authored-by: Nathan Doef <n.doef@protonmail.com>
Co-authored-by: dotasek <david.otasek@smilecdr.com>
Co-authored-by: Jens Kristian Villadsen <jenskristianvilladsen@gmail.com>
Co-authored-by: Tadgh <garygrantgraham@gmail.com>
Co-authored-by: Long Ma <long@smilecdr.com>
Co-authored-by: markiantorno <markiantorno@gmail.com>
Co-authored-by: Martha Mitran <martha.mitran@smiledigitalhealth.com>
Co-authored-by: Kevin Dougan <72025369+KevinDougan@users.noreply.github.com>
Co-authored-by: James Agnew <jamesagnew@gmail.com>
volodymyr-korzh 2024-08-14 15:03:35 -06:00, committed by GitHub
parent 4368e33fda
commit 568c6d20db
158 changed files with 4241 additions and 1510 deletions

@@ -10,3 +10,4 @@ distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1071,8 +1071,9 @@ public interface IValidationSupport {
}
}
public void setErrorMessage(String theErrorMessage) {
public LookupCodeResult setErrorMessage(String theErrorMessage) {
myErrorMessage = theErrorMessage;
return this;
}
public String getErrorMessage() {

@@ -40,6 +40,7 @@ import java.util.Collections;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.apache.commons.lang3.ObjectUtils.defaultIfNull;
@@ -98,6 +99,28 @@ public class RequestPartitionId implements IModelJson {
myAllPartitions = true;
}
/**
* Creates a new RequestPartitionId which includes all partition IDs from
* this {@link RequestPartitionId} but also includes all IDs from the given
* {@link RequestPartitionId}. Any duplicates are only included once, and
* partition names and dates are ignored and not returned. This {@link RequestPartitionId}
* and {@literal theOther} are not modified.
*
* @since 7.4.0
*/
public RequestPartitionId mergeIds(RequestPartitionId theOther) {
if (isAllPartitions() || theOther.isAllPartitions()) {
return RequestPartitionId.allPartitions();
}
List<Integer> thisPartitionIds = getPartitionIds();
List<Integer> otherPartitionIds = theOther.getPartitionIds();
List<Integer> newPartitionIds = Stream.concat(thisPartitionIds.stream(), otherPartitionIds.stream())
.distinct()
.collect(Collectors.toList());
return RequestPartitionId.fromPartitionIds(newPartitionIds);
}
public static RequestPartitionId fromJson(String theJson) throws JsonProcessingException {
return ourObjectMapper.readValue(theJson, RequestPartitionId.class);
}
@@ -332,6 +355,14 @@ public class RequestPartitionId implements IModelJson {
return new RequestPartitionId(thePartitionNames, thePartitionIds, thePartitionDate);
}
public static boolean isDefaultPartition(@Nullable RequestPartitionId thePartitionId) {
if (thePartitionId == null) {
return false;
}
return thePartitionId.isDefaultPartition();
}
/**
* Create a string representation suitable for use as a cache key. Null aware.
* <p>

@@ -1,3 +1,22 @@
/*-
* #%L
* HAPI FHIR - Core Library
* %%
* Copyright (C) 2014 - 2024 Smile CDR, Inc.
* %%
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* #L%
*/
package ca.uhn.fhir.repository;
import ca.uhn.fhir.context.FhirContext;

@@ -101,7 +101,7 @@ public class FhirTerser {
return newList;
}
private ExtensionDt createEmptyExtensionDt(IBaseExtension theBaseExtension, String theUrl) {
private ExtensionDt createEmptyExtensionDt(IBaseExtension<?, ?> theBaseExtension, String theUrl) {
return createEmptyExtensionDt(theBaseExtension, false, theUrl);
}
@@ -122,13 +122,13 @@
return theSupportsUndeclaredExtensions.addUndeclaredExtension(theIsModifier, theUrl);
}
private IBaseExtension createEmptyExtension(IBaseHasExtensions theBaseHasExtensions, String theUrl) {
return (IBaseExtension) theBaseHasExtensions.addExtension().setUrl(theUrl);
private IBaseExtension<?, ?> createEmptyExtension(IBaseHasExtensions theBaseHasExtensions, String theUrl) {
return (IBaseExtension<?, ?>) theBaseHasExtensions.addExtension().setUrl(theUrl);
}
private IBaseExtension createEmptyModifierExtension(
private IBaseExtension<?, ?> createEmptyModifierExtension(
IBaseHasModifierExtensions theBaseHasModifierExtensions, String theUrl) {
return (IBaseExtension)
return (IBaseExtension<?, ?>)
theBaseHasModifierExtensions.addModifierExtension().setUrl(theUrl);
}
@@ -407,7 +407,7 @@
public String getSinglePrimitiveValueOrNull(IBase theTarget, String thePath) {
return getSingleValue(theTarget, thePath, IPrimitiveType.class)
.map(t -> t.getValueAsString())
.map(IPrimitiveType::getValueAsString)
.orElse(null);
}
@@ -487,7 +487,7 @@
} else {
// DSTU3+
final String extensionUrlForLambda = extensionUrl;
List<IBaseExtension> extensions = Collections.emptyList();
List<IBaseExtension<?, ?>> extensions = Collections.emptyList();
if (theCurrentObj instanceof IBaseHasExtensions) {
extensions = ((IBaseHasExtensions) theCurrentObj)
.getExtension().stream()
@@ -505,7 +505,7 @@
}
}
for (IBaseExtension next : extensions) {
for (IBaseExtension<?, ?> next : extensions) {
if (theWantedClass.isAssignableFrom(next.getClass())) {
retVal.add((T) next);
}
@@ -581,7 +581,7 @@
} else {
// DSTU3+
final String extensionUrlForLambda = extensionUrl;
List<IBaseExtension> extensions = Collections.emptyList();
List<IBaseExtension<?, ?>> extensions = Collections.emptyList();
if (theCurrentObj instanceof IBaseHasModifierExtensions) {
extensions = ((IBaseHasModifierExtensions) theCurrentObj)
@@ -602,7 +602,7 @@
}
}
for (IBaseExtension next : extensions) {
for (IBaseExtension<?, ?> next : extensions) {
if (theWantedClass.isAssignableFrom(next.getClass())) {
retVal.add((T) next);
}
@@ -1203,7 +1203,6 @@
public void visit(IBase theElement, IModelVisitor2 theVisitor) {
BaseRuntimeElementDefinition<?> def = myContext.getElementDefinition(theElement.getClass());
if (def instanceof BaseRuntimeElementCompositeDefinition) {
BaseRuntimeElementCompositeDefinition<?> defComposite = (BaseRuntimeElementCompositeDefinition<?>) def;
visit(theElement, null, def, theVisitor, new ArrayList<>(), new ArrayList<>(), new ArrayList<>());
} else if (theElement instanceof IBaseExtension) {
theVisitor.acceptUndeclaredExtension(
@@ -1562,7 +1561,7 @@
throw new DataFormatException(Msg.code(1796) + "Invalid path " + thePath + ": Element of type "
+ def.getName() + " has no child named " + nextPart + ". Valid names: "
+ def.getChildrenAndExtension().stream()
.map(t -> t.getElementName())
.map(BaseRuntimeChildDefinition::getElementName)
.sorted()
.collect(Collectors.joining(", ")));
}
@@ -1817,7 +1816,18 @@
if (getResourceToIdMap() == null) {
return null;
}
return getResourceToIdMap().get(theNext);
var idFromMap = getResourceToIdMap().get(theNext);
if (idFromMap != null) {
return idFromMap;
} else if (theNext.getIdElement().getIdPart() != null) {
return getResourceToIdMap().values().stream()
.filter(id -> theNext.getIdElement().getIdPart().equals(id.getIdPart()))
.findAny()
.orElse(null);
} else {
return null;
}
}
private List<IBaseResource> getOrCreateResourceList() {

@@ -65,7 +65,7 @@ public class SubscriptionUtil {
populatePrimitiveValue(theContext, theSubscription, "status", theStatus);
}
public static boolean isCrossPartition(IBaseResource theSubscription) {
public static boolean isDefinedAsCrossPartitionSubcription(IBaseResource theSubscription) {
if (theSubscription instanceof IBaseHasExtensions) {
IBaseExtension extension = ExtensionUtil.getExtensionByUrl(
theSubscription, HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION);

@@ -154,6 +154,8 @@ public enum VersionEnum {
V7_1_0,
V7_2_0,
V7_2_1,
V7_2_2,
V7_3_0,
V7_4_0,

@@ -20,29 +20,115 @@
package org.hl7.fhir.instance.model.api;
import ca.uhn.fhir.model.api.annotation.SearchParamDefinition;
import ca.uhn.fhir.rest.gclient.DateClientParam;
import ca.uhn.fhir.rest.gclient.TokenClientParam;
import ca.uhn.fhir.rest.gclient.UriClientParam;
/**
* An IBaseResource that has a FHIR version of DSTU3 or higher
*/
public interface IAnyResource extends IBaseResource {
String SP_RES_ID = "_id";
/**
* Search parameter constant for <b>_id</b>
*/
@SearchParamDefinition(name = "_id", path = "", description = "The ID of the resource", type = "token")
String SP_RES_ID = "_id";
@SearchParamDefinition(
name = SP_RES_ID,
path = "Resource.id",
description = "The ID of the resource",
type = "token")
/**
* <b>Fluent Client</b> search parameter constant for <b>_id</b>
* <p>
* Description: <b>the _id of a resource</b><br>
* Type: <b>string</b><br>
* Path: <b>Resource._id</b><br>
* Path: <b>Resource.id</b><br>
* </p>
*/
TokenClientParam RES_ID = new TokenClientParam(IAnyResource.SP_RES_ID);
String SP_RES_LAST_UPDATED = "_lastUpdated";
/**
* Search parameter constant for <b>_lastUpdated</b>
*/
@SearchParamDefinition(
name = SP_RES_LAST_UPDATED,
path = "Resource.meta.lastUpdated",
description = "The last updated date of the resource",
type = "date")
/**
* <b>Fluent Client</b> search parameter constant for <b>_lastUpdated</b>
* <p>
* Description: <b>The last updated date of a resource</b><br>
* Type: <b>date</b><br>
* Path: <b>Resource.meta.lastUpdated</b><br>
* </p>
*/
DateClientParam RES_LAST_UPDATED = new DateClientParam(IAnyResource.SP_RES_LAST_UPDATED);
String SP_RES_TAG = "_tag";
/**
* Search parameter constant for <b>_tag</b>
*/
@SearchParamDefinition(
name = SP_RES_TAG,
path = "Resource.meta.tag",
description = "The tag of the resource",
type = "token")
/**
* <b>Fluent Client</b> search parameter constant for <b>_tag</b>
* <p>
* Description: <b>The tag of a resource</b><br>
* Type: <b>token</b><br>
* Path: <b>Resource.meta.tag</b><br>
* </p>
*/
TokenClientParam RES_TAG = new TokenClientParam(IAnyResource.SP_RES_TAG);
String SP_RES_PROFILE = "_profile";
/**
* Search parameter constant for <b>_profile</b>
*/
@SearchParamDefinition(
name = SP_RES_PROFILE,
path = "Resource.meta.profile",
description = "The profile of the resource",
type = "uri")
/**
* <b>Fluent Client</b> search parameter constant for <b>_profile</b>
* <p>
* Description: <b>The profile of a resource</b><br>
* Type: <b>uri</b><br>
* Path: <b>Resource.meta.profile</b><br>
* </p>
*/
UriClientParam RES_PROFILE = new UriClientParam(IAnyResource.SP_RES_PROFILE);
String SP_RES_SECURITY = "_security";
/**
* Search parameter constant for <b>_security</b>
*/
@SearchParamDefinition(
name = SP_RES_SECURITY,
path = "Resource.meta.security",
description = "The security of the resource",
type = "token")
/**
* <b>Fluent Client</b> search parameter constant for <b>_security</b>
* <p>
* Description: <b>The security of a resource</b><br>
* Type: <b>token</b><br>
* Path: <b>Resource.meta.security</b><br>
* </p>
*/
TokenClientParam RES_SECURITY = new TokenClientParam(IAnyResource.SP_RES_SECURITY);
String getId();
IIdType getIdElement();

@@ -6,6 +6,9 @@ org.hl7.fhir.common.hapi.validation.support.CommonCodeSystemsTerminologyService.
org.hl7.fhir.common.hapi.validation.support.CommonCodeSystemsTerminologyService.mismatchCodeSystem=Inappropriate CodeSystem URL "{0}" for ValueSet: {1}
org.hl7.fhir.common.hapi.validation.support.CommonCodeSystemsTerminologyService.codeNotFoundInValueSet=Code "{0}" is not in valueset: {1}
org.hl7.fhir.common.hapi.validation.support.RemoteTerminologyServiceValidationSupport.unknownCodeInSystem=Unknown code "{0}#{1}". The Remote Terminology server {2} returned {3}
org.hl7.fhir.common.hapi.validation.support.RemoteTerminologyServiceValidationSupport.unknownCodeInValueSet=Unknown code "{0}#{1}" for ValueSet with URL "{2}". The Remote Terminology server {3} returned {4}
ca.uhn.fhir.jpa.term.TermReadSvcImpl.expansionRefersToUnknownCs=Unknown CodeSystem URI "{0}" referenced from ValueSet
ca.uhn.fhir.jpa.term.TermReadSvcImpl.valueSetNotYetExpanded=ValueSet "{0}" has not yet been pre-expanded. Performing in-memory expansion without parameters. Current status: {1} | {2}
ca.uhn.fhir.jpa.term.TermReadSvcImpl.valueSetNotYetExpanded_OffsetNotAllowed=ValueSet expansion can not combine "offset" with "ValueSet.compose.exclude" unless the ValueSet has been pre-expanded. ValueSet "{0}" must be pre-expanded for this operation to work.
@@ -91,6 +94,7 @@ ca.uhn.fhir.jpa.dao.BaseStorageDao.inlineMatchNotSupported=Inline match URLs are
ca.uhn.fhir.jpa.dao.BaseStorageDao.transactionOperationWithMultipleMatchFailure=Failed to {0} resource with match URL "{1}" because this search matched {2} resources
ca.uhn.fhir.jpa.dao.BaseStorageDao.deleteByUrlThresholdExceeded=Failed to DELETE resources with match URL "{0}" because the resolved number of resources: {1} exceeds the threshold of {2}
ca.uhn.fhir.jpa.dao.BaseStorageDao.transactionOperationWithIdNotMatchFailure=Failed to {0} resource with match URL "{1}" because the matching resource does not match the provided ID
ca.uhn.fhir.jpa.dao.BaseTransactionProcessor.multiplePartitionAccesses=Can not process transaction with {0} entries: Entries require access to multiple/conflicting partitions
ca.uhn.fhir.jpa.dao.BaseHapiFhirDao.transactionOperationFailedNoId=Failed to {0} resource in transaction because no ID was provided
ca.uhn.fhir.jpa.dao.BaseHapiFhirDao.transactionOperationFailedUnknownId=Failed to {0} resource in transaction because no resource could be found with ID {1}
ca.uhn.fhir.jpa.dao.BaseHapiFhirDao.uniqueIndexConflictFailure=Can not create resource of type {0} as it would create a duplicate unique index matching query: {1} (existing index belongs to {2}, new unique index created by {3})

@@ -41,6 +41,50 @@ public class RequestPartitionIdTest {
assertFalse(RequestPartitionId.forPartitionIdsAndNames(null, Lists.newArrayList(1, 2), null).isDefaultPartition());
}
@Test
public void testMergeIds() {
RequestPartitionId input0 = RequestPartitionId.fromPartitionIds(1, 2, 3);
RequestPartitionId input1 = RequestPartitionId.fromPartitionIds(1, 2, 4);
RequestPartitionId actual = input0.mergeIds(input1);
RequestPartitionId expected = RequestPartitionId.fromPartitionIds(1, 2, 3, 4);
assertEquals(expected, actual);
}
@Test
public void testMergeIds_ThisAllPartitions() {
RequestPartitionId input0 = RequestPartitionId.allPartitions();
RequestPartitionId input1 = RequestPartitionId.fromPartitionIds(1, 2, 4);
RequestPartitionId actual = input0.mergeIds(input1);
RequestPartitionId expected = RequestPartitionId.allPartitions();
assertEquals(expected, actual);
}
@Test
public void testMergeIds_OtherAllPartitions() {
RequestPartitionId input0 = RequestPartitionId.fromPartitionIds(1, 2, 3);
RequestPartitionId input1 = RequestPartitionId.allPartitions();
RequestPartitionId actual = input0.mergeIds(input1);
RequestPartitionId expected = RequestPartitionId.allPartitions();
assertEquals(expected, actual);
}
@Test
public void testMergeIds_IncludesDefault() {
RequestPartitionId input0 = RequestPartitionId.fromPartitionIds(1, 2, 3);
RequestPartitionId input1 = RequestPartitionId.defaultPartition();
RequestPartitionId actual = input0.mergeIds(input1);
RequestPartitionId expected = RequestPartitionId.fromPartitionIds(1, 2, 3, null);
assertEquals(expected, actual);
}
@Test
public void testSerDeserSer() throws JsonProcessingException {
{

@@ -0,0 +1,3 @@
---
release-date: "2024-05-30"
codename: "Borealis"

@@ -0,0 +1,3 @@
---
release-date: "2024-07-19"
codename: "Borealis"

@@ -0,0 +1,5 @@
---
type: fix
issue: 4837
title: "In the case where a resource was serialized, deserialized, copied and reserialized it resulted in duplication of
contained resources. This has been corrected."

@@ -0,0 +1,6 @@
---
type: fix
issue: 5960
backport: 7.2.1
title: "Previously, queries with chained would fail to sort correctly with lucene and full text searches enabled.
This has been fixed."

@@ -1,6 +1,7 @@
---
type: fix
issue: 6024
backport: 7.2.2
title: "Fixed a bug in search where requesting a count with HSearch indexing
and FilterParameter enabled and using the _filter parameter would result
in inaccurate results being returned.

@@ -1,6 +1,7 @@
---
type: fix
issue: 6044
backport: 7.2.2
title: "Fixed an issue where doing a cache refresh with advanced Hibernate Search
enabled would result in an infinite loop of cache refresh -> search for
StructureDefinition -> cache refresh, etc

@@ -1,4 +1,5 @@
---
type: fix
issue: 6046
backport: 7.2.2
title: "Previously, using `_text` and `_content` searches in Hibernate Search in R5 was not supported. This issue has been fixed."

@@ -1,5 +1,6 @@
---
type: add
issue: 6046
backport: 7.2.2
title: "Added support for `:contains` parameter qualifier on the `_text` and `_content` Search Parameters. When using Hibernate Search, this will cause
the search to perform an substring match on the provided value. Documentation can be found [here](/hapi-fhir/docs/server_jpa/elastic.html#performing-fulltext-search-in-luceneelasticsearch)."

@@ -0,0 +1,5 @@
---
type: fix
issue: 6122
title: "Previously, executing the '$validate' operation on a resource instance could result in an HTTP 400 Bad Request
instead of an HTTP 200 OK response with a list of validation issues. This has been fixed."

@@ -0,0 +1,6 @@
---
type: fix
issue: 6123
title: "`IAnyResource` `_id` search parameter was missing `path` property value, which resulted in extractor not
working when standard search parameters were instantiated from defined context. This has been fixed, and also
`_LastUpdated`, `_tag`, `_profile`, and `_security` parameter definitions were added to the class."

@@ -0,0 +1,6 @@
---
type: fix
issue: 6083
backport: 7.2.2
title: "A bug with $everything operation was discovered when trying to search using hibernate search, this change makes
all $everything operation rely on database search until hibernate search fully supports the operation."

@@ -0,0 +1,6 @@
---
type: fix
issue: 6134
backport: 7.2.2
title: "Fixed a regression in 7.2.0 which caused systems using `FILESYSTEM` binary storage mode to be unable to read metadata documents
that had been previously stored on disk."

@@ -0,0 +1,5 @@
---
type: add
issue: 6148
jira: SMILE-8613
title: "Added the target resource partitionId and partitionDate to the resourceLink table."

@@ -0,0 +1,6 @@
---
type: fix
issue: 6150
title: "Previously, the resource $validate operation would return a 404 when the associated profile uses a ValueSet
that has multiple includes referencing Remote Terminology CodeSystem resources.
This has been fixed to return a 200 with issues instead."

@@ -0,0 +1,11 @@
---
type: fix
issue: 6153
title: "Previously, if you created a resource with some conditional url,
but then submitted a transaction bundle that
a) updated the resource to not match the condition anymore and
b) create a resource with the (same) condition
a unique index violation would result.
This has been fixed.
"

@@ -0,0 +1,6 @@
---
type: fix
issue: 6156
title: "Index IDX_IDXCMBTOKNU_HASHC on table HFJ_IDX_CMB_TOK_NU's migration
is now marked as online (concurrent).
"

@@ -0,0 +1,8 @@
---
type: fix
issue: 6159
jira: SMILE-8604
title: "Previously, `$apply-codesystem-delta-add` and `$apply-codesystem-delta-remove` operations were failing
with a 500 Server Error when invoked with a CodeSystem Resource payload that had a concept without a
`display` element. This has now been fixed so that concepts without display field is accepted, as `display`
element is not required."

@@ -0,0 +1,7 @@
---
type: fix
jira: SMILE-8652
title: "When JPA servers are configured to always require a new database
transaction when switching partitions, the server will now correctly
identify the correct partition for FHIR transaction operations, and
fail the operation if multiple partitions would be required."

@@ -0,0 +1,7 @@
---
type: change
issue: 6179
title: "The $reindex operation could potentially initiate a reindex job without any urls provided in the parameters.
We now internally generate a list of urls out of all the supported resource types and attempt to reindex
found resources of each type separately. As a result, each reindex (batch2) job chunk will be always associated with a url."

@@ -0,0 +1,6 @@
---
type: fix
issue: 6179
title: "Previously, the $reindex operation would fail when using a custom partitioning interceptor which decides the partition
based on the resource type in the request. This has been fixed, such that we avoid retrieving the resource type from
the request, rather we use the urls provided as parameters to the operation to determine the partitions."

@@ -0,0 +1,6 @@
---
type: fix
issue: 6188
jira: SMILE-8759
title: "Previously, a Subscription not marked as a cross-partition subscription could listen to incoming resources from
other partitions. This issue is fixed."

@@ -17,7 +17,7 @@
* limitations under the License.
* #L%
*/
package ca.uhn.fhir.jpa.reindex;
package ca.uhn.fhir.jpa.batch2;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.RuntimeResourceDefinition;
@@ -41,8 +41,10 @@ import ca.uhn.fhir.rest.api.SortSpec;
import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
import ca.uhn.fhir.rest.server.exceptions.InternalErrorException;
import ca.uhn.fhir.util.DateRangeUtil;
import ca.uhn.fhir.util.Logs;
import jakarta.annotation.Nonnull;
import jakarta.annotation.Nullable;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.Validate;
import java.util.Date;
@@ -50,7 +52,7 @@ import java.util.function.Supplier;
import java.util.stream.Stream;
public class Batch2DaoSvcImpl implements IBatch2DaoSvc {
private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(Batch2DaoSvcImpl.class);
private static final org.slf4j.Logger ourLog = Logs.getBatchTroubleshootingLog();
private final IResourceTableDao myResourceTableDao;
@@ -83,7 +85,7 @@
@Override
public IResourcePidStream fetchResourceIdStream(
Date theStart, Date theEnd, RequestPartitionId theRequestPartitionId, String theUrl) {
if (theUrl == null) {
if (StringUtils.isBlank(theUrl)) {
return makeStreamResult(
theRequestPartitionId, () -> streamResourceIdsNoUrl(theStart, theEnd, theRequestPartitionId));
} else {
@@ -127,6 +129,10 @@
return new TypedResourceStream(theRequestPartitionId, streamTemplate);
}
/**
* At the moment there is no use-case for this method.
* This can be cleaned up at a later point in time if there is no use for it.
*/
@Nonnull
private Stream<TypedResourcePid> streamResourceIdsNoUrl(
Date theStart, Date theEnd, RequestPartitionId theRequestPartitionId) {

@@ -19,7 +19,6 @@
*/
package ca.uhn.fhir.jpa.batch2;
import ca.uhn.fhir.batch2.api.IJobPartitionProvider;
import ca.uhn.fhir.batch2.api.IJobPersistence;
import ca.uhn.fhir.batch2.config.BaseBatch2Config;
import ca.uhn.fhir.interceptor.api.IInterceptorBroadcaster;
@@ -28,8 +27,6 @@ import ca.uhn.fhir.jpa.dao.data.IBatch2JobInstanceRepository;
import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkMetadataViewRepository;
import ca.uhn.fhir.jpa.dao.data.IBatch2WorkChunkRepository;
import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;
import ca.uhn.fhir.jpa.partition.IPartitionLookupSvc;
import ca.uhn.fhir.jpa.partition.IRequestPartitionHelperSvc;
import jakarta.persistence.EntityManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -55,10 +52,4 @@ public class JpaBatch2Config extends BaseBatch2Config {
theEntityManager,
theInterceptorBroadcaster);
}
@Bean
public IJobPartitionProvider jobPartitionProvider(
IRequestPartitionHelperSvc theRequestPartitionHelperSvc, IPartitionLookupSvc thePartitionLookupSvc) {
return new JpaJobPartitionProvider(theRequestPartitionHelperSvc, thePartitionLookupSvc);
}
}

@@ -19,45 +19,47 @@
*/
package ca.uhn.fhir.jpa.batch2;
import ca.uhn.fhir.batch2.api.IJobPartitionProvider;
import ca.uhn.fhir.batch2.coordinator.DefaultJobPartitionProvider;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.entity.PartitionEntity;
import ca.uhn.fhir.jpa.partition.IPartitionLookupSvc;
import ca.uhn.fhir.jpa.partition.IRequestPartitionHelperSvc;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import ca.uhn.fhir.jpa.searchparam.MatchUrlService;
import java.util.List;
import java.util.stream.Collectors;
/**
* The default JPA implementation, which uses {@link IRequestPartitionHelperSvc} and {@link IPartitionLookupSvc}
* to compute the partition to run a batch2 job.
* to compute the {@link PartitionedUrl} list to run a batch2 job.
* The latter will be used to handle cases when the job is configured to run against all partitions
* (bulk system operation) and will return the actual list with all the configured partitions.
*/
public class JpaJobPartitionProvider implements IJobPartitionProvider {
protected final IRequestPartitionHelperSvc myRequestPartitionHelperSvc;
@Deprecated
public class JpaJobPartitionProvider extends DefaultJobPartitionProvider {
private final IPartitionLookupSvc myPartitionLookupSvc;
public JpaJobPartitionProvider(
IRequestPartitionHelperSvc theRequestPartitionHelperSvc, IPartitionLookupSvc thePartitionLookupSvc) {
myRequestPartitionHelperSvc = theRequestPartitionHelperSvc;
super(theRequestPartitionHelperSvc);
myPartitionLookupSvc = thePartitionLookupSvc;
}
public JpaJobPartitionProvider(
FhirContext theFhirContext,
IRequestPartitionHelperSvc theRequestPartitionHelperSvc,
MatchUrlService theMatchUrlService,
IPartitionLookupSvc thePartitionLookupSvc) {
super(theFhirContext, theRequestPartitionHelperSvc, theMatchUrlService);
myPartitionLookupSvc = thePartitionLookupSvc;
}
@Override
public List<RequestPartitionId> getPartitions(RequestDetails theRequestDetails, String theOperation) {
RequestPartitionId partitionId = myRequestPartitionHelperSvc.determineReadPartitionForRequestForServerOperation(
theRequestDetails, theOperation);
if (!partitionId.isAllPartitions()) {
return List.of(partitionId);
}
// handle (bulk) system operations that are typically configured with RequestPartitionId.allPartitions()
// populate the actual list of all partitions
List<RequestPartitionId> partitionIdList = myPartitionLookupSvc.listPartitions().stream()
public List<RequestPartitionId> getAllPartitions() {
return myPartitionLookupSvc.listPartitions().stream()
.map(PartitionEntity::toRequestPartitionId)
.collect(Collectors.toList());
partitionIdList.add(RequestPartitionId.defaultPartition());
return partitionIdList;
}
}

@@ -25,6 +25,7 @@ import ca.uhn.fhir.jpa.api.dao.DaoRegistry;
import ca.uhn.fhir.jpa.api.svc.IBatch2DaoSvc;
import ca.uhn.fhir.jpa.api.svc.IDeleteExpungeSvc;
import ca.uhn.fhir.jpa.api.svc.IIdHelperService;
import ca.uhn.fhir.jpa.batch2.Batch2DaoSvcImpl;
import ca.uhn.fhir.jpa.dao.IFulltextSearchSvc;
import ca.uhn.fhir.jpa.dao.data.IResourceLinkDao;
import ca.uhn.fhir.jpa.dao.data.IResourceTableDao;
@@ -32,7 +33,6 @@ import ca.uhn.fhir.jpa.dao.expunge.ResourceTableFKProvider;
import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;
import ca.uhn.fhir.jpa.delete.batch2.DeleteExpungeSqlBuilder;
import ca.uhn.fhir.jpa.delete.batch2.DeleteExpungeSvcImpl;
import ca.uhn.fhir.jpa.reindex.Batch2DaoSvcImpl;
import ca.uhn.fhir.jpa.searchparam.MatchUrlService;
import jakarta.persistence.EntityManager;
import org.springframework.beans.factory.annotation.Autowired;

@@ -20,7 +20,7 @@
package ca.uhn.fhir.jpa.dao;
import ca.uhn.fhir.batch2.api.IJobCoordinator;
import ca.uhn.fhir.batch2.jobs.parameters.UrlPartitioner;
import ca.uhn.fhir.batch2.api.IJobPartitionProvider;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexAppCtx;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexJobParameters;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
@@ -103,7 +103,6 @@ import ca.uhn.fhir.rest.server.exceptions.PreconditionFailedException;
import ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException;
import ca.uhn.fhir.rest.server.exceptions.ResourceVersionConflictException;
import ca.uhn.fhir.rest.server.exceptions.UnprocessableEntityException;
import ca.uhn.fhir.rest.server.provider.ProviderConstants;
import ca.uhn.fhir.rest.server.servlet.ServletRequestDetails;
import ca.uhn.fhir.rest.server.util.CompositeInterceptorBroadcaster;
import ca.uhn.fhir.util.ReflectionUtil;
@@ -193,6 +192,9 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
@Autowired
private IRequestPartitionHelperSvc myRequestPartitionHelperService;
@Autowired
private IJobPartitionProvider myJobPartitionProvider;
@Autowired
private MatchUrlService myMatchUrlService;
@@ -214,9 +216,6 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
private TransactionTemplate myTxTemplate;
@Autowired
private UrlPartitioner myUrlPartitioner;
@Autowired
private ResourceSearchUrlSvc myResourceSearchUrlSvc;
@@ -1306,14 +1305,12 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
ReindexJobParameters params = new ReindexJobParameters();
List<String> urls = List.of();
if (!isCommonSearchParam(theBase)) {
addAllResourcesTypesToReindex(theBase, theRequestDetails, params);
urls = theBase.stream().map(t -> t + "?").collect(Collectors.toList());
}
RequestPartitionId requestPartition =
myRequestPartitionHelperService.determineReadPartitionForRequestForServerOperation(
theRequestDetails, ProviderConstants.OPERATION_REINDEX);
params.setRequestPartitionId(requestPartition);
myJobPartitionProvider.getPartitionedUrls(theRequestDetails, urls).forEach(params::addPartitionedUrl);
JobInstanceStartRequest request = new JobInstanceStartRequest();
request.setJobDefinitionId(ReindexAppCtx.JOB_REINDEX);
@@ -1334,14 +1331,6 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
return Boolean.parseBoolean(shouldSkip.toString());
}
private void addAllResourcesTypesToReindex(
List<String> theBase, RequestDetails theRequestDetails, ReindexJobParameters params) {
theBase.stream()
.map(t -> t + "?")
.map(url -> myUrlPartitioner.partitionUrl(url, theRequestDetails))
.forEach(params::addPartitionedUrl);
}
private boolean isCommonSearchParam(List<String> theBase) {
// If the base contains the special resource "Resource", this is a common SP that applies to all resources
return theBase.stream().map(String::toLowerCase).anyMatch(BASE_RESOURCE_NAME::equals);
@@ -2457,11 +2446,13 @@ public abstract class BaseHapiFhirResourceDao<T extends IBaseResource> extends B
RestOperationTypeEnum theOperationType,
TransactionDetails theTransactionDetails) {
// we stored a resource searchUrl at creation time to prevent resource duplication. Let's remove the entry on
// the
// first update but guard against unnecessary trips to the database on subsequent ones.
/*
* We stored a resource searchUrl at creation time to prevent resource duplication.
* We'll clear any currently existing urls from the db, otherwise we could hit
* duplicate index violations if we try to add another (after this create/update)
*/
ResourceTable entity = (ResourceTable) theEntity;
if (entity.isSearchUrlPresent() && thePerformIndexing) {
if (entity.isSearchUrlPresent()) {
myResourceSearchUrlSvc.deleteByResId(
(Long) theEntity.getPersistentId().getId());
entity.setSearchUrlPresent(false);

@@ -27,7 +27,6 @@ import ca.uhn.fhir.jpa.api.dao.IFhirSystemDao;
import ca.uhn.fhir.jpa.api.model.DaoMethodOutcome;
import ca.uhn.fhir.jpa.api.svc.IIdHelperService;
import ca.uhn.fhir.jpa.config.HapiFhirHibernateJpaDialect;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.model.dao.JpaPid;
import ca.uhn.fhir.jpa.model.entity.ResourceIndexedSearchParamToken;
import ca.uhn.fhir.jpa.model.entity.StorageSettings;
@@ -97,9 +96,6 @@ public class TransactionProcessor extends BaseTransactionProcessor {
@Autowired
private IIdHelperService<JpaPid> myIdHelperService;
@Autowired
private PartitionSettings myPartitionSettings;
@Autowired
private JpaStorageSettings myStorageSettings;
@@ -150,14 +146,9 @@
List<IBase> theEntries,
StopWatch theTransactionStopWatch) {
ITransactionProcessorVersionAdapter versionAdapter = getVersionAdapter();
RequestPartitionId requestPartitionId = null;
if (!myPartitionSettings.isPartitioningEnabled()) {
requestPartitionId = RequestPartitionId.allPartitions();
} else {
// If all entries in the transaction point to the exact same partition, we'll try and do a pre-fetch
requestPartitionId = getSinglePartitionForAllEntriesOrNull(theRequest, theEntries, versionAdapter);
}
ITransactionProcessorVersionAdapter<?, ?> versionAdapter = getVersionAdapter();
RequestPartitionId requestPartitionId =
super.determineRequestPartitionIdForWriteEntries(theRequest, theEntries);
if (requestPartitionId != null) {
preFetch(theTransactionDetails, theEntries, versionAdapter, requestPartitionId);
@@ -472,24 +463,6 @@
}
}
private RequestPartitionId getSinglePartitionForAllEntriesOrNull(
RequestDetails theRequest, List<IBase> theEntries, ITransactionProcessorVersionAdapter versionAdapter) {
RequestPartitionId retVal = null;
Set<RequestPartitionId> requestPartitionIdsForAllEntries = new HashSet<>();
for (IBase nextEntry : theEntries) {
IBaseResource resource = versionAdapter.getResource(nextEntry);
if (resource != null) {
RequestPartitionId requestPartition = myRequestPartitionSvc.determineCreatePartitionForRequest(
theRequest, resource, myFhirContext.getResourceType(resource));
requestPartitionIdsForAllEntries.add(requestPartition);
}
}
if (requestPartitionIdsForAllEntries.size() == 1) {
retVal = requestPartitionIdsForAllEntries.iterator().next();
}
return retVal;
}
/**
* Given a token parameter, build the query predicate based on its hash. Uses system and value if both are available, otherwise just value.
* If neither are available, it returns null.
@@ -570,11 +543,6 @@
}
}
@VisibleForTesting
public void setPartitionSettingsForUnitTest(PartitionSettings thePartitionSettings) {
myPartitionSettings = thePartitionSettings;
}
@VisibleForTesting
public void setIdHelperServiceForUnitTest(IIdHelperService theIdHelperService) {
myIdHelperService = theIdHelperService;

@@ -135,7 +135,8 @@ public interface IResourceTableDao
* This method returns a Collection where each row is an element in the collection. Each element in the collection
* is an object array, where the order matters (the array represents columns returned by the query). Be careful if you change this query in any way.
*/
@Query("SELECT t.myResourceType, t.myId, t.myDeleted FROM ResourceTable t WHERE t.myId IN (:pid)")
@Query(
"SELECT t.myResourceType, t.myId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue FROM ResourceTable t WHERE t.myId IN (:pid)")
Collection<Object[]> findLookupFieldsByResourcePid(@Param("pid") List<Long> thePids);
/**
@@ -143,7 +144,7 @@
* is an object array, where the order matters (the array represents columns returned by the query). Be careful if you change this query in any way.
*/
@Query(
"SELECT t.myResourceType, t.myId, t.myDeleted FROM ResourceTable t WHERE t.myId IN (:pid) AND t.myPartitionIdValue IN :partition_id")
"SELECT t.myResourceType, t.myId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue FROM ResourceTable t WHERE t.myId IN (:pid) AND t.myPartitionIdValue IN :partition_id")
Collection<Object[]> findLookupFieldsByResourcePidInPartitionIds(
@Param("pid") List<Long> thePids, @Param("partition_id") Collection<Integer> thePartitionId);
@@ -152,7 +153,7 @@
* is an object array, where the order matters (the array represents columns returned by the query). Be careful if you change this query in any way.
*/
@Query(
"SELECT t.myResourceType, t.myId, t.myDeleted FROM ResourceTable t WHERE t.myId IN (:pid) AND (t.myPartitionIdValue IS NULL OR t.myPartitionIdValue IN :partition_id)")
"SELECT t.myResourceType, t.myId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue FROM ResourceTable t WHERE t.myId IN (:pid) AND (t.myPartitionIdValue IS NULL OR t.myPartitionIdValue IN :partition_id)")
Collection<Object[]> findLookupFieldsByResourcePidInPartitionIdsOrNullPartition(
@Param("pid") List<Long> thePids, @Param("partition_id") Collection<Integer> thePartitionId);
@ -161,7 +162,7 @@ public interface IResourceTableDao
* is an object array, where the order matters (the array represents columns returned by the query). Be careful if you change this query in any way.
*/
@Query(
"SELECT t.myResourceType, t.myId, t.myDeleted FROM ResourceTable t WHERE t.myId IN (:pid) AND t.myPartitionIdValue IS NULL")
"SELECT t.myResourceType, t.myId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue FROM ResourceTable t WHERE t.myId IN (:pid) AND t.myPartitionIdValue IS NULL")
Collection<Object[]> findLookupFieldsByResourcePidInPartitionNull(@Param("pid") List<Long> thePids);
@Query("SELECT t.myVersion FROM ResourceTable t WHERE t.myId = :pid")


@ -56,9 +56,10 @@ public class IResourceTableDaoImpl implements IForcedIdQueries {
@Override
public Collection<Object[]> findAndResolveByForcedIdWithNoType(
String theResourceType, Collection<String> theForcedIds, boolean theExcludeDeleted) {
String query = "SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id )";
String query =
"SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id )";
if (theExcludeDeleted) {
query += " AND t.myDeleted IS NULL";
@ -82,9 +83,10 @@ public class IResourceTableDaoImpl implements IForcedIdQueries {
Collection<String> theForcedIds,
Collection<Integer> thePartitionId,
boolean theExcludeDeleted) {
String query = "SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id ) AND t.myPartitionIdValue IN ( :partition_id )";
String query =
"SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id ) AND t.myPartitionIdValue IN ( :partition_id )";
if (theExcludeDeleted) {
query += " AND t.myDeleted IS NULL";
@ -106,9 +108,11 @@ public class IResourceTableDaoImpl implements IForcedIdQueries {
@Override
public Collection<Object[]> findAndResolveByForcedIdWithNoTypeInPartitionNull(
String theResourceType, Collection<String> theForcedIds, boolean theExcludeDeleted) {
String query = "SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id ) AND t.myPartitionIdValue IS NULL";
// we fetch myPartitionIdValue and myPartitionDateValue for resultSet processing consistency
String query =
"SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id ) AND t.myPartitionIdValue IS NULL";
if (theExcludeDeleted) {
query += " AND t.myDeleted IS NULL";
@ -132,9 +136,10 @@ public class IResourceTableDaoImpl implements IForcedIdQueries {
Collection<String> theForcedIds,
List<Integer> thePartitionIdsWithoutDefault,
boolean theExcludeDeleted) {
String query = "SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id ) AND (t.myPartitionIdValue IS NULL OR t.myPartitionIdValue IN ( :partition_id ))";
String query =
"SELECT t.myResourceType, t.myId, t.myFhirId, t.myDeleted, t.myPartitionIdValue, t.myPartitionDateValue "
+ "FROM ResourceTable t "
+ "WHERE t.myResourceType = :resource_type AND t.myFhirId IN ( :forced_id ) AND (t.myPartitionIdValue IS NULL OR t.myPartitionIdValue IN ( :partition_id ))";
if (theExcludeDeleted) {
query += " AND t.myDeleted IS NULL";


@ -30,6 +30,7 @@ import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.model.cross.IResourceLookup;
import ca.uhn.fhir.jpa.model.cross.JpaResourceLookup;
import ca.uhn.fhir.jpa.model.dao.JpaPid;
import ca.uhn.fhir.jpa.model.entity.PartitionablePartitionId;
import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.search.builder.SearchBuilder;
import ca.uhn.fhir.jpa.util.MemoryCacheService;
@ -59,12 +60,11 @@ import org.hl7.fhir.instance.model.api.IAnyResource;
import org.hl7.fhir.instance.model.api.IBaseResource;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.IdType;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
@ -100,7 +100,6 @@ import static org.apache.commons.lang3.StringUtils.isNotBlank;
*/
@Service
public class IdHelperService implements IIdHelperService<JpaPid> {
private static final Logger ourLog = LoggerFactory.getLogger(IdHelperService.class);
public static final Predicate[] EMPTY_PREDICATE_ARRAY = new Predicate[0];
public static final String RESOURCE_PID = "RESOURCE_PID";
@ -523,7 +522,7 @@ public class IdHelperService implements IIdHelperService<JpaPid> {
if (myStorageSettings.getResourceClientIdStrategy() != JpaStorageSettings.ClientIdStrategyEnum.ANY) {
List<Long> pids = theId.stream()
.filter(t -> isValidPid(t))
.map(t -> t.getIdPartAsLong())
.map(IIdType::getIdPartAsLong)
.collect(Collectors.toList());
if (!pids.isEmpty()) {
resolvePids(requestPartitionId, pids, retVal);
@ -578,8 +577,14 @@ public class IdHelperService implements IIdHelperService<JpaPid> {
Long resourcePid = (Long) next[1];
String forcedId = (String) next[2];
Date deletedAt = (Date) next[3];
Integer partitionId = (Integer) next[4];
LocalDate partitionDate = (LocalDate) next[5];
JpaResourceLookup lookup = new JpaResourceLookup(resourceType, resourcePid, deletedAt);
JpaResourceLookup lookup = new JpaResourceLookup(
resourceType,
resourcePid,
deletedAt,
PartitionablePartitionId.with(partitionId, partitionDate));
retVal.computeIfAbsent(forcedId, id -> new ArrayList<>()).add(lookup);
if (!myStorageSettings.isDeleteEnabled()) {
@ -638,7 +643,11 @@ public class IdHelperService implements IIdHelperService<JpaPid> {
}
}
lookup.stream()
.map(t -> new JpaResourceLookup((String) t[0], (Long) t[1], (Date) t[2]))
.map(t -> new JpaResourceLookup(
(String) t[0],
(Long) t[1],
(Date) t[2],
PartitionablePartitionId.with((Integer) t[3], (LocalDate) t[4])))
.forEach(t -> {
String id = t.getPersistentId().toString();
if (!theTargets.containsKey(id)) {
@ -683,9 +692,8 @@ public class IdHelperService implements IIdHelperService<JpaPid> {
MemoryCacheService.CacheEnum.PID_TO_FORCED_ID, nextResourcePid, Optional.empty());
}
Map<JpaPid, Optional<String>> convertRetVal = new HashMap<>();
retVal.forEach((k, v) -> {
convertRetVal.put(JpaPid.fromId(k), v);
});
retVal.forEach((k, v) -> convertRetVal.put(JpaPid.fromId(k), v));
return new PersistentIdToForcedIdMap<>(convertRetVal);
}
@ -716,7 +724,8 @@ public class IdHelperService implements IIdHelperService<JpaPid> {
}
if (!myStorageSettings.isDeleteEnabled()) {
JpaResourceLookup lookup = new JpaResourceLookup(theResourceType, theJpaPid.getId(), theDeletedAt);
JpaResourceLookup lookup = new JpaResourceLookup(
theResourceType, theJpaPid.getId(), theDeletedAt, theJpaPid.getPartitionablePartitionId());
String nextKey = theJpaPid.toString();
myMemoryCacheService.putAfterCommit(MemoryCacheService.CacheEnum.RESOURCE_LOOKUP, nextKey, lookup);
}
@ -744,8 +753,7 @@ public class IdHelperService implements IIdHelperService<JpaPid> {
@Nonnull
public List<JpaPid> getPidsOrThrowException(
@Nonnull RequestPartitionId theRequestPartitionId, List<IIdType> theIds) {
List<JpaPid> resourcePersistentIds = resolveResourcePersistentIdsWithCache(theRequestPartitionId, theIds);
return resourcePersistentIds;
return resolveResourcePersistentIdsWithCache(theRequestPartitionId, theIds);
}
@Override


@ -59,7 +59,7 @@ public class ExtendedHSearchSearchBuilder {
/**
* These params have complicated semantics, or are best resolved at the JPA layer for now.
*/
public static final Set<String> ourUnsafeSearchParmeters = Sets.newHashSet("_id", "_meta");
public static final Set<String> ourUnsafeSearchParmeters = Sets.newHashSet("_id", "_meta", "_count");
/**
* Determine if ExtendedHibernateSearchBuilder can support this parameter
@ -67,20 +67,22 @@ public class ExtendedHSearchSearchBuilder {
* @param theActiveParamsForResourceType active search parameters for the desired resource type
* @return whether this search parameter is unsupported by Hibernate Search and must be handled in the JPA layer
*/
public boolean supportsSearchParameter(String theParamName, ResourceSearchParams theActiveParamsForResourceType) {
public boolean illegalForHibernateSearch(String theParamName, ResourceSearchParams theActiveParamsForResourceType) {
if (theActiveParamsForResourceType == null) {
return false;
return true;
}
if (ourUnsafeSearchParmeters.contains(theParamName)) {
return false;
return true;
}
if (!theActiveParamsForResourceType.containsParamName(theParamName)) {
return false;
return true;
}
return true;
return false;
}
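Note: the rename from supportsSearchParameter to illegalForHibernateSearch inverts the predicate, so every call site flips its branch. A quick JUnit-style illustration; the myActiveParams fixture and the builder construction are assumed here, not taken from this PR:

// Illustrative only: myActiveParams is an assumed ResourceSearchParams fixture
// whose active parameters include "name".
@Test
void illegalForHibernateSearch_isTheNegationOfTheOldPredicate() {
    ExtendedHSearchSearchBuilder builder = new ExtendedHSearchSearchBuilder();
    assertTrue(builder.illegalForHibernateSearch("_count", myActiveParams));        // magic param, JPA-only
    assertTrue(builder.illegalForHibernateSearch("no-such-param", myActiveParams)); // not an active param
    assertFalse(builder.illegalForHibernateSearch("name", myActiveParams));         // active, indexable
}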
/**
* By default, do not use Hibernate Search.
* If a Search Parameter is supported by hibernate search,
* Are any of the queries supported by our indexing?
* -
* If not, do not use hibernate, because the results will
@ -88,12 +90,12 @@ public class ExtendedHSearchSearchBuilder {
*/
public boolean canUseHibernateSearch(
String theResourceType, SearchParameterMap myParams, ISearchParamRegistry theSearchParamRegistry) {
boolean canUseHibernate = true;
boolean canUseHibernate = false;
ResourceSearchParams resourceActiveSearchParams = theSearchParamRegistry.getActiveSearchParams(theResourceType);
for (String paramName : myParams.keySet()) {
// is this parameter supported?
if (!supportsSearchParameter(paramName, resourceActiveSearchParams)) {
if (illegalForHibernateSearch(paramName, resourceActiveSearchParams)) {
canUseHibernate = false;
} else {
// are the parameter values supported?
@ -218,7 +220,7 @@ public class ExtendedHSearchSearchBuilder {
ArrayList<String> paramNames = compileParamNames(searchParameterMap);
ResourceSearchParams activeSearchParams = searchParamRegistry.getActiveSearchParams(resourceType);
for (String nextParam : paramNames) {
if (!supportsSearchParameter(nextParam, activeSearchParams)) {
if (illegalForHibernateSearch(nextParam, activeSearchParams)) {
// ignore magic params handled in JPA
continue;
}


@ -414,6 +414,7 @@ public class HapiFhirJpaMigrationTasks extends BaseMigrationTasks<VersionEnum> {
version.onTable("HFJ_IDX_CMB_TOK_NU")
.addIndex("20240625.10", "IDX_IDXCMBTOKNU_HASHC")
.unique(false)
.online(true)
.withColumns("HASH_COMPLETE", "RES_ID", "PARTITION_ID");
version.onTable("HFJ_IDX_CMP_STRING_UNIQ")
.addColumn("20240625.20", "HASH_COMPLETE")
@ -470,10 +471,26 @@ public class HapiFhirJpaMigrationTasks extends BaseMigrationTasks<VersionEnum> {
}
}
version.onTable(Search.HFJ_SEARCH)
.modifyColumn("20240722.1", Search.SEARCH_UUID)
.nonNullable()
.withType(ColumnTypeEnum.STRING, 48);
{
// Add target resource partition id/date columns to resource link
Builder.BuilderWithTableName resourceLinkTable = version.onTable("HFJ_RES_LINK");
resourceLinkTable
.addColumn("20240718.10", "TARGET_RES_PARTITION_ID")
.nullable()
.type(ColumnTypeEnum.INT);
resourceLinkTable
.addColumn("20240718.20", "TARGET_RES_PARTITION_DATE")
.nullable()
.type(ColumnTypeEnum.DATE_ONLY);
}
{
version.onTable(Search.HFJ_SEARCH)
.modifyColumn("20240722.1", Search.SEARCH_UUID)
.nonNullable()
.withType(ColumnTypeEnum.STRING, 48);
}
{
final Builder.BuilderWithTableName hfjResource = version.onTable("HFJ_RESOURCE");


@ -20,18 +20,26 @@
package ca.uhn.fhir.jpa.model.cross;
import ca.uhn.fhir.jpa.model.dao.JpaPid;
import ca.uhn.fhir.jpa.model.entity.PartitionablePartitionId;
import java.util.Date;
public class JpaResourceLookup implements IResourceLookup<JpaPid> {
private final String myResourceType;
private final Long myResourcePid;
private final Date myDeletedAt;
private final PartitionablePartitionId myPartitionablePartitionId;
public JpaResourceLookup(String theResourceType, Long theResourcePid, Date theDeletedAt) {
public JpaResourceLookup(
String theResourceType,
Long theResourcePid,
Date theDeletedAt,
PartitionablePartitionId thePartitionablePartitionId) {
myResourceType = theResourceType;
myResourcePid = theResourcePid;
myDeletedAt = theDeletedAt;
myPartitionablePartitionId = thePartitionablePartitionId;
}
@Override
@ -46,6 +54,9 @@ public class JpaResourceLookup implements IResourceLookup<JpaPid> {
@Override
public JpaPid getPersistentId() {
return JpaPid.fromId(myResourcePid);
JpaPid jpaPid = JpaPid.fromId(myResourcePid);
jpaPid.setPartitionablePartitionId(myPartitionablePartitionId);
return jpaPid;
}
}
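Note: with this change the JpaPid returned by getPersistentId() carries the lookup's partition. An illustrative check using the new constructor together with the PartitionablePartitionId.with(...) factory added elsewhere in this PR (values are made up):

// Illustrative values; a null deletedAt means the resource is not deleted.
JpaResourceLookup lookup = new JpaResourceLookup(
    "Patient", 42L, null, PartitionablePartitionId.with(1, LocalDate.of(2024, 7, 1)));
JpaPid pid = lookup.getPersistentId();
// pid.getPartitionablePartitionId() now returns the partition passed to the constructor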


@ -475,6 +475,9 @@ public class TerminologyUploaderProvider extends BaseJpaProvider {
}
private static String csvEscape(String theValue) {
if (theValue == null) {
return "";
}
return '"' + theValue.replace("\"", "\"\"").replace("\n", "\\n").replace("\r", "") + '"';
}
}
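Note: for clarity, the escaping above behaves as follows. This is a self-contained demo that duplicates csvEscape; the expected outputs follow directly from the implementation:

public class CsvEscapeDemo {
    public static void main(String[] args) {
        System.out.println(csvEscape(null));         // ->              (empty string, unquoted)
        System.out.println(csvEscape("plain"));      // -> "plain"
        System.out.println(csvEscape("say \"hi\"")); // -> "say ""hi""" (quotes doubled)
        System.out.println(csvEscape("a\r\nb"));     // -> "a\nb"       (CR dropped, LF becomes literal \n)
    }

    private static String csvEscape(String theValue) {
        if (theValue == null) {
            return "";
        }
        return '"' + theValue.replace("\"", "\"\"").replace("\n", "\\n").replace("\r", "") + '"';
    }
}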


@ -1,76 +0,0 @@
package ca.uhn.fhir.jpa.batch2;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.entity.PartitionEntity;
import ca.uhn.fhir.jpa.partition.IPartitionLookupSvc;
import ca.uhn.fhir.jpa.partition.IRequestPartitionHelperSvc;
import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
import ca.uhn.fhir.rest.server.provider.ProviderConstants;
import org.assertj.core.api.Assertions;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.ArgumentMatchers;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import java.util.ArrayList;
import java.util.List;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@ExtendWith(MockitoExtension.class)
public class JpaJobPartitionProviderTest {
@Mock
private IRequestPartitionHelperSvc myRequestPartitionHelperSvc;
@Mock
private IPartitionLookupSvc myPartitionLookupSvc;
@InjectMocks
private JpaJobPartitionProvider myJobPartitionProvider;
@Test
public void getPartitions_requestSpecificPartition_returnsPartition() {
// setup
SystemRequestDetails requestDetails = new SystemRequestDetails();
String operation = ProviderConstants.OPERATION_EXPORT;
RequestPartitionId partitionId = RequestPartitionId.fromPartitionId(1);
when(myRequestPartitionHelperSvc.determineReadPartitionForRequestForServerOperation(ArgumentMatchers.eq(requestDetails), ArgumentMatchers.eq(operation))).thenReturn(partitionId);
// test
List <RequestPartitionId> partitionIds = myJobPartitionProvider.getPartitions(requestDetails, operation);
// verify
Assertions.assertThat(partitionIds).hasSize(1);
Assertions.assertThat(partitionIds).containsExactlyInAnyOrder(partitionId);
}
@Test
public void getPartitions_requestAllPartitions_returnsListOfAllSpecificPartitions() {
// setup
SystemRequestDetails requestDetails = new SystemRequestDetails();
String operation = ProviderConstants.OPERATION_EXPORT;
when(myRequestPartitionHelperSvc.determineReadPartitionForRequestForServerOperation(ArgumentMatchers.eq(requestDetails), ArgumentMatchers.eq(operation)))
.thenReturn( RequestPartitionId.allPartitions());
List<RequestPartitionId> partitionIds = List.of(RequestPartitionId.fromPartitionIds(1), RequestPartitionId.fromPartitionIds(2));
List<PartitionEntity> partitionEntities = new ArrayList<>();
partitionIds.forEach(partitionId -> {
PartitionEntity entity = mock(PartitionEntity.class);
when(entity.toRequestPartitionId()).thenReturn(partitionId);
partitionEntities.add(entity);
});
when(myPartitionLookupSvc.listPartitions()).thenReturn(partitionEntities);
List<RequestPartitionId> expectedPartitionIds = new ArrayList<>(partitionIds);
expectedPartitionIds.add(RequestPartitionId.defaultPartition());
// test
List<RequestPartitionId> actualPartitionIds = myJobPartitionProvider.getPartitions(requestDetails, operation);
// verify
Assertions.assertThat(actualPartitionIds).hasSize(expectedPartitionIds.size());
Assertions.assertThat(actualPartitionIds).containsExactlyInAnyOrder(expectedPartitionIds.toArray(new RequestPartitionId[0]));
}
}


@ -23,6 +23,7 @@ import ca.uhn.fhir.jpa.model.dao.JpaPid;
import ca.uhn.fhir.jpa.model.entity.NormalizedQuantitySearchLevel;
import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.model.search.StorageProcessingMessage;
import ca.uhn.fhir.jpa.rp.r4.PatientResourceProvider;
import ca.uhn.fhir.jpa.search.BaseSourceSearchParameterTestCases;
import ca.uhn.fhir.jpa.search.CompositeSearchParameterTestCases;
import ca.uhn.fhir.jpa.search.QuantitySearchParameterTestCases;
@ -73,6 +74,7 @@ import org.hl7.fhir.r4.model.DateTimeType;
import org.hl7.fhir.r4.model.DecimalType;
import org.hl7.fhir.r4.model.DiagnosticReport;
import org.hl7.fhir.r4.model.Encounter;
import org.hl7.fhir.r4.model.IdType;
import org.hl7.fhir.r4.model.Identifier;
import org.hl7.fhir.r4.model.Meta;
import org.hl7.fhir.r4.model.Narrative;
@ -101,6 +103,7 @@ import org.mockito.Mockito;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.ContextHierarchy;
@ -118,6 +121,7 @@ import java.io.IOException;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.HashSet;
import java.util.List;
@ -126,11 +130,13 @@ import java.util.stream.Collectors;
import static ca.uhn.fhir.jpa.model.util.UcumServiceUtil.UCUM_CODESYSTEM_URL;
import static ca.uhn.fhir.rest.api.Constants.CHARSET_UTF8;
import static ca.uhn.fhir.rest.api.Constants.HEADER_CACHE_CONTROL;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.when;
@ExtendWith(SpringExtension.class)
@ExtendWith(MockitoExtension.class)
@ -229,6 +235,8 @@ public class FhirResourceDaoR4SearchWithElasticSearchIT extends BaseJpaTest impl
@Mock
private IHSearchEventListener mySearchEventListener;
@Autowired
private PatientResourceProvider myPatientRpR4;
@Autowired
private ElasticsearchSvcImpl myElasticsearchSvc;
@ -954,6 +962,24 @@ public class FhirResourceDaoR4SearchWithElasticSearchIT extends BaseJpaTest impl
myStorageSettings.setStoreResourceInHSearchIndex(defaultConfig.isStoreResourceInHSearchIndex());
}
@Test
public void testEverythingType() {
Patient p = new Patient();
p.setId("my-patient");
myPatientDao.update(p);
IBundleProvider iBundleProvider = myPatientRpR4.patientTypeEverything(new MockHttpServletRequest(), null, null, null, null, null, null, null, null, null, null, mySrd);
assertEquals(iBundleProvider.getAllResources().size(), 1);
}
@Test
public void testEverythingInstance() {
Patient p = new Patient();
p.setId("my-patient");
myPatientDao.update(p);
IBundleProvider iBundleProvider = myPatientRpR4.patientInstanceEverything(new MockHttpServletRequest(), new IdType("Patient/my-patient"), null, null, null, null, null, null, null, null, null, mySrd);
assertEquals(iBundleProvider.getAllResources().size(), 1);
}
@Test
public void testExpandWithIsAInExternalValueSet() {
createExternalCsAndLocalVs();


@ -20,6 +20,7 @@
package ca.uhn.fhir.jpa.mdm.svc;
import ca.uhn.fhir.batch2.api.IJobCoordinator;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.interceptor.api.HookParams;
@ -360,9 +361,9 @@ public class MdmControllerSvcImpl implements IMdmControllerSvc {
if (hasBatchSize) {
params.setBatchSize(theBatchSize.getValue().intValue());
}
params.setRequestPartitionId(RequestPartitionId.allPartitions());
theUrls.forEach(params::addUrl);
RequestPartitionId partitionId = RequestPartitionId.allPartitions();
theUrls.forEach(
url -> params.addPartitionedUrl(new PartitionedUrl().setUrl(url).setRequestPartitionId(partitionId)));
JobInstanceStartRequest request = new JobInstanceStartRequest();
request.setParameters(params);


@ -19,6 +19,7 @@
*/
package ca.uhn.fhir.jpa.model.dao;
import ca.uhn.fhir.jpa.model.entity.PartitionablePartitionId;
import ca.uhn.fhir.rest.api.server.storage.BaseResourcePersistentId;
import java.util.ArrayList;
@ -34,6 +35,7 @@ import java.util.Set;
*/
public class JpaPid extends BaseResourcePersistentId<Long> {
private final Long myId;
private PartitionablePartitionId myPartitionablePartitionId;
private JpaPid(Long theId) {
super(null);
@ -55,6 +57,15 @@ public class JpaPid extends BaseResourcePersistentId<Long> {
myId = theId;
}
public PartitionablePartitionId getPartitionablePartitionId() {
return myPartitionablePartitionId;
}
public JpaPid setPartitionablePartitionId(PartitionablePartitionId thePartitionablePartitionId) {
myPartitionablePartitionId = thePartitionablePartitionId;
return this;
}
public static List<Long> toLongList(Collection<JpaPid> thePids) {
List<Long> retVal = new ArrayList<>(thePids.size());
for (JpaPid next : thePids) {


@ -25,6 +25,7 @@ import jakarta.persistence.Embedded;
import jakarta.persistence.MappedSuperclass;
import java.io.Serializable;
import java.time.LocalDate;
/**
* This is the base class for entities with partitioning that does NOT include Hibernate Envers logging.
@ -44,6 +45,13 @@ public abstract class BasePartitionable implements Serializable {
@Column(name = PartitionablePartitionId.PARTITION_ID, insertable = false, updatable = false, nullable = true)
private Integer myPartitionIdValue;
/**
* This is here to support queries only; do not set this field directly
*/
@SuppressWarnings("unused")
@Column(name = PartitionablePartitionId.PARTITION_DATE, insertable = false, updatable = false, nullable = true)
private LocalDate myPartitionDateValue;
@Nullable
public PartitionablePartitionId getPartitionId() {
return myPartitionId;
@ -57,6 +65,7 @@ public abstract class BasePartitionable implements Serializable {
public String toString() {
return "BasePartitionable{" + "myPartitionId="
+ myPartitionId + ", myPartitionIdValue="
+ myPartitionIdValue + '}';
+ myPartitionIdValue + ", myPartitionDateValue="
+ myPartitionDateValue + '}';
}
}


@ -34,6 +34,7 @@ import java.time.LocalDate;
public class PartitionablePartitionId implements Cloneable {
static final String PARTITION_ID = "PARTITION_ID";
static final String PARTITION_DATE = "PARTITION_DATE";
@Column(name = PARTITION_ID, nullable = true, insertable = true, updatable = false)
private Integer myPartitionId;
@ -132,4 +133,9 @@ public class PartitionablePartitionId implements Cloneable {
}
return new PartitionablePartitionId(partitionId, theRequestPartitionId.getPartitionDate());
}
public static PartitionablePartitionId with(
@Nullable Integer thePartitionId, @Nullable LocalDate thePartitionDate) {
return new PartitionablePartitionId(thePartitionId, thePartitionDate);
}
}


@ -19,8 +19,9 @@
*/
package ca.uhn.fhir.jpa.model.entity;
import jakarta.annotation.Nullable;
import jakarta.persistence.AttributeOverride;
import jakarta.persistence.Column;
import jakarta.persistence.Embedded;
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.ForeignKey;
@ -119,6 +120,11 @@ public class ResourceLink extends BaseResourceIndex {
@Transient
private transient String myTargetResourceId;
@Embedded
@AttributeOverride(name = "myPartitionId", column = @Column(name = "TARGET_RES_PARTITION_ID"))
@AttributeOverride(name = "myPartitionDate", column = @Column(name = "TARGET_RES_PARTITION_DATE"))
private PartitionablePartitionId myTargetResourcePartitionId;
/**
* Constructor
*/
@ -188,6 +194,7 @@ public class ResourceLink extends BaseResourceIndex {
myTargetResourceType = source.getTargetResourceType();
myTargetResourceVersion = source.getTargetResourceVersion();
myTargetResourceUrl = source.getTargetResourceUrl();
myTargetResourcePartitionId = source.getTargetResourcePartitionId();
}
public String getSourcePath() {
@ -270,6 +277,15 @@ public class ResourceLink extends BaseResourceIndex {
myId = theId;
}
public PartitionablePartitionId getTargetResourcePartitionId() {
return myTargetResourcePartitionId;
}
public ResourceLink setTargetResourcePartitionId(PartitionablePartitionId theTargetResourcePartitionId) {
myTargetResourcePartitionId = theTargetResourcePartitionId;
return this;
}
@Override
public void clearHashes() {
// nothing right now
@ -363,23 +379,113 @@ public class ResourceLink extends BaseResourceIndex {
return retVal;
}
/**
* @param theTargetResourceVersion This should only be populated if the reference actually had a version
*/
public static ResourceLink forLocalReference(
String theSourcePath,
ResourceTable theSourceResource,
String theTargetResourceType,
Long theTargetResourcePid,
String theTargetResourceId,
Date theUpdated,
@Nullable Long theTargetResourceVersion) {
ResourceLinkForLocalReferenceParams theResourceLinkForLocalReferenceParams) {
ResourceLink retVal = new ResourceLink();
retVal.setSourcePath(theSourcePath);
retVal.setSourceResource(theSourceResource);
retVal.setTargetResource(theTargetResourceType, theTargetResourcePid, theTargetResourceId);
retVal.setTargetResourceVersion(theTargetResourceVersion);
retVal.setUpdated(theUpdated);
retVal.setSourcePath(theResourceLinkForLocalReferenceParams.getSourcePath());
retVal.setSourceResource(theResourceLinkForLocalReferenceParams.getSourceResource());
retVal.setTargetResource(
theResourceLinkForLocalReferenceParams.getTargetResourceType(),
theResourceLinkForLocalReferenceParams.getTargetResourcePid(),
theResourceLinkForLocalReferenceParams.getTargetResourceId());
retVal.setTargetResourcePartitionId(
theResourceLinkForLocalReferenceParams.getTargetResourcePartitionablePartitionId());
retVal.setTargetResourceVersion(theResourceLinkForLocalReferenceParams.getTargetResourceVersion());
retVal.setUpdated(theResourceLinkForLocalReferenceParams.getUpdated());
return retVal;
}
public static class ResourceLinkForLocalReferenceParams {
private String mySourcePath;
private ResourceTable mySourceResource;
private String myTargetResourceType;
private Long myTargetResourcePid;
private String myTargetResourceId;
private Date myUpdated;
private Long myTargetResourceVersion;
private PartitionablePartitionId myTargetResourcePartitionablePartitionId;
public static ResourceLinkForLocalReferenceParams instance() {
return new ResourceLinkForLocalReferenceParams();
}
public String getSourcePath() {
return mySourcePath;
}
public ResourceLinkForLocalReferenceParams setSourcePath(String theSourcePath) {
mySourcePath = theSourcePath;
return this;
}
public ResourceTable getSourceResource() {
return mySourceResource;
}
public ResourceLinkForLocalReferenceParams setSourceResource(ResourceTable theSourceResource) {
mySourceResource = theSourceResource;
return this;
}
public String getTargetResourceType() {
return myTargetResourceType;
}
public ResourceLinkForLocalReferenceParams setTargetResourceType(String theTargetResourceType) {
myTargetResourceType = theTargetResourceType;
return this;
}
public Long getTargetResourcePid() {
return myTargetResourcePid;
}
public ResourceLinkForLocalReferenceParams setTargetResourcePid(Long theTargetResourcePid) {
myTargetResourcePid = theTargetResourcePid;
return this;
}
public String getTargetResourceId() {
return myTargetResourceId;
}
public ResourceLinkForLocalReferenceParams setTargetResourceId(String theTargetResourceId) {
myTargetResourceId = theTargetResourceId;
return this;
}
public Date getUpdated() {
return myUpdated;
}
public ResourceLinkForLocalReferenceParams setUpdated(Date theUpdated) {
myUpdated = theUpdated;
return this;
}
public Long getTargetResourceVersion() {
return myTargetResourceVersion;
}
/**
* @param theTargetResourceVersion This should only be populated if the reference actually had a version
*/
public ResourceLinkForLocalReferenceParams setTargetResourceVersion(Long theTargetResourceVersion) {
myTargetResourceVersion = theTargetResourceVersion;
return this;
}
public PartitionablePartitionId getTargetResourcePartitionablePartitionId() {
return myTargetResourcePartitionablePartitionId;
}
public ResourceLinkForLocalReferenceParams setTargetResourcePartitionablePartitionId(
PartitionablePartitionId theTargetResourcePartitionablePartitionId) {
myTargetResourcePartitionablePartitionId = theTargetResourcePartitionablePartitionId;
return this;
}
}
}


@ -113,6 +113,24 @@ public class ReadPartitionIdRequestDetails extends PartitionIdRequestDetails {
null, RestOperationTypeEnum.EXTENDED_OPERATION_SERVER, null, null, null, null, theOperationName);
}
/**
* @since 7.4.0
*/
public static ReadPartitionIdRequestDetails forDelete(@Nonnull String theResourceType, @Nonnull IIdType theId) {
RestOperationTypeEnum op = RestOperationTypeEnum.DELETE;
return new ReadPartitionIdRequestDetails(
theResourceType, op, theId.withResourceType(theResourceType), null, null, null, null);
}
/**
* @since 7.4.0
*/
public static ReadPartitionIdRequestDetails forPatch(String theResourceType, IIdType theId) {
RestOperationTypeEnum op = RestOperationTypeEnum.PATCH;
return new ReadPartitionIdRequestDetails(
theResourceType, op, theId.withResourceType(theResourceType), null, null, null, null);
}
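Note: a minimal usage sketch for the two new 7.4.0 factories; the resource type and id are illustrative, and IdType here is org.hl7.fhir.r4.model.IdType, which implements IIdType:

// Illustrative resource type and id.
ReadPartitionIdRequestDetails deleteDetails =
        ReadPartitionIdRequestDetails.forDelete("Patient", new IdType("Patient/123"));
ReadPartitionIdRequestDetails patchDetails =
        ReadPartitionIdRequestDetails.forPatch("Patient", new IdType("Patient/123"));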
public static ReadPartitionIdRequestDetails forRead(
String theResourceType, @Nonnull IIdType theId, boolean theIsVread) {
RestOperationTypeEnum op = theIsVread ? RestOperationTypeEnum.VREAD : RestOperationTypeEnum.READ;


@ -95,7 +95,7 @@ public class MatchUrlService {
}
if (Constants.PARAM_LASTUPDATED.equals(nextParamName)) {
if (paramList != null && paramList.size() > 0) {
if (!paramList.isEmpty()) {
if (paramList.size() > 2) {
throw new InvalidRequestException(Msg.code(484) + "Failed to parse match URL[" + theMatchUrl
+ "] - Can not have more than 2 " + Constants.PARAM_LASTUPDATED
@ -111,9 +111,7 @@ public class MatchUrlService {
myFhirContext, RestSearchParameterTypeEnum.HAS, nextParamName, paramList);
paramMap.add(nextParamName, param);
} else if (Constants.PARAM_COUNT.equals(nextParamName)) {
if (paramList != null
&& paramList.size() > 0
&& paramList.get(0).size() > 0) {
if (!paramList.isEmpty() && !paramList.get(0).isEmpty()) {
String intString = paramList.get(0).get(0);
try {
paramMap.setCount(Integer.parseInt(intString));
@ -123,16 +121,14 @@ public class MatchUrlService {
}
}
} else if (Constants.PARAM_SEARCH_TOTAL_MODE.equals(nextParamName)) {
if (paramList != null
&& !paramList.isEmpty()
&& !paramList.get(0).isEmpty()) {
if (!paramList.isEmpty() && !paramList.get(0).isEmpty()) {
String totalModeEnumStr = paramList.get(0).get(0);
SearchTotalModeEnum searchTotalMode = SearchTotalModeEnum.fromCode(totalModeEnumStr);
if (searchTotalMode == null) {
// We had an oops here supporting the UPPER CASE enum instead of the FHIR code for _total.
// Keep supporting it in case someone is using it.
try {
paramMap.setSearchTotalMode(SearchTotalModeEnum.valueOf(totalModeEnumStr));
searchTotalMode = SearchTotalModeEnum.valueOf(totalModeEnumStr);
} catch (IllegalArgumentException e) {
throw new InvalidRequestException(Msg.code(2078) + "Invalid "
+ Constants.PARAM_SEARCH_TOTAL_MODE + " value: " + totalModeEnumStr);
@ -141,9 +137,7 @@ public class MatchUrlService {
paramMap.setSearchTotalMode(searchTotalMode);
}
} else if (Constants.PARAM_OFFSET.equals(nextParamName)) {
if (paramList != null
&& paramList.size() > 0
&& paramList.get(0).size() > 0) {
if (!paramList.isEmpty() && !paramList.get(0).isEmpty()) {
String intString = paramList.get(0).get(0);
try {
paramMap.setOffset(Integer.parseInt(intString));
@ -238,40 +232,27 @@ public class MatchUrlService {
return getResourceSearch(theUrl, null);
}
public abstract static class Flag {
/**
* Constructor
*/
Flag() {
// nothing
}
abstract void process(
String theParamName, List<QualifiedParamList> theValues, SearchParameterMap theMapToPopulate);
public interface Flag {
void process(String theParamName, List<QualifiedParamList> theValues, SearchParameterMap theMapToPopulate);
}
/**
* Indicates that the parser should process _include and _revinclude (by default these are not handled)
*/
public static Flag processIncludes() {
return new Flag() {
@Override
void process(String theParamName, List<QualifiedParamList> theValues, SearchParameterMap theMapToPopulate) {
if (Constants.PARAM_INCLUDE.equals(theParamName)) {
for (QualifiedParamList nextQualifiedList : theValues) {
for (String nextValue : nextQualifiedList) {
theMapToPopulate.addInclude(new Include(
nextValue, ParameterUtil.isIncludeIterate(nextQualifiedList.getQualifier())));
}
return (theParamName, theValues, theMapToPopulate) -> {
if (Constants.PARAM_INCLUDE.equals(theParamName)) {
for (QualifiedParamList nextQualifiedList : theValues) {
for (String nextValue : nextQualifiedList) {
theMapToPopulate.addInclude(new Include(
nextValue, ParameterUtil.isIncludeIterate(nextQualifiedList.getQualifier())));
}
} else if (Constants.PARAM_REVINCLUDE.equals(theParamName)) {
for (QualifiedParamList nextQualifiedList : theValues) {
for (String nextValue : nextQualifiedList) {
theMapToPopulate.addRevInclude(new Include(
nextValue, ParameterUtil.isIncludeIterate(nextQualifiedList.getQualifier())));
}
}
} else if (Constants.PARAM_REVINCLUDE.equals(theParamName)) {
for (QualifiedParamList nextQualifiedList : theValues) {
for (String nextValue : nextQualifiedList) {
theMapToPopulate.addRevInclude(new Include(
nextValue, ParameterUtil.isIncludeIterate(nextQualifiedList.getQualifier())));
}
}
}
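Note: a usage sketch for this flag, assuming MatchUrlService exposes translateMatchUrl(String, RuntimeResourceDefinition, Flag...) as the call site; the URL and context wiring below are illustrative:

// Without processIncludes(), _include/_revinclude in the match URL are ignored.
SearchParameterMap map = myMatchUrlService.translateMatchUrl(
        "Patient?name=smith&_include=Patient:organization",
        myFhirContext.getResourceDefinition("Patient"),
        MatchUrlService.processIncludes());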


@ -37,6 +37,7 @@ import ca.uhn.fhir.jpa.model.entity.ResourceIndexedComboStringUnique;
import ca.uhn.fhir.jpa.model.entity.ResourceIndexedComboTokenNonUnique;
import ca.uhn.fhir.jpa.model.entity.ResourceIndexedSearchParamString;
import ca.uhn.fhir.jpa.model.entity.ResourceLink;
import ca.uhn.fhir.jpa.model.entity.ResourceLink.ResourceLinkForLocalReferenceParams;
import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.model.entity.SearchParamPresentEntity;
import ca.uhn.fhir.jpa.model.entity.StorageSettings;
@ -71,6 +72,8 @@ import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import static ca.uhn.fhir.jpa.model.config.PartitionSettings.CrossPartitionReferenceMode.ALLOWED_UNQUALIFIED;
import static ca.uhn.fhir.jpa.model.entity.ResourceLink.forLocalReference;
import static org.apache.commons.lang3.StringUtils.isBlank;
import static org.apache.commons.lang3.StringUtils.isNotBlank;
@ -105,26 +108,6 @@ public class SearchParamExtractorService {
mySearchParamExtractor = theSearchParamExtractor;
}
public void extractFromResource(
RequestPartitionId theRequestPartitionId,
RequestDetails theRequestDetails,
ResourceIndexedSearchParams theParams,
ResourceTable theEntity,
IBaseResource theResource,
TransactionDetails theTransactionDetails,
boolean theFailOnInvalidReference) {
extractFromResource(
theRequestPartitionId,
theRequestDetails,
theParams,
ResourceIndexedSearchParams.withSets(),
theEntity,
theResource,
theTransactionDetails,
theFailOnInvalidReference,
ISearchParamExtractor.ALL_PARAMS);
}
/**
* This method is responsible for scanning a resource for all of the search parameter instances.
* I.e. for all search parameters defined for
@ -702,14 +685,18 @@ public class SearchParamExtractorService {
* need to resolve it again
*/
myResourceLinkResolver.validateTypeOrThrowException(type);
resourceLink = ResourceLink.forLocalReference(
thePathAndRef.getPath(),
theEntity,
typeString,
resolvedTargetId.getId(),
targetId,
transactionDate,
targetVersionId);
ResourceLinkForLocalReferenceParams params = ResourceLinkForLocalReferenceParams.instance()
.setSourcePath(thePathAndRef.getPath())
.setSourceResource(theEntity)
.setTargetResourceType(typeString)
.setTargetResourcePid(resolvedTargetId.getId())
.setTargetResourceId(targetId)
.setUpdated(transactionDate)
.setTargetResourceVersion(targetVersionId)
.setTargetResourcePartitionablePartitionId(resolvedTargetId.getPartitionablePartitionId());
resourceLink = forLocalReference(params);
} else if (theFailOnInvalidReference) {
@ -748,6 +735,7 @@ public class SearchParamExtractorService {
} else {
// Cache the outcome in the current transaction in case there are more references
JpaPid persistentId = JpaPid.fromId(resourceLink.getTargetResourcePid());
persistentId.setPartitionablePartitionId(resourceLink.getTargetResourcePartitionId());
theTransactionDetails.addResolvedResourceId(referenceElement, persistentId);
}
@ -757,11 +745,15 @@ public class SearchParamExtractorService {
* Just assume the reference is valid. This is used for in-memory matching since there
* is no expectation of a database in this situation
*/
ResourceTable target;
target = new ResourceTable();
target.setResourceType(typeString);
resourceLink = ResourceLink.forLocalReference(
thePathAndRef.getPath(), theEntity, typeString, null, targetId, transactionDate, targetVersionId);
ResourceLinkForLocalReferenceParams params = ResourceLinkForLocalReferenceParams.instance()
.setSourcePath(thePathAndRef.getPath())
.setSourceResource(theEntity)
.setTargetResourceType(typeString)
.setTargetResourceId(targetId)
.setUpdated(transactionDate)
.setTargetResourceVersion(targetVersionId);
resourceLink = forLocalReference(params);
}
theNewParams.myLinks.add(resourceLink);
@ -912,19 +904,24 @@ public class SearchParamExtractorService {
RequestDetails theRequest,
TransactionDetails theTransactionDetails) {
JpaPid resolvedResourceId = (JpaPid) theTransactionDetails.getResolvedResourceId(theNextId);
if (resolvedResourceId != null) {
String targetResourceType = theNextId.getResourceType();
Long targetResourcePid = resolvedResourceId.getId();
String targetResourceIdPart = theNextId.getIdPart();
Long targetVersion = theNextId.getVersionIdPartAsLong();
return ResourceLink.forLocalReference(
thePathAndRef.getPath(),
theEntity,
targetResourceType,
targetResourcePid,
targetResourceIdPart,
theUpdateTime,
targetVersion);
ResourceLinkForLocalReferenceParams params = ResourceLinkForLocalReferenceParams.instance()
.setSourcePath(thePathAndRef.getPath())
.setSourceResource(theEntity)
.setTargetResourceType(targetResourceType)
.setTargetResourcePid(targetResourcePid)
.setTargetResourceId(targetResourceIdPart)
.setUpdated(theUpdateTime)
.setTargetResourceVersion(targetVersion)
.setTargetResourcePartitionablePartitionId(resolvedResourceId.getPartitionablePartitionId());
return ResourceLink.forLocalReference(params);
}
/*
@ -936,8 +933,7 @@ public class SearchParamExtractorService {
IResourceLookup<JpaPid> targetResource;
if (myPartitionSettings.isPartitioningEnabled()) {
if (myPartitionSettings.getAllowReferencesAcrossPartitions()
== PartitionSettings.CrossPartitionReferenceMode.ALLOWED_UNQUALIFIED) {
if (myPartitionSettings.getAllowReferencesAcrossPartitions() == ALLOWED_UNQUALIFIED) {
// Interceptor: Pointcut.JPA_CROSS_PARTITION_REFERENCE_DETECTED
if (CompositeInterceptorBroadcaster.hasHooks(
@ -981,21 +977,25 @@ public class SearchParamExtractorService {
Long targetResourcePid = targetResource.getPersistentId().getId();
String targetResourceIdPart = theNextId.getIdPart();
Long targetVersion = theNextId.getVersionIdPartAsLong();
return ResourceLink.forLocalReference(
thePathAndRef.getPath(),
theEntity,
targetResourceType,
targetResourcePid,
targetResourceIdPart,
theUpdateTime,
targetVersion);
ResourceLinkForLocalReferenceParams params = ResourceLinkForLocalReferenceParams.instance()
.setSourcePath(thePathAndRef.getPath())
.setSourceResource(theEntity)
.setTargetResourceType(targetResourceType)
.setTargetResourcePid(targetResourcePid)
.setTargetResourceId(targetResourceIdPart)
.setUpdated(theUpdateTime)
.setTargetResourceVersion(targetVersion)
.setTargetResourcePartitionablePartitionId(
targetResource.getPersistentId().getPartitionablePartitionId());
return forLocalReference(params);
}
private RequestPartitionId determineResolverPartitionId(@Nonnull RequestPartitionId theRequestPartitionId) {
RequestPartitionId targetRequestPartitionId = theRequestPartitionId;
if (myPartitionSettings.isPartitioningEnabled()
&& myPartitionSettings.getAllowReferencesAcrossPartitions()
== PartitionSettings.CrossPartitionReferenceMode.ALLOWED_UNQUALIFIED) {
&& myPartitionSettings.getAllowReferencesAcrossPartitions() == ALLOWED_UNQUALIFIED) {
targetRequestPartitionId = RequestPartitionId.allPartitions();
}
return targetRequestPartitionId;

View File

@ -37,7 +37,7 @@ public class ResourceIndexedSearchParamsTest {
@Test
public void matchResourceLinksStringCompareToLong() {
ResourceLink link = ResourceLink.forLocalReference("organization", mySource, "Organization", 123L, LONG_ID, new Date(), null);
ResourceLink link = getResourceLinkForLocalReference(LONG_ID);
myParams.getResourceLinks().add(link);
ReferenceParam referenceParam = getReferenceParam(STRING_ID);
@ -47,7 +47,7 @@ public class ResourceIndexedSearchParamsTest {
@Test
public void matchResourceLinksStringCompareToString() {
ResourceLink link = ResourceLink.forLocalReference("organization", mySource, "Organization", 123L, STRING_ID, new Date(), null);
ResourceLink link = getResourceLinkForLocalReference(STRING_ID);
myParams.getResourceLinks().add(link);
ReferenceParam referenceParam = getReferenceParam(STRING_ID);
@ -57,7 +57,7 @@ public class ResourceIndexedSearchParamsTest {
@Test
public void matchResourceLinksLongCompareToString() {
ResourceLink link = ResourceLink.forLocalReference("organization", mySource, "Organization", 123L, STRING_ID, new Date(), null);
ResourceLink link = getResourceLinkForLocalReference(STRING_ID);
myParams.getResourceLinks().add(link);
ReferenceParam referenceParam = getReferenceParam(LONG_ID);
@ -67,7 +67,7 @@ public class ResourceIndexedSearchParamsTest {
@Test
public void matchResourceLinksLongCompareToLong() {
ResourceLink link = ResourceLink.forLocalReference("organization", mySource, "Organization", 123L, LONG_ID, new Date(), null);
ResourceLink link = getResourceLinkForLocalReference(LONG_ID);
myParams.getResourceLinks().add(link);
ReferenceParam referenceParam = getReferenceParam(LONG_ID);
@ -75,6 +75,21 @@ public class ResourceIndexedSearchParamsTest {
assertTrue(result);
}
private ResourceLink getResourceLinkForLocalReference(String theTargetResourceId){
ResourceLink.ResourceLinkForLocalReferenceParams params = ResourceLink.ResourceLinkForLocalReferenceParams
.instance()
.setSourcePath("organization")
.setSourceResource(mySource)
.setTargetResourceType("Organization")
.setTargetResourcePid(123L)
.setTargetResourceId(theTargetResourceId)
.setUpdated(new Date());
return ResourceLink.forLocalReference(params);
}
private ReferenceParam getReferenceParam(String theId) {
ReferenceParam retVal = new ReferenceParam();
retVal.setValue(theId);


@ -0,0 +1,38 @@
package ca.uhn.fhir.jpa.searchparam.registry;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.RuntimeSearchParam;
import ca.uhn.fhir.rest.server.util.FhirContextSearchParamRegistry;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.assertj.core.api.Assertions.assertThat;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_ID;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_LAST_UPDATED;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_PROFILE;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_SECURITY;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_TAG;
class FhirContextSearchParamRegistryTest {
private static final FhirContext ourFhirContext = FhirContext.forR4();
FhirContextSearchParamRegistry mySearchParamRegistry = new FhirContextSearchParamRegistry(ourFhirContext);
@ParameterizedTest
@CsvSource({
SP_RES_ID + ", Resource.id",
SP_RES_LAST_UPDATED + ", Resource.meta.lastUpdated",
SP_RES_TAG + ", Resource.meta.tag",
SP_RES_PROFILE + ", Resource.meta.profile",
SP_RES_SECURITY + ", Resource.meta.security"
})
void testResourceLevelSearchParamsAreRegistered(String theSearchParamName, String theSearchParamPath) {
RuntimeSearchParam sp = mySearchParamRegistry.getActiveSearchParam("Patient", theSearchParamName);
assertThat(sp)
.as("path is null for search parameter: '%s'", theSearchParamName)
.isNotNull().extracting("path").isEqualTo(theSearchParamPath);
}
}


@ -151,15 +151,17 @@ public class SubscriptionMatchingSubscriber implements MessageHandler {
*/
private boolean processSubscription(
ResourceModifiedMessage theMsg, IIdType theResourceId, ActiveSubscription theActiveSubscription) {
// skip if the partitions don't match
CanonicalSubscription subscription = theActiveSubscription.getSubscription();
if (subscription != null
&& theMsg.getPartitionId() != null
&& theMsg.getPartitionId().hasPartitionIds()
&& !subscription.getCrossPartitionEnabled()
&& !subscription.isCrossPartitionEnabled()
&& !theMsg.getPartitionId().hasPartitionId(subscription.getRequestPartitionId())) {
return false;
}
String nextSubscriptionId = theActiveSubscription.getId();
if (isNotBlank(theMsg.getSubscriptionId())) {


@ -241,7 +241,7 @@ public class SubscriptionValidatingInterceptor {
RequestPartitionId theRequestPartitionId,
Pointcut thePointcut) {
// If the subscription has the cross partition tag
if (SubscriptionUtil.isCrossPartition(theSubscription)
if (SubscriptionUtil.isDefinedAsCrossPartitionSubcription(theSubscription)
&& !(theRequestDetails instanceof SystemRequestDetails)) {
if (!mySubscriptionSettings.isCrossPartitionSubscriptionEnabled()) {
throw new UnprocessableEntityException(


@ -151,7 +151,7 @@ public class SubscriptionTriggeringSvcImpl implements ISubscriptionTriggeringSvc
if (theSubscriptionId != null) {
IFhirResourceDao<?> subscriptionDao = myDaoRegistry.getSubscriptionDao();
IBaseResource subscription = subscriptionDao.read(theSubscriptionId, theRequestDetails);
if (mySubscriptionCanonicalizer.canonicalize(subscription).getCrossPartitionEnabled()) {
if (mySubscriptionCanonicalizer.canonicalize(subscription).isCrossPartitionEnabled()) {
requestPartitionId = RequestPartitionId.allPartitions();
} else {
// Otherwise, trust the partition passed in via tenant/interceptor.


@ -34,7 +34,7 @@ public class SubscriptionUtil {
RequestPartitionId requestPartitionId =
new PartitionablePartitionId(theSubscription.getRequestPartitionId(), null).toPartitionId();
if (theSubscription.getCrossPartitionEnabled()) {
if (theSubscription.isCrossPartitionEnabled()) {
requestPartitionId = RequestPartitionId.allPartitions();
}


@ -34,7 +34,7 @@ import org.springframework.context.annotation.Lazy;
@Import(SubscriptionConfig.class)
public class SubscriptionTopicConfig {
@Bean
SubscriptionTopicMatchingSubscriber subscriptionTopicMatchingSubscriber(
public SubscriptionTopicMatchingSubscriber subscriptionTopicMatchingSubscriber(
FhirContext theFhirContext, MemoryCacheService memoryCacheService) {
switch (theFhirContext.getVersion().getVersion()) {
case R5:
@ -47,19 +47,19 @@ public class SubscriptionTopicConfig {
@Bean
@Lazy
SubscriptionTopicRegistry subscriptionTopicRegistry() {
public SubscriptionTopicRegistry subscriptionTopicRegistry() {
return new SubscriptionTopicRegistry();
}
@Bean
@Lazy
SubscriptionTopicSupport subscriptionTopicSupport(
public SubscriptionTopicSupport subscriptionTopicSupport(
FhirContext theFhirContext, DaoRegistry theDaoRegistry, SearchParamMatcher theSearchParamMatcher) {
return new SubscriptionTopicSupport(theFhirContext, theDaoRegistry, theSearchParamMatcher);
}
@Bean
SubscriptionTopicLoader subscriptionTopicLoader(FhirContext theFhirContext) {
public SubscriptionTopicLoader subscriptionTopicLoader(FhirContext theFhirContext) {
switch (theFhirContext.getVersion().getVersion()) {
case R5:
case R4B:
@ -70,7 +70,7 @@ public class SubscriptionTopicConfig {
}
@Bean
SubscriptionTopicRegisteringSubscriber subscriptionTopicRegisteringSubscriber(FhirContext theFhirContext) {
public SubscriptionTopicRegisteringSubscriber subscriptionTopicRegisteringSubscriber(FhirContext theFhirContext) {
switch (theFhirContext.getVersion().getVersion()) {
case R5:
case R4B:
@ -81,7 +81,7 @@ public class SubscriptionTopicConfig {
}
@Bean
SubscriptionTopicValidatingInterceptor subscriptionTopicValidatingInterceptor(
public SubscriptionTopicValidatingInterceptor subscriptionTopicValidatingInterceptor(
FhirContext theFhirContext, SubscriptionQueryValidator theSubscriptionQueryValidator) {
switch (theFhirContext.getVersion().getVersion()) {
case R5:


@ -1,43 +1,37 @@
package ca.uhn.fhir.jpa.subscription.match.registry;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.jpa.subscription.model.CanonicalSubscription;
import ca.uhn.fhir.jpa.subscription.model.CanonicalSubscriptionChannelType;
import ca.uhn.fhir.jpa.subscription.model.CanonicalTopicSubscriptionFilter;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.model.api.ExtensionDt;
import ca.uhn.fhir.model.api.IFhirVersion;
import ca.uhn.fhir.model.dstu2.FhirDstu2;
import ca.uhn.fhir.model.primitive.BooleanDt;
import ca.uhn.fhir.subscription.SubscriptionConstants;
import ca.uhn.fhir.subscription.SubscriptionTestDataHelper;
import ca.uhn.fhir.util.HapiExtensions;
import jakarta.annotation.Nonnull;
import org.hl7.fhir.dstu3.hapi.ctx.FhirDstu3;
import org.hl7.fhir.instance.model.api.IBaseResource;
import org.hl7.fhir.r4.hapi.ctx.FhirR4;
import org.hl7.fhir.r4.model.BooleanType;
import org.hl7.fhir.r4.model.Extension;
import org.hl7.fhir.r4.model.Subscription;
import org.hl7.fhir.r4b.hapi.ctx.FhirR4B;
import org.hl7.fhir.r5.hapi.ctx.FhirR5;
import org.hl7.fhir.r5.model.Coding;
import org.hl7.fhir.r5.model.Enumerations;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import org.junit.jupiter.params.provider.ValueSource;
import java.util.stream.Stream;
import static ca.uhn.fhir.rest.api.Constants.CT_FHIR_JSON_NEW;
import static ca.uhn.fhir.rest.api.Constants.RESOURCE_PARTITION_ID;
import static ca.uhn.fhir.util.HapiExtensions.EX_SEND_DELETE_MESSAGES;
import static org.assertj.core.api.AssertionsForInterfaceTypes.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.params.provider.Arguments.arguments;
class SubscriptionCanonicalizerTest {
@ -172,44 +166,158 @@ class SubscriptionCanonicalizerTest {
verifyChannelParameters(canonical, thePayloadContent);
}
private static Stream<Arguments> crossPartitionParams() {
return Stream.of(
arguments(true, FhirContext.forDstu2Cached()),
arguments(false, FhirContext.forDstu2Cached()),
arguments(true, FhirContext.forDstu3Cached()),
arguments(false, FhirContext.forDstu3Cached()),
arguments(true, FhirContext.forR4Cached()),
arguments(false, FhirContext.forR4Cached()),
arguments(true, FhirContext.forR4BCached()),
arguments(false, FhirContext.forR4BCached()),
arguments(true, FhirContext.forR5Cached()),
arguments(false, FhirContext.forR5Cached())
);
private static Stream<RequestPartitionId> crossPartitionParams() {
return Stream.of(null, RequestPartitionId.fromPartitionId(1), RequestPartitionId.defaultPartition());
}
@ParameterizedTest
@MethodSource("crossPartitionParams")
void testCrossPartition(boolean theCrossPartitionSubscriptionEnabled, FhirContext theFhirContext) {
final IFhirVersion version = theFhirContext.getVersion();
IBaseResource subscription = null;
if (version instanceof FhirDstu2){
subscription = new ca.uhn.fhir.model.dstu2.resource.Subscription();
} else if (version instanceof FhirDstu3){
subscription = new org.hl7.fhir.dstu3.model.Subscription();
} else if (version instanceof FhirR4){
subscription = new Subscription();
} else if (version instanceof FhirR4B){
subscription = new org.hl7.fhir.r4b.model.Subscription();
} else if (version instanceof FhirR5){
subscription = new org.hl7.fhir.r5.model.Subscription();
void testSubscriptionCrossPartitionEnableProperty_forDstu2WithExtensionAndPartitions(RequestPartitionId theRequestPartitionId) {
final SubscriptionSettings subscriptionSettings = new SubscriptionSettings();
subscriptionSettings.setCrossPartitionSubscriptionEnabled(true);
final SubscriptionCanonicalizer subscriptionCanonicalizer = new SubscriptionCanonicalizer(FhirContext.forDstu2(), subscriptionSettings);
ca.uhn.fhir.model.dstu2.resource.Subscription subscriptionWithoutExtension = new ca.uhn.fhir.model.dstu2.resource.Subscription();
subscriptionWithoutExtension.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
final CanonicalSubscription canonicalSubscriptionWithoutExtension = subscriptionCanonicalizer.canonicalize(subscriptionWithoutExtension);
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
}
@ParameterizedTest
@MethodSource("crossPartitionParams")
void testSubscriptionCrossPartitionEnableProperty_forDstu3WithExtensionAndPartitions(RequestPartitionId theRequestPartitionId) {
final SubscriptionSettings subscriptionSettings = new SubscriptionSettings();
subscriptionSettings.setCrossPartitionSubscriptionEnabled(true);
final SubscriptionCanonicalizer subscriptionCanonicalizer = new SubscriptionCanonicalizer(FhirContext.forDstu3(), subscriptionSettings);
org.hl7.fhir.dstu3.model.Subscription subscriptionWithoutExtension = new org.hl7.fhir.dstu3.model.Subscription();
subscriptionWithoutExtension.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
org.hl7.fhir.dstu3.model.Subscription subscriptionWithExtensionCrossPartitionTrue = new org.hl7.fhir.dstu3.model.Subscription();
subscriptionWithExtensionCrossPartitionTrue.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionTrue.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.dstu3.model.BooleanType().setValue(true));
org.hl7.fhir.dstu3.model.Subscription subscriptionWithExtensionCrossPartitionFalse = new org.hl7.fhir.dstu3.model.Subscription();
subscriptionWithExtensionCrossPartitionFalse.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionFalse.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.dstu3.model.BooleanType().setValue(false));
final CanonicalSubscription canonicalSubscriptionWithoutExtension = subscriptionCanonicalizer.canonicalize(subscriptionWithoutExtension);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionTrue = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionTrue);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionFalse = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionFalse);
if(RequestPartitionId.isDefaultPartition(theRequestPartitionId)){
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isTrue();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
} else {
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
}
}
@ParameterizedTest
@MethodSource("crossPartitionParams")
void testSubscriptionCrossPartitionEnableProperty_forR4WithExtensionAndPartitions(RequestPartitionId theRequestPartitionId) {
final SubscriptionSettings subscriptionSettings = new SubscriptionSettings();
subscriptionSettings.setCrossPartitionSubscriptionEnabled(true);
final SubscriptionCanonicalizer subscriptionCanonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4Cached(), subscriptionSettings);
Subscription subscriptionWithoutExtension = new Subscription();
subscriptionWithoutExtension.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
Subscription subscriptionWithExtensionCrossPartitionTrue = new Subscription();
subscriptionWithExtensionCrossPartitionTrue.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionTrue.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r4.model.BooleanType().setValue(true));
Subscription subscriptionWithExtensionCrossPartitionFalse = new Subscription();
subscriptionWithExtensionCrossPartitionFalse.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionFalse.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r4.model.BooleanType().setValue(false));
final CanonicalSubscription canonicalSubscriptionWithoutExtension = subscriptionCanonicalizer.canonicalize(subscriptionWithoutExtension);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionTrue = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionTrue);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionFalse = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionFalse);
if(RequestPartitionId.isDefaultPartition(theRequestPartitionId)){
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isTrue();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
} else {
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
}
}
@ParameterizedTest
@MethodSource("crossPartitionParams")
void testSubscriptionCrossPartitionEnableProperty_forR4BWithExtensionAndPartitions(RequestPartitionId theRequestPartitionId) {
final SubscriptionSettings subscriptionSettings = new SubscriptionSettings();
subscriptionSettings.setCrossPartitionSubscriptionEnabled(true);
final SubscriptionCanonicalizer subscriptionCanonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4BCached(), subscriptionSettings);
org.hl7.fhir.r4b.model.Subscription subscriptionWithoutExtension = new org.hl7.fhir.r4b.model.Subscription();
subscriptionWithoutExtension.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
org.hl7.fhir.r4b.model.Subscription subscriptionWithExtensionCrossPartitionTrue = new org.hl7.fhir.r4b.model.Subscription();
subscriptionWithExtensionCrossPartitionTrue.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionTrue.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r4b.model.BooleanType().setValue(true));
org.hl7.fhir.r4b.model.Subscription subscriptionWithExtensionCrossPartitionFalse = new org.hl7.fhir.r4b.model.Subscription();
subscriptionWithExtensionCrossPartitionFalse.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionFalse.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r4b.model.BooleanType().setValue(false));
final CanonicalSubscription canonicalSubscriptionWithoutExtension = subscriptionCanonicalizer.canonicalize(subscriptionWithoutExtension);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionTrue = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionTrue);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionFalse = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionFalse);
if(RequestPartitionId.isDefaultPartition(theRequestPartitionId)){
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isTrue();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
} else {
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
}
}
@ParameterizedTest
@MethodSource("crossPartitionParams")
void testSubscriptionCrossPartitionEnableProperty_forR5WithExtensionAndPartitions(RequestPartitionId theRequestPartitionId) {
final SubscriptionSettings subscriptionSettings = new SubscriptionSettings();
subscriptionSettings.setCrossPartitionSubscriptionEnabled(true);
final SubscriptionCanonicalizer subscriptionCanonicalizer = new SubscriptionCanonicalizer(FhirContext.forR5Cached(), subscriptionSettings);
org.hl7.fhir.r5.model.Subscription subscriptionWithoutExtension = new org.hl7.fhir.r5.model.Subscription();
subscriptionWithoutExtension.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
org.hl7.fhir.r5.model.Subscription subscriptionWithExtensionCrossPartitionTrue = new org.hl7.fhir.r5.model.Subscription();
subscriptionWithExtensionCrossPartitionTrue.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionTrue.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r5.model.BooleanType().setValue(true));
org.hl7.fhir.r5.model.Subscription subscriptionWithExtensionCrossPartitionFalse = new org.hl7.fhir.r5.model.Subscription();
subscriptionWithExtensionCrossPartitionFalse.setUserData(RESOURCE_PARTITION_ID, theRequestPartitionId);
subscriptionWithExtensionCrossPartitionFalse.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r5.model.BooleanType().setValue(false));
final CanonicalSubscription canonicalSubscriptionWithoutExtension = subscriptionCanonicalizer.canonicalize(subscriptionWithoutExtension);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionTrue = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionTrue);
final CanonicalSubscription canonicalSubscriptionWithExtensionCrossPartitionFalse = subscriptionCanonicalizer.canonicalize(subscriptionWithExtensionCrossPartitionFalse);
if(RequestPartitionId.isDefaultPartition(theRequestPartitionId)){
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isTrue();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
} else {
assertThat(canonicalSubscriptionWithoutExtension.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionTrue.isCrossPartitionEnabled()).isFalse();
assertThat(canonicalSubscriptionWithExtensionCrossPartitionFalse.isCrossPartitionEnabled()).isFalse();
}
}
private org.hl7.fhir.r4b.model.Subscription buildR4BSubscription(String thePayloadContent) {


@ -1,22 +1,23 @@
package ca.uhn.fhir.jpa.subscription.module;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.jpa.model.entity.StorageSettings;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.jpa.subscription.match.registry.SubscriptionCanonicalizer;
import ca.uhn.fhir.jpa.subscription.model.CanonicalSubscription;
import ca.uhn.fhir.jpa.subscription.model.ResourceDeliveryJsonMessage;
import ca.uhn.fhir.jpa.subscription.model.ResourceDeliveryMessage;
import ca.uhn.fhir.util.HapiExtensions;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import jakarta.annotation.Nonnull;
import org.assertj.core.util.Lists;
import org.hl7.fhir.r4.model.BooleanType;
import org.hl7.fhir.r4.model.StringType;
import org.hl7.fhir.r4.model.Subscription;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;
import org.junit.jupiter.params.provider.ValueSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -24,11 +25,13 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.stream.Stream;
import static ca.uhn.fhir.rest.api.Constants.RESOURCE_PARTITION_ID;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
public class CanonicalSubscriptionTest {
@ -71,7 +74,7 @@ public class CanonicalSubscriptionTest {
}
@Test
public void testCanonicalSubscriptionRetainsMetaTags() throws IOException {
public void testCanonicalSubscriptionRetainsMetaTags() {
SubscriptionCanonicalizer canonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4(), new SubscriptionSettings());
CanonicalSubscription sub1 = canonicalizer.canonicalize(makeMdmSubscription());
assertThat(sub1.getTags()).containsKey(TAG_SYSTEM);
@ -86,40 +89,48 @@ public class CanonicalSubscriptionTest {
assertTrue(sub1.equals(sub2));
}
@Test
public void testSerializeMultiPartitionSubscription(){
SubscriptionCanonicalizer canonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4(), new SubscriptionSettings());
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSubscriptionCrossPartitionEnabled_basedOnGlobalFlagAndExtensionFalse(boolean theIsCrossPartitionEnabled){
final SubscriptionSettings subscriptionSettings = buildSubscriptionSettings(theIsCrossPartitionEnabled);
SubscriptionCanonicalizer canonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4(), subscriptionSettings);
Subscription subscriptionWithExtensionSetToBooleanFalse = makeEmailSubscription();
subscriptionWithExtensionSetToBooleanFalse.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new BooleanType().setValue(false));
CanonicalSubscription canonicalSubscriptionExtensionSetToBooleanFalse = canonicalizer.canonicalize(subscriptionWithExtensionSetToBooleanFalse);
assertFalse(canonicalSubscriptionExtensionSetToBooleanFalse.isCrossPartitionEnabled());
}
static Stream<Arguments> requestPartitionIds() {
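// maps each candidate partition to the expected isCrossPartitionEnabled result: only the default partition qualifies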
return Stream.of(
Arguments.of(null, false),
Arguments.of(RequestPartitionId.allPartitions(), false),
Arguments.of(RequestPartitionId.fromPartitionIds(1), false),
Arguments.of(RequestPartitionId.defaultPartition(), true)
);
}
@ParameterizedTest
@MethodSource("requestPartitionIds")
public void testSubscriptionCrossPartitionEnabled_basedOnGlobalFlagAndExtensionAndPartitionId(RequestPartitionId theRequestPartitionId, boolean theExpectedIsCrossPartitionEnabled){
final boolean globalIsCrossPartitionEnabled = true; // not strictly required, but makes the test's intent explicit
SubscriptionCanonicalizer canonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4(), buildSubscriptionSettings(globalIsCrossPartitionEnabled));
Subscription subscription = makeEmailSubscription();
subscription.setUserData(RESOURCE_PARTITION_ID,theRequestPartitionId);
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new BooleanType().setValue(true));
CanonicalSubscription canonicalSubscription = canonicalizer.canonicalize(subscription);
assertTrue(canonicalSubscription.getCrossPartitionEnabled());
}
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSerializeIncorrectMultiPartitionSubscription(boolean theIsCrossPartitionEnabled){
final SubscriptionSettings subscriptionSettings = buildSubscriptionSettings(theIsCrossPartitionEnabled);
SubscriptionCanonicalizer canonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4(), subscriptionSettings);
Subscription subscription = makeEmailSubscription();
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new StringType().setValue("false"));
CanonicalSubscription canonicalSubscription = canonicalizer.canonicalize(subscription);
assertEquals(theIsCrossPartitionEnabled, canonicalSubscription.getCrossPartitionEnabled());
}
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSerializeNonMultiPartitionSubscription(boolean theIsCrossPartitionEnabled){
final SubscriptionSettings subscriptionSettings = buildSubscriptionSettings(theIsCrossPartitionEnabled);
SubscriptionCanonicalizer canonicalizer = new SubscriptionCanonicalizer(FhirContext.forR4(), subscriptionSettings);
Subscription subscription = makeEmailSubscription();
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new BooleanType().setValue(false));
CanonicalSubscription canonicalSubscription = canonicalizer.canonicalize(subscription);
assertEquals(theIsCrossPartitionEnabled, canonicalSubscription.getCrossPartitionEnabled());
// for a Subscription to be a cross-partition subscription, 3 things are required:
// - The Subs need to be created on the default partition
// - The Subs need to have extension EXTENSION_SUBSCRIPTION_CROSS_PARTITION set to true
// - Global flag CrossPartitionSubscriptionEnabled needs to be true
assertThat(canonicalSubscription.isCrossPartitionEnabled()).isEqualTo(theExpectedIsCrossPartitionEnabled);
}
@Test
@ -130,7 +141,7 @@ public class CanonicalSubscriptionTest {
CanonicalSubscription payload = resourceDeliveryMessage.getPayload().getSubscription();
assertFalse(payload.getCrossPartitionEnabled());
assertFalse(payload.isCrossPartitionEnabled());
}
private Subscription makeEmailSubscription() {


@ -4,8 +4,8 @@ import ca.uhn.fhir.interceptor.api.HookParams;
import ca.uhn.fhir.interceptor.api.IInterceptorBroadcaster;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.jpa.api.dao.IFhirResourceDao;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.jpa.subscription.match.matcher.subscriber.SubscriptionCriteriaParser;
import ca.uhn.fhir.jpa.subscription.match.matcher.subscriber.SubscriptionMatchDeliverer;
import ca.uhn.fhir.jpa.subscription.match.matcher.subscriber.SubscriptionMatchingSubscriber;
@ -14,7 +14,6 @@ import ca.uhn.fhir.jpa.subscription.match.registry.SubscriptionRegistry;
import ca.uhn.fhir.jpa.subscription.model.CanonicalSubscription;
import ca.uhn.fhir.jpa.subscription.model.ResourceModifiedMessage;
import ca.uhn.fhir.jpa.subscription.module.standalone.BaseBlockingQueueSubscribableChannelDstu3Test;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.model.primitive.IdDt;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.server.messaging.BaseResourceModifiedMessage;
@ -192,35 +191,16 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri
assertEquals(Constants.CT_FHIR_XML_NEW, ourContentTypes.get(0));
}
@Test
public void testSubscriptionAndResourceOnTheSamePartitionMatch() throws InterruptedException {
@ParameterizedTest
@ValueSource(ints = {0,1})
public void testSubscriptionAndResourceOnTheSamePartitionMatch(int thePartitionId) throws InterruptedException {
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(0);
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);
ourObservationListener.setExpectedCount(1);
mySubscriptionResourceMatched.setExpectedCount(1);
sendObservation(code, "SNOMED-CT", requestPartitionId);
mySubscriptionResourceMatched.awaitExpected();
ourObservationListener.awaitExpected();
}
@Test
public void testSubscriptionAndResourceOnTheSamePartitionMatchPart2() throws InterruptedException {
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(1);
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(thePartitionId);
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);
@ -234,7 +214,7 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSubscriptionAndResourceOnDiffPartitionNotMatch(boolean theIsCrossPartitionEnabled) throws InterruptedException {
public void testSubscriptionCrossPartitionMatching_whenSubscriptionAndResourceOnDiffPartition_withGlobalFlagCrossPartitionSubscriptionEnable(boolean theIsCrossPartitionEnabled) throws InterruptedException {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
@ -242,8 +222,9 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(1);
RequestPartitionId requestPartitionId = RequestPartitionId.defaultPartition();
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.dstu3.model.BooleanType().setValue(true));
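// the extension alone is not sufficient; whether the observation on the other partition matches depends on the global flag set above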
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);
@ -255,80 +236,8 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri
}
}
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSubscriptionAndResourceOnDiffPartitionNotMatchPart2(boolean theIsCrossPartitionEnabled) throws InterruptedException {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(0);
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);
final ThrowsInterrupted throwsInterrupted = () -> sendObservation(code, "SNOMED-CT", RequestPartitionId.fromPartitionId(1));
if (theIsCrossPartitionEnabled) {
runWithinLatchLogicExpectSuccess(throwsInterrupted);
} else {
runWithLatchLogicExpectFailure(throwsInterrupted);
}
}
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSubscriptionOnDefaultPartitionAndResourceOnDiffPartitionNotMatch(boolean theIsCrossPartitionEnabled) throws InterruptedException {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.defaultPartition();
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);
final ThrowsInterrupted throwsInterrupted = () -> sendObservation(code, "SNOMED-CT", RequestPartitionId.fromPartitionId(1));
if (theIsCrossPartitionEnabled) {
runWithinLatchLogicExpectSuccess(throwsInterrupted);
} else {
runWithLatchLogicExpectFailure(throwsInterrupted);
}
}
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSubscriptionOnAPartitionAndResourceOnDefaultPartitionNotMatch(boolean theIsCrossPartitionEnabled) throws InterruptedException {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(1);
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);
final ThrowsInterrupted throwsInterrupted = () -> sendObservation(code, "SNOMED-CT", RequestPartitionId.defaultPartition());
if (theIsCrossPartitionEnabled) {
runWithinLatchLogicExpectSuccess(throwsInterrupted);
} else {
runWithLatchLogicExpectFailure(throwsInterrupted);
}
}
@Test
public void testSubscriptionOnOnePartitionMatchResourceOnMultiplePartitions() throws InterruptedException {
public void testSubscriptionOnOnePartition_whenResourceCreatedOnMultiplePartitions_matchesOnlyResourceCreatedOnSamePartition() throws InterruptedException {
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
@ -350,7 +259,7 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testSubscriptionOnOnePartitionDoNotMatchResourceOnMultiplePartitions(boolean theIsCrossPartitionEnabled) throws InterruptedException {
public void testSubscriptionCrossPartitionMatching_whenSubscriptionAndResourceOnDiffPartition_withGlobalFlagCrossPartitionSubscriptionEnable2(boolean theIsCrossPartitionEnabled) throws InterruptedException {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
myPartitionSettings.setPartitioningEnabled(true);
String payload = "application/fhir+json";
@ -358,8 +267,9 @@ public class SubscriptionMatchingSubscriberTest extends BaseBlockingQueueSubscri
String code = "1000000050";
String criteria = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
RequestPartitionId requestPartitionId = RequestPartitionId.fromPartitionId(1);
RequestPartitionId requestPartitionId = RequestPartitionId.defaultPartition();
Subscription subscription = makeActiveSubscription(criteria, payload, ourListenerServerBase);
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.dstu3.model.BooleanType().setValue(true));
mockSubscriptionRead(requestPartitionId, subscription);
sendSubscription(subscription, requestPartitionId, true);


@ -1,6 +1,5 @@
package ca.uhn.fhir.jpa.dao.dstu3;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import ca.uhn.fhir.i18n.Msg;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.jpa.api.model.DaoMethodOutcome;
@ -54,8 +53,10 @@ import java.util.List;
import java.util.stream.Collectors;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.fail;
public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu3Test {
private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(FhirResourceDaoDstu3SearchCustomSearchParamTest.class);
@ -192,7 +193,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
}
@Test
public void testCustomReferenceParameter() throws Exception {
public void testCustomReferenceParameter() {
SearchParameter sp = new SearchParameter();
sp.addBase("Patient");
sp.setCode("myDoctor");
@ -238,7 +239,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
Patient p1 = new Patient();
p1.setActive(true);
p1.addExtension().setUrl("http://acme.org/eyecolour").addExtension().setUrl("http://foo").setValue(new StringType("VAL"));
IIdType p1id = myPatientDao.create(p1).getId().toUnqualifiedVersionless();
myPatientDao.create(p1).getId().toUnqualifiedVersionless();
}
@ -253,7 +254,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
attendingSp.setXpathUsage(org.hl7.fhir.dstu3.model.SearchParameter.XPathUsageType.NORMAL);
attendingSp.setStatus(org.hl7.fhir.dstu3.model.Enumerations.PublicationStatus.ACTIVE);
attendingSp.getTarget().add(new CodeType("Practitioner"));
IIdType spId = mySearchParameterDao.create(attendingSp, mySrd).getId().toUnqualifiedVersionless();
mySearchParameterDao.create(attendingSp, mySrd).getId().toUnqualifiedVersionless();
mySearchParamRegistry.forceRefresh();
@ -417,7 +418,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
Patient p2 = new Patient();
p2.addName().setFamily("P2");
p2.addExtension().setUrl("http://acme.org/sibling").setValue(new Reference(p1id));
IIdType p2id = myPatientDao.create(p2).getId().toUnqualifiedVersionless();
myPatientDao.create(p2).getId().toUnqualifiedVersionless();
SearchParameterMap map;
IBundleProvider results;
@ -571,7 +572,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
Patient p2 = new Patient();
p2.setActive(true);
p2.addExtension().setUrl("http://acme.org/eyecolour").setValue(new CodeType("green"));
IIdType p2id = myPatientDao.create(p2).getId().toUnqualifiedVersionless();
myPatientDao.create(p2).getId().toUnqualifiedVersionless();
// Try with custom gender SP
SearchParameterMap map = new SearchParameterMap();
@ -889,7 +890,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
.setUrl("http://acme.org/bar")
.setValue(new Reference(aptId.getValue()));
IIdType p2id = myPatientDao.create(patient).getId().toUnqualifiedVersionless();
myPatientDao.create(patient).getId().toUnqualifiedVersionless();
SearchParameterMap map;
IBundleProvider results;
@ -1035,7 +1036,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
Patient pat2 = new Patient();
pat.setGender(AdministrativeGender.FEMALE);
IIdType patId2 = myPatientDao.create(pat2, mySrd).getId().toUnqualifiedVersionless();
myPatientDao.create(pat2, mySrd).getId().toUnqualifiedVersionless();
SearchParameterMap map;
IBundleProvider results;
@ -1069,7 +1070,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
myPatientDao.search(map).size();
fail("");
} catch (InvalidRequestException e) {
assertEquals(Msg.code(1223) + "Unknown search parameter \"foo\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
assertEquals(Msg.code(1223) + "Unknown search parameter \"foo\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, _profile, _security, _tag, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
}
}
@ -1094,7 +1095,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
Patient pat2 = new Patient();
pat.setGender(AdministrativeGender.FEMALE);
IIdType patId2 = myPatientDao.create(pat2, mySrd).getId().toUnqualifiedVersionless();
myPatientDao.create(pat2, mySrd).getId().toUnqualifiedVersionless();
SearchParameterMap map;
IBundleProvider results;
@ -1107,7 +1108,7 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
myPatientDao.search(map).size();
fail("");
} catch (InvalidRequestException e) {
assertEquals(Msg.code(1223) + "Unknown search parameter \"foo\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
assertEquals(Msg.code(1223) + "Unknown search parameter \"foo\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, _profile, _security, _tag, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
}
// Try with normal gender SP
@ -1148,10 +1149,10 @@ public class FhirResourceDaoDstu3SearchCustomSearchParamTest extends BaseJpaDstu
.findAll()
.stream()
.filter(t -> t.getParamName().equals("medicationadministration-ingredient-medication"))
.collect(Collectors.toList());
ourLog.info("Tokens:\n * {}", tokens.stream().map(t -> t.toString()).collect(Collectors.joining("\n * ")));
.toList();
ourLog.info("Tokens:\n * {}", tokens.stream().map(ResourceIndexedSearchParamToken::toString).collect(Collectors.joining("\n * ")));
assertEquals(1, tokens.size(), tokens.toString());
assertEquals(false, tokens.get(0).isMissing());
assertFalse(tokens.get(0).isMissing());
});


@ -1,5 +1,8 @@
package ca.uhn.fhir.jpa.dao.dstu3;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_PROFILE;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_SECURITY;
import static org.hl7.fhir.instance.model.api.IAnyResource.SP_RES_TAG;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;
@ -33,6 +36,7 @@ import ca.uhn.fhir.rest.api.MethodOutcome;
import ca.uhn.fhir.rest.api.SortOrderEnum;
import ca.uhn.fhir.rest.api.SortSpec;
import ca.uhn.fhir.rest.api.server.IBundleProvider;
import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
import ca.uhn.fhir.rest.api.server.storage.IResourcePersistentId;
import ca.uhn.fhir.rest.param.DateParam;
import ca.uhn.fhir.rest.param.DateRangeParam;
@ -42,6 +46,7 @@ import ca.uhn.fhir.rest.param.ReferenceParam;
import ca.uhn.fhir.rest.param.StringParam;
import ca.uhn.fhir.rest.param.TokenOrListParam;
import ca.uhn.fhir.rest.param.TokenParam;
import ca.uhn.fhir.rest.param.UriParam;
import ca.uhn.fhir.rest.server.exceptions.InvalidRequestException;
import ca.uhn.fhir.rest.server.exceptions.ResourceGoneException;
import ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException;
@ -111,6 +116,7 @@ import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.List;
import java.util.Objects;
import static org.apache.commons.lang3.StringUtils.defaultString;
import static org.assertj.core.api.Assertions.assertThat;
@ -183,7 +189,7 @@ public class FhirResourceDaoDstu3Test extends BaseJpaDstu3Test {
SearchParameterMap map = new SearchParameterMap();
map.setLoadSynchronous(true);
map.add("_tag", new TokenParam(methodName, methodName));
assertEquals(1, myOrganizationDao.search(map).size().intValue());
assertEquals(1, Objects.requireNonNull(myOrganizationDao.search(map).size()).intValue());
myOrganizationDao.delete(orgId, mySrd);
@ -2082,7 +2088,102 @@ public class FhirResourceDaoDstu3Test extends BaseJpaDstu3Test {
found = toList(myPatientDao.search(new SearchParameterMap(Patient.SP_BIRTHDATE + "AAAA", new DateParam(ParamPrefixEnum.GREATERTHAN, "2000-01-01")).setLoadSynchronous(true)));
assertThat(found).isEmpty();
} catch (InvalidRequestException e) {
assertEquals(Msg.code(1223) + "Unknown search parameter \"birthdateAAAA\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
assertEquals(Msg.code(1223) + "Unknown search parameter \"birthdateAAAA\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, _profile, _security, _tag, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
}
}
@Test
public void testPersistSearchParamTag() {
Coding TEST_PARAM_VALUE_CODING_1 = new Coding("test-system", "test-code-1", null);
Coding TEST_PARAM_VALUE_CODING_2 = new Coding("test-system", "test-code-2", null);
List<Patient> found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_TAG, new TokenParam(TEST_PARAM_VALUE_CODING_1)).setLoadSynchronous(true), new SystemRequestDetails()));
int initialSizeCoding1 = found.size();
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_TAG, new TokenParam(TEST_PARAM_VALUE_CODING_2)).setLoadSynchronous(true), new SystemRequestDetails()));
int initialSizeCoding2 = found.size();
Patient patient = new Patient();
patient.addIdentifier().setSystem("urn:system").setValue("001");
patient.getMeta().addTag(TEST_PARAM_VALUE_CODING_1);
myPatientDao.create(patient, mySrd);
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_TAG, new TokenParam(TEST_PARAM_VALUE_CODING_1)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).hasSize(1 + initialSizeCoding1);
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_TAG, new TokenParam(TEST_PARAM_VALUE_CODING_2)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).hasSize(initialSizeCoding2);
// If this throws an exception, that would be an acceptable outcome as well.
try {
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_TAG + "AAAA", new TokenParam(TEST_PARAM_VALUE_CODING_1)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).isEmpty();
} catch (InvalidRequestException e) {
assertEquals(Msg.code(1223) + "Unknown search parameter \"" + SP_RES_TAG+"AAAA" + "\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, _profile, _security, _tag, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
}
}
@Test
public void testPersistSearchParamProfile() {
String profileA = "profile-AAA";
String profileB = "profile-BBB";
List<Patient> found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_PROFILE, new UriParam(profileA)).setLoadSynchronous(true), new SystemRequestDetails()));
int initialSizeProfileA = found.size();
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_PROFILE, new UriParam(profileB)).setLoadSynchronous(true), new SystemRequestDetails()));
int initialSizeProfileB = found.size();
Patient patient = new Patient();
patient.addIdentifier().setSystem("urn:system").setValue("001");
patient.getMeta().addProfile(profileA);
myPatientDao.create(patient, mySrd);
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_PROFILE, new UriParam(profileA)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).hasSize(1 + initialSizeProfileA);
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_PROFILE, new UriParam(profileB)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).hasSize(initialSizeProfileB);
// If this throws an exception, that would be an acceptable outcome as well.
try {
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_PROFILE + "AAAA", new UriParam(profileA)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).isEmpty();
} catch (InvalidRequestException e) {
assertEquals(Msg.code(1223) + "Unknown search parameter \"" + SP_RES_PROFILE+"AAAA" + "\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, _profile, _security, _tag, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
}
}
@Test
public void testPersistSearchParamSecurity() {
Coding TEST_PARAM_VALUE_CODING_1 = new Coding("test-system", "test-code-1", null);
Coding TEST_PARAM_VALUE_CODING_2 = new Coding("test-system", "test-code-2", null);
List<Patient> found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_SECURITY, new TokenParam(TEST_PARAM_VALUE_CODING_1)).setLoadSynchronous(true), new SystemRequestDetails()));
int initialSizeSecurityA = found.size();
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_SECURITY, new TokenParam(TEST_PARAM_VALUE_CODING_2)).setLoadSynchronous(true), new SystemRequestDetails()));
int initialSizeSecurityB = found.size();
Patient patient = new Patient();
patient.addIdentifier().setSystem("urn:system").setValue("001");
patient.getMeta().addSecurity(TEST_PARAM_VALUE_CODING_1);
myPatientDao.create(patient, mySrd);
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_SECURITY, new TokenParam(TEST_PARAM_VALUE_CODING_1)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).hasSize(1 + initialSizeSecurityA);
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_SECURITY, new TokenParam(TEST_PARAM_VALUE_CODING_2)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).hasSize(initialSizeSecurityB);
// If this throws an exception, that would be an acceptable outcome as well.
try {
found = toList(myPatientDao.search(new SearchParameterMap(SP_RES_SECURITY + "AAAA", new TokenParam(TEST_PARAM_VALUE_CODING_1)).setLoadSynchronous(true), new SystemRequestDetails()));
assertThat(found).isEmpty();
} catch (InvalidRequestException e) {
assertEquals(Msg.code(1223) + "Unknown search parameter \"" + SP_RES_SECURITY+"AAAA" + "\" for resource type \"Patient\". Valid search parameters for this search are: [_id, _lastUpdated, _profile, _security, _tag, active, address, address-city, address-country, address-postalcode, address-state, address-use, animal-breed, animal-species, birthdate, death-date, deceased, email, family, gender, general-practitioner, given, identifier, language, link, name, organization, phone, phonetic, telecom]", e.getMessage());
}
}


@ -1,7 +1,5 @@
package ca.uhn.fhir.jpa.searchparam;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.RuntimeResourceDefinition;
import ca.uhn.fhir.i18n.Msg;
@ -22,10 +20,10 @@ import org.springframework.test.context.junit.jupiter.SpringExtension;
import org.springframework.transaction.PlatformTransactionManager;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.fail;
import static org.assertj.core.api.Assertions.within;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.fail;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
@ -104,19 +102,19 @@ public class MatchUrlServiceTest extends BaseJpaTest {
@Test
void testTotal_fromStandardLowerCase() {
// given
// when
var map = myMatchUrlService.translateMatchUrl("Patient?family=smith&_total=none", ourCtx.getResourceDefinition("Patient"));
// then
assertEquals(SearchTotalModeEnum.NONE, map.getSearchTotalMode());
}
@Test
void testTotal_fromUpperCase() {
// given
// when
var map = myMatchUrlService.translateMatchUrl("Patient?family=smith&_total=none", ourCtx.getResourceDefinition("Patient"));
var map = myMatchUrlService.translateMatchUrl("Patient?family=smith&_total=NONE", ourCtx.getResourceDefinition("Patient"));
// then
assertEquals(SearchTotalModeEnum.NONE, map.getSearchTotalMode());


@ -0,0 +1,225 @@
package ca.uhn.fhir.jpa.batch2;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.pid.IResourcePidStream;
import ca.uhn.fhir.jpa.api.pid.TypedResourcePid;
import ca.uhn.fhir.jpa.api.svc.IBatch2DaoSvc;
import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;
import ca.uhn.fhir.jpa.searchparam.MatchUrlService;
import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
import ca.uhn.fhir.model.primitive.IdDt;
import ca.uhn.fhir.parser.DataFormatException;
import ca.uhn.fhir.rest.server.exceptions.InternalErrorException;
import jakarta.annotation.Nonnull;
import org.hl7.fhir.instance.model.api.IIdType;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.NullSource;
import org.junit.jupiter.params.provider.ValueSource;
import org.springframework.beans.factory.annotation.Autowired;
import java.time.LocalDate;
import java.time.Month;
import java.time.ZoneId;
import java.util.Date;
import java.util.List;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
class Batch2DaoSvcImplTest extends BaseJpaR4Test {
private static final Date PREVIOUS_MILLENNIUM = toDate(LocalDate.of(1999, Month.DECEMBER, 31));
private static final Date TOMORROW = toDate(LocalDate.now().plusDays(1));
@Autowired
private MatchUrlService myMatchUrlService;
@Autowired
private IHapiTransactionService myIHapiTransactionService;
private IBatch2DaoSvc mySvc;
@BeforeEach
void beforeEach() {
mySvc = new Batch2DaoSvcImpl(myResourceTableDao, myMatchUrlService, myDaoRegistry, myFhirContext, myIHapiTransactionService);
}
@Test
void fetchResourceIds_ByUrlInvalidUrl() {
IResourcePidStream stream = mySvc.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, null, "Patient");
final InternalErrorException exception = assertThrows(InternalErrorException.class, () -> stream.visitStream(Stream::toList));
assertEquals("HAPI-2422: this should never happen: URL is missing a '?'", exception.getMessage());
}
@Test
void fetchResourceIds_ByUrlSingleQuestionMark() {
IResourcePidStream stream = mySvc.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, null, "?");
final IllegalArgumentException exception = assertThrows(IllegalArgumentException.class, () -> stream.visitStream(Stream::toList));
assertEquals("theResourceName must not be blank", exception.getMessage());
}
@Test
void fetchResourceIds_ByUrlNonsensicalResource() {
IResourcePidStream stream = mySvc.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, null, "Banana?_expunge=true");
final DataFormatException exception = assertThrows(DataFormatException.class, () -> stream.visitStream(Stream::toList));
assertEquals("HAPI-1684: Unknown resource name \"Banana\" (this name is not known in FHIR version \"R4\")", exception.getMessage());
}
@ParameterizedTest
@ValueSource(ints = {0, 9, 10, 11, 21, 22, 23, 45})
void fetchResourceIds_ByUrl(int expectedNumResults) {
final List<IIdType> patientIds = IntStream.range(0, expectedNumResults)
.mapToObj(num -> createPatient())
.toList();
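// the counts (0, 9, 10, 11, 21, 22, 23, 45) appear chosen to straddle internal fetch-page boundaries when streaming ids (an assumption; not stated in the test)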
final IResourcePidStream resourcePidList = mySvc.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), "Patient?_expunge=true");
final List<? extends IIdType> actualPatientIds =
resourcePidList.visitStream(s-> s.map(typePid -> new IdDt(typePid.resourceType, (Long) typePid.id.getId()))
.toList());
assertIdsEqual(patientIds, actualPatientIds);
}
@Test
public void fetchResourceIds_ByUrl_WithData() {
// Setup
createPatient(withActiveFalse()).getIdPartAsLong();
sleepUntilTimeChange();
// Start of resources within range
Date start = new Date();
sleepUntilTimeChange();
Long patientId1 = createPatient(withActiveFalse()).getIdPartAsLong();
createObservation(withObservationCode("http://foo", "bar"));
createObservation(withObservationCode("http://foo", "bar"));
sleepUntilTimeChange();
Long patientId2 = createPatient(withActiveFalse()).getIdPartAsLong();
sleepUntilTimeChange();
Date end = new Date();
// End of resources within range
createObservation(withObservationCode("http://foo", "bar"));
createPatient(withActiveFalse()).getIdPartAsLong();
sleepUntilTimeChange();
// Execute
myCaptureQueriesListener.clear();
IResourcePidStream queryStream = mySvc.fetchResourceIdStream(start, end, null, "Patient?active=false");
// Verify
List<TypedResourcePid> typedResourcePids = queryStream.visitStream(Stream::toList);
assertThat(typedResourcePids)
.hasSize(2)
.containsExactly(
new TypedResourcePid("Patient", patientId1),
new TypedResourcePid("Patient", patientId2));
assertThat(myCaptureQueriesListener.logSelectQueries()).hasSize(1);
assertEquals(0, myCaptureQueriesListener.countInsertQueries());
assertEquals(0, myCaptureQueriesListener.countUpdateQueries());
assertEquals(0, myCaptureQueriesListener.countDeleteQueries());
assertEquals(1, myCaptureQueriesListener.getCommitCount());
assertEquals(0, myCaptureQueriesListener.getRollbackCount());
}
@ParameterizedTest
@ValueSource(ints = {0, 9, 10, 11, 21, 22, 23, 45})
void fetchResourceIds_NoUrl(int expectedNumResults) {
final List<IIdType> patientIds = IntStream.range(0, expectedNumResults)
.mapToObj(num -> createPatient())
.toList();
// at the moment there is no Prod use-case for noUrl use-case
// reindex will always have urls as well (see https://github.com/hapifhir/hapi-fhir/issues/6179)
final IResourcePidStream resourcePidList = mySvc.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), null);
final List<? extends IIdType> actualPatientIds =
resourcePidList.visitStream(s-> s.map(typePid -> new IdDt(typePid.resourceType, (Long) typePid.id.getId()))
.toList());
assertIdsEqual(patientIds, actualPatientIds);
}
private static void assertIdsEqual(List<IIdType> expectedResourceIds, List<? extends IIdType> actualResourceIds) {
assertThat(actualResourceIds).hasSize(expectedResourceIds.size());
for (int index = 0; index < expectedResourceIds.size(); index++) {
final IIdType expectedIdType = expectedResourceIds.get(index);
final IIdType actualIdType = actualResourceIds.get(index);
assertEquals(expectedIdType.getResourceType(), actualIdType.getResourceType());
assertEquals(expectedIdType.getIdPartAsLong(), actualIdType.getIdPartAsLong());
}
}
@Nonnull
private static Date toDate(LocalDate theLocalDate) {
return Date.from(theLocalDate.atStartOfDay(ZoneId.systemDefault()).toInstant());
}
@ParameterizedTest
@NullSource
@ValueSource(strings = {"", " "})
public void fetchResourceIds_NoUrl_WithData(String theMissingUrl) {
// Setup
createPatient(withActiveFalse());
sleepUntilTimeChange();
Date start = new Date();
Long id0 = createPatient(withActiveFalse()).getIdPartAsLong();
sleepUntilTimeChange();
Long id1 = createPatient(withActiveFalse()).getIdPartAsLong();
sleepUntilTimeChange();
Long id2 = createObservation(withObservationCode("http://foo", "bar")).getIdPartAsLong();
sleepUntilTimeChange();
Date end = new Date();
sleepUntilTimeChange();
createPatient(withActiveFalse());
// Execute
myCaptureQueriesListener.clear();
IResourcePidStream queryStream = mySvc.fetchResourceIdStream(start, end, null, theMissingUrl);
// Verify
List<TypedResourcePid> typedPids = queryStream.visitStream(Stream::toList);
assertThat(typedPids)
.hasSize(3)
.containsExactly(
new TypedResourcePid("Patient", id0),
new TypedResourcePid("Patient", id1),
new TypedResourcePid("Observation", id2));
assertThat(myCaptureQueriesListener.logSelectQueries()).hasSize(1);
assertEquals(0, myCaptureQueriesListener.countInsertQueries());
assertEquals(0, myCaptureQueriesListener.countUpdateQueries());
assertEquals(0, myCaptureQueriesListener.countDeleteQueries());
assertEquals(1, myCaptureQueriesListener.getCommitCount());
assertEquals(0, myCaptureQueriesListener.getRollbackCount());
}
@ParameterizedTest
@NullSource
@ValueSource(strings = {"", " "})
public void fetchResourceIds_NoUrl_NoData(String theMissingUrl) {
// Execute
myCaptureQueriesListener.clear();
IResourcePidStream queryStream = mySvc.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, null, theMissingUrl);
// Verify
List<TypedResourcePid> typedPids = queryStream.visitStream(Stream::toList);
assertThat(typedPids).isEmpty();
assertThat(myCaptureQueriesListener.logSelectQueries()).hasSize(1);
assertEquals(0, myCaptureQueriesListener.countInsertQueries());
assertEquals(0, myCaptureQueriesListener.countUpdateQueries());
assertEquals(0, myCaptureQueriesListener.countDeleteQueries());
assertEquals(1, myCaptureQueriesListener.getCommitCount());
assertEquals(0, myCaptureQueriesListener.getRollbackCount());
}
}


@ -14,7 +14,6 @@ import ca.uhn.fhir.batch2.model.JobDefinition;
import ca.uhn.fhir.batch2.model.JobInstance;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
import ca.uhn.fhir.batch2.model.JobWorkNotificationJsonMessage;
import ca.uhn.fhir.batch2.model.StatusEnum;
import ca.uhn.fhir.jpa.subscription.channel.api.ChannelConsumerSettings;
import ca.uhn.fhir.jpa.subscription.channel.api.IChannelFactory;
import ca.uhn.fhir.jpa.subscription.channel.impl.LinkedBlockingChannel;
@ -53,7 +52,7 @@ import static org.assertj.core.api.Assertions.assertThat;
* {@link ca.uhn.fhir.batch2.maintenance.JobInstanceProcessor#cleanupInstance()}
* For chunks:
* {@link ca.uhn.fhir.jpa.batch2.JpaJobPersistenceImpl#onWorkChunkCreate}
* {@link JpaJobPersistenceImpl#onWorkChunkCreate}
* {@link JpaJobPersistenceImpl#onWorkChunkDequeue(String)}
* Chunk execution {@link ca.uhn.fhir.batch2.coordinator.StepExecutor#executeStep}
*/


@ -1,6 +1,5 @@
package ca.uhn.fhir.jpa.batch2;
import static org.junit.jupiter.api.Assertions.assertFalse;
import ca.uhn.fhir.batch2.api.IJobMaintenanceService;
import ca.uhn.fhir.batch2.api.IJobPersistence;
import ca.uhn.fhir.batch2.api.JobOperationResultJson;
@ -68,6 +67,7 @@ import java.util.stream.Collectors;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNotEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;


@ -9,6 +9,9 @@ import ca.uhn.fhir.jpa.binary.api.StoredDetails;
import ca.uhn.fhir.rest.server.exceptions.PayloadTooLargeException;
import ca.uhn.fhir.rest.server.exceptions.ResourceNotFoundException;
import ca.uhn.fhir.rest.server.servlet.ServletRequestDetails;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.apache.commons.io.FileUtils;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.IdType;
@ -24,6 +27,7 @@ import java.io.File;
import java.io.IOException;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.fail;
public class FilesystemBinaryStorageSvcImplTest {
@ -46,6 +50,30 @@ public class FilesystemBinaryStorageSvcImplTest {
FileUtils.deleteDirectory(myPath);
}
/**
* See https://github.com/hapifhir/hapi-fhir/pull/6134
*/
@Test
public void testStoreAndRetrievePostMigration() throws IOException {
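// a descriptor serialized before the blobId -> binaryContentId rename should still deserialize into the renamed property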
String blobId = "some-blob-id";
String oldDescriptor = "{\n" +
" \"blobId\" : \"" + blobId + "\",\n" +
" \"bytes\" : 80926,\n" +
" \"contentType\" : \"application/fhir+json\",\n" +
" \"hash\" : \"f57596cefbee4c48c8493a2a57ef5f70c52a2c5afa0e48f57cfbf4f219eb0a38\",\n" +
" \"published\" : \"2024-07-20T00:12:28.187+05:30\"\n" +
"}";
ObjectMapper myJsonSerializer = new ObjectMapper();
myJsonSerializer.setSerializationInclusion(JsonInclude.Include.NON_NULL);
myJsonSerializer.enable(SerializationFeature.INDENT_OUTPUT);
StoredDetails storedDetails = myJsonSerializer.readValue(oldDescriptor, StoredDetails.class);
assertTrue(storedDetails.getBinaryContentId().equals(blobId));
}
@Test
public void testStoreAndRetrieve() throws IOException {
IIdType id = new IdType("Patient/123");


@ -1,11 +1,8 @@
package ca.uhn.fhir.jpa.dao;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import ca.uhn.fhir.batch2.api.IJobCoordinator;
import ca.uhn.fhir.batch2.api.IJobPartitionProvider;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.batch2.jobs.parameters.UrlPartitioner;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexJobParameters;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
import ca.uhn.fhir.context.FhirContext;
@ -68,9 +65,10 @@ import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.fail;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.ArgumentMatchers.isNotNull;
@ -87,6 +85,9 @@ class BaseHapiFhirResourceDaoTest {
@Mock
private IRequestPartitionHelperSvc myRequestPartitionHelperSvc;
@Mock
private IJobPartitionProvider myJobPartitionProvider;
@Mock
private IIdHelperService<JpaPid> myIdHelperService;
@ -102,9 +103,6 @@ class BaseHapiFhirResourceDaoTest {
@Mock
private IJpaStorageResourceParser myJpaStorageResourceParser;
@Mock
private UrlPartitioner myUrlPartitioner;
@Mock
private ApplicationContext myApplicationContext;
@ -267,12 +265,12 @@ class BaseHapiFhirResourceDaoTest {
public void requestReindexForRelatedResources_withValidBase_includesUrlsInJobParameters() {
when(myStorageSettings.isMarkResourcesForReindexingUponSearchParameterChange()).thenReturn(true);
List<String> base = Lists.newArrayList("Patient", "Group");
RequestPartitionId partitionId = RequestPartitionId.fromPartitionId(1);
List<String> base = Lists.newArrayList("Patient", "Group", "Practitioner");
when(myUrlPartitioner.partitionUrl(any(), any())).thenAnswer(i -> {
PartitionedUrl partitionedUrl = new PartitionedUrl();
partitionedUrl.setUrl(i.getArgument(0));
return partitionedUrl;
when(myJobPartitionProvider.getPartitionedUrls(any(), any())).thenAnswer(i -> {
List<String> urls = i.getArgument(1);
return urls.stream().map(url -> new PartitionedUrl().setUrl(url).setRequestPartitionId(partitionId)).collect(Collectors.toList());
});
mySvc.requestReindexForRelatedResources(false, base, new ServletRequestDetails());
@ -285,9 +283,12 @@ class BaseHapiFhirResourceDaoTest {
assertNotNull(actualRequest.getParameters());
ReindexJobParameters actualParameters = actualRequest.getParameters(ReindexJobParameters.class);
assertThat(actualParameters.getPartitionedUrls()).hasSize(2);
assertEquals("Patient?", actualParameters.getPartitionedUrls().get(0).getUrl());
assertEquals("Group?", actualParameters.getPartitionedUrls().get(1).getUrl());
assertThat(actualParameters.getPartitionedUrls()).hasSize(base.size());
for (int i = 0; i < base.size(); i++) {
PartitionedUrl partitionedUrl = actualParameters.getPartitionedUrls().get(i);
assertEquals(base.get(i) + "?", partitionedUrl.getUrl());
assertEquals(partitionId, partitionedUrl.getRequestPartitionId());
}
}
@Test

View File

@ -1,7 +1,5 @@
package ca.uhn.fhir.jpa.dao.index;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.jpa.dao.data.IResourceTableDao;
@ -29,6 +27,8 @@ import java.util.function.Function;
import java.util.stream.Collectors;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyBoolean;
import static org.mockito.Mockito.when;
@ -87,13 +87,17 @@ public class IdHelperServiceTest {
"Patient",
123l,
"RED",
new Date()
new Date(),
null,
null
};
Object[] blueView = new Object[] {
"Patient",
456l,
"BLUE",
new Date()
new Date(),
null,
null
};
// when
@ -155,11 +159,13 @@ public class IdHelperServiceTest {
String resourceType = "Patient";
String resourceForcedId = "AAA";
Object[] forcedIdView = new Object[4];
Object[] forcedIdView = new Object[6];
forcedIdView[0] = resourceType;
forcedIdView[1] = 1L;
forcedIdView[2] = resourceForcedId;
forcedIdView[3] = null;
forcedIdView[4] = null;
forcedIdView[5] = null;
Collection<Object[]> testForcedIdViews = new ArrayList<>();
testForcedIdViews.add(forcedIdView);

View File

@ -55,7 +55,7 @@ public abstract class BasePartitioningR4Test extends BaseJpaR4SystemTest {
@AfterEach
public void after() {
myPartitionInterceptor.assertNoRemainingIds();
assertNoRemainingPartitionIds();
myPartitionSettings.setIncludePartitionInSearchHashes(new PartitionSettings().isIncludePartitionInSearchHashes());
myPartitionSettings.setPartitioningEnabled(new PartitionSettings().isPartitioningEnabled());
@ -70,6 +70,10 @@ public abstract class BasePartitioningR4Test extends BaseJpaR4SystemTest {
myStorageSettings.setMatchUrlCacheEnabled(new JpaStorageSettings().getMatchUrlCache());
}
protected void assertNoRemainingPartitionIds() {
myPartitionInterceptor.assertNoRemainingIds();
}
@Override
@BeforeEach
public void before() throws Exception {
@ -89,7 +93,8 @@ public abstract class BasePartitioningR4Test extends BaseJpaR4SystemTest {
myPartitionId4 = 4;
myPartitionInterceptor = new MyReadWriteInterceptor();
mySrdInterceptorService.registerInterceptor(myPartitionInterceptor);
registerPartitionInterceptor();
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(myPartitionId).setName(PARTITION_1), null);
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(myPartitionId2).setName(PARTITION_2), null);
@ -106,6 +111,11 @@ public abstract class BasePartitioningR4Test extends BaseJpaR4SystemTest {
for (int i = 1; i <= 4; i++) {
myPartitionConfigSvc.getPartitionById(i);
}
}
protected void registerPartitionInterceptor() {
mySrdInterceptorService.registerInterceptor(myPartitionInterceptor);
}
@Override

View File

@ -18,7 +18,7 @@ import ca.uhn.fhir.jpa.api.model.DeleteMethodOutcome;
import ca.uhn.fhir.jpa.api.model.ExpungeOptions;
import ca.uhn.fhir.jpa.api.model.HistoryCountModeEnum;
import ca.uhn.fhir.jpa.dao.data.ISearchParamPresentDao;
import ca.uhn.fhir.jpa.delete.job.ReindexTestHelper;
import ca.uhn.fhir.jpa.reindex.ReindexTestHelper;
import ca.uhn.fhir.jpa.entity.TermValueSet;
import ca.uhn.fhir.jpa.entity.TermValueSetPreExpansionStatusEnum;
import ca.uhn.fhir.jpa.interceptor.ForceOffsetSearchModeInterceptor;
@ -595,11 +595,11 @@ public class FhirResourceDaoR4QueryCountTest extends BaseResourceProviderR4Test
fail(myFhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(e.getOperationOutcome()));
}
myCaptureQueriesListener.logSelectQueriesForCurrentThread();
assertEquals(8, myCaptureQueriesListener.getSelectQueriesForCurrentThread().size());
assertEquals(10, myCaptureQueriesListener.getSelectQueriesForCurrentThread().size());
assertEquals(0, myCaptureQueriesListener.getUpdateQueriesForCurrentThread().size());
assertEquals(0, myCaptureQueriesListener.getInsertQueriesForCurrentThread().size());
assertEquals(0, myCaptureQueriesListener.getDeleteQueriesForCurrentThread().size());
assertEquals(6, myCaptureQueriesListener.getCommitCount());
assertEquals(8, myCaptureQueriesListener.getCommitCount());
// Validate again (should rely only on caches)
myCaptureQueriesListener.clear();
@ -2315,7 +2315,7 @@ public class FhirResourceDaoR4QueryCountTest extends BaseResourceProviderR4Test
assertEquals(4, myCaptureQueriesListener.countInsertQueries());
myCaptureQueriesListener.logUpdateQueries();
assertEquals(8, myCaptureQueriesListener.countUpdateQueries());
assertEquals(0, myCaptureQueriesListener.countDeleteQueries());
assertEquals(4, myCaptureQueriesListener.countDeleteQueries());
/*
* Third time with mass ingestion mode enabled
@ -3104,7 +3104,7 @@ public class FhirResourceDaoR4QueryCountTest extends BaseResourceProviderR4Test
assertEquals(3, myCaptureQueriesListener.countSelectQueriesForCurrentThread());
assertEquals(6, myCaptureQueriesListener.countInsertQueriesForCurrentThread());
assertEquals(1, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(1, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(1, myCaptureQueriesListener.countCommits());
assertEquals(0, myCaptureQueriesListener.countRollbacks());
@ -3496,8 +3496,8 @@ public class FhirResourceDaoR4QueryCountTest extends BaseResourceProviderR4Test
assertEquals(6, myCaptureQueriesListener.countSelectQueriesForCurrentThread());
myCaptureQueriesListener.logInsertQueries();
assertEquals(4, myCaptureQueriesListener.countInsertQueriesForCurrentThread());
assertEquals(6, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(7, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(2, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
ourLog.info(myFhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(outcome));
IdType patientId = new IdType(outcome.getEntry().get(1).getResponse().getLocation());
@ -3579,8 +3579,8 @@ public class FhirResourceDaoR4QueryCountTest extends BaseResourceProviderR4Test
myCaptureQueriesListener.logSelectQueriesForCurrentThread();
assertEquals(6, myCaptureQueriesListener.countSelectQueriesForCurrentThread());
assertEquals(2, myCaptureQueriesListener.countInsertQueriesForCurrentThread());
assertEquals(5, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(6, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(2, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
}

View File

@ -1,12 +1,10 @@
package ca.uhn.fhir.jpa.dao.r4;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertFalse;
import ca.uhn.fhir.i18n.Msg;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.jpa.api.dao.IFhirResourceDao;
import ca.uhn.fhir.jpa.api.dao.IFhirSystemDao;
import ca.uhn.fhir.jpa.api.model.HistoryCountModeEnum;
import ca.uhn.fhir.jpa.api.pid.StreamTemplate;
import ca.uhn.fhir.jpa.dao.BaseHapiFhirDao;
@ -33,11 +31,13 @@ import ca.uhn.fhir.model.api.Include;
import ca.uhn.fhir.model.api.ResourceMetadataKeyEnum;
import ca.uhn.fhir.model.valueset.BundleEntrySearchModeEnum;
import ca.uhn.fhir.model.valueset.BundleEntryTransactionMethodEnum;
import ca.uhn.fhir.parser.IParser;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.api.MethodOutcome;
import ca.uhn.fhir.rest.api.SortOrderEnum;
import ca.uhn.fhir.rest.api.SortSpec;
import ca.uhn.fhir.rest.api.server.IBundleProvider;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
import ca.uhn.fhir.rest.api.server.storage.IResourcePersistentId;
import ca.uhn.fhir.rest.param.DateParam;
@ -114,8 +114,10 @@ import org.hl7.fhir.r4.model.Reference;
import org.hl7.fhir.r4.model.SimpleQuantity;
import org.hl7.fhir.r4.model.StringType;
import org.hl7.fhir.r4.model.StructureDefinition;
import org.hl7.fhir.r4.model.Task;
import org.hl7.fhir.r4.model.Timing;
import org.hl7.fhir.r4.model.UriType;
import org.intellij.lang.annotations.Language;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
@ -142,6 +144,7 @@ import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
@ -150,9 +153,12 @@ import static ca.uhn.fhir.rest.api.Constants.PARAM_HAS;
import static org.apache.commons.lang3.StringUtils.countMatches;
import static org.apache.commons.lang3.StringUtils.defaultString;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.fail;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.fail;
@SuppressWarnings({"unchecked", "deprecation", "Duplicates"})
@ -4316,6 +4322,158 @@ public class FhirResourceDaoR4Test extends BaseJpaR4Test {
assertEquals(ids, createdIds);
}
@Test
public void bundle1CreatesResourceByCondition_bundle2UpdatesExistingResourceToNotMatchConditionThenCreatesBySameCondition_shouldPass() {
// setup
IParser parser = myFhirContext.newJsonParser();
String idToReplace = "/Task/11852"; // in bundle 2
String identifierSystem = "https://tempuri.org";
Bundle bundle1;
Bundle bundle2;
{
@Language("JSON")
String bundleStr = """
{
"resourceType": "Bundle",
"type": "transaction",
"entry": [
{
"fullUrl": "urn:uuid:1fee7dea-c2a8-47b1-80a9-a681457cc44f",
"resource": {
"resourceType": "Task",
"identifier": [
{
"system": "https://tempuri.org",
"value": "t1"
}
]
},
"request": {
"method": "POST",
"url": "/Task",
"ifNoneExist": "identifier=https://tempuri.org|t1"
}
}
]
}
""";
bundle1 = parser.parseResource(Bundle.class, bundleStr);
}
{
@Language("JSON")
String bundleStr = """
{
"resourceType": "Bundle",
"type": "transaction",
"entry": [
{
"fullUrl": "urn:uuid:1fee7dea-c2a8-47b1-80a9-a681457cc44f",
"resource": {
"resourceType": "Task",
"identifier": [
{
"system": "https://tempuri.org",
"value": "t1"
}
]
},
"request": {
"method": "POST",
"url": "/Task",
"ifNoneExist": "identifier=https://tempuri.org|t1"
}
},
{
"fullUrl": "http://localhost:8000/Task/11852",
"resource": {
"resourceType": "Task",
"identifier": [
{
"system": "https://tempuri.org",
"value": "t2"
}
]
},
"request": {
"method": "PUT",
"url": "/Task/11852"
}
}
]
}
""";
bundle2 = parser.parseResource(Bundle.class, bundleStr);
}
IFhirSystemDao<Bundle, ?> systemDao = myDaoRegistry.getSystemDao();
RequestDetails reqDets = new SystemRequestDetails();
Bundle createdBundle;
String id;
{
// create bundle1
createdBundle = systemDao.transaction(reqDets, bundle1);
assertNotNull(createdBundle);
assertFalse(createdBundle.getEntry().isEmpty());
Optional<BundleEntryComponent> entry = createdBundle.getEntry()
.stream().filter(e -> e.getResponse() != null && e.getResponse().getStatus().contains("201 Created"))
.findFirst();
assertTrue(entry.isPresent());
String idAndVersion = entry.get().getResponse().getLocation();
if (idAndVersion.contains("/_history")) {
IIdType idt = new IdType(idAndVersion);
id = idt.toVersionless().getValue();
} else {
id = idAndVersion;
}
// verify task creation
Task task = getTaskForId(id, identifierSystem);
assertTrue(task.getIdentifier().stream().anyMatch(i -> i.getSystem().equals(identifierSystem) && i.getValue().equals("t1")));
}
// update the second bundle to use the already-saved Task's id
Optional<BundleEntryComponent> entryComponent = bundle2.getEntry().stream()
.filter(e -> e.getRequest().getUrl().equals(idToReplace))
.findFirst();
assertTrue(entryComponent.isPresent());
BundleEntryComponent entry = entryComponent.get();
entry.getRequest().setUrl(id);
entry.setFullUrl(entry.getFullUrl().replace(idToReplace, id));
{
// post second bundle first time
createdBundle = systemDao.transaction(reqDets, bundle2);
assertNotNull(createdBundle);
// check that the Task is recognized, but not recreated
assertFalse(createdBundle.getEntry().stream().anyMatch(e -> e.getResponse() != null && e.getResponse().getStatus().contains("201")));
// but the Task should have been updated
// (changing it to not match the identifier anymore)
assertTrue(createdBundle.getEntry().stream().anyMatch(e -> e.getResponse() != null && e.getResponse().getLocation().equals(id + "/_history/2")));
// verify task update
Task task = getTaskForId(id, identifierSystem);
assertTrue(task.getIdentifier().stream().anyMatch(i -> i.getSystem().equals(identifierSystem) && i.getValue().equals("t2")));
// post again; should succeed (not throw)
createdBundle = systemDao.transaction(reqDets, bundle2);
assertNotNull(createdBundle);
// should have created the second task
assertTrue(createdBundle.getEntry().stream()
.anyMatch(e -> e.getResponse() != null && e.getResponse().getStatus().contains("201 Created")));
}
}
private Task getTaskForId(String theId, String theIdentifier) {
Task task = myTaskDao.read(new IdType(theId), new SystemRequestDetails());
assertNotNull(task);
assertFalse(task.getIdentifier().isEmpty());
assertTrue(task.getIdentifier().stream().anyMatch(i -> i.getSystem().equals(theIdentifier)));
return task;
}
public static void assertConflictException(String theResourceType, ResourceVersionConflictException e) {
assertThat(e.getMessage()).matches(Msg.code(550) + Msg.code(515) + "Unable to delete [a-zA-Z]+/[0-9]+ because at least one resource has a reference to this resource. First reference found was resource " + theResourceType + "/[0-9]+ in path [a-zA-Z]+.[a-zA-Z]+");

View File

@ -0,0 +1,211 @@
package ca.uhn.fhir.jpa.dao.r4;
import ca.uhn.fhir.interceptor.api.Hook;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.interceptor.model.ReadPartitionIdRequestDetails;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.dao.tx.HapiTransactionService;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.server.exceptions.InternalErrorException;
import ca.uhn.fhir.rest.server.exceptions.InvalidRequestException;
import ca.uhn.fhir.rest.server.exceptions.ResourceGoneException;
import ca.uhn.fhir.util.BundleBuilder;
import jakarta.annotation.Nonnull;
import org.hl7.fhir.instance.model.api.IBaseResource;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.BooleanType;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.CodeType;
import org.hl7.fhir.r4.model.IdType;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.Patient;
import org.hl7.fhir.r4.model.StringType;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.transaction.annotation.Propagation;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;
public class PartitionedStrictTransactionR4Test extends BasePartitioningR4Test {
@Autowired
private HapiTransactionService myTransactionService;
@Override
public void before() throws Exception {
super.before();
myTransactionService.setTransactionPropagationWhenChangingPartitions(Propagation.REQUIRES_NEW);
}
@Override
public void after() {
super.after();
myTransactionService.setTransactionPropagationWhenChangingPartitions(HapiTransactionService.DEFAULT_TRANSACTION_PROPAGATION_WHEN_CHANGING_PARTITIONS);
myInterceptorRegistry.unregisterInterceptorsIf(t -> t instanceof MyPartitionSelectorInterceptor);
}
/**
* We manually register {@link MyPartitionSelectorInterceptor} for this test class
* as the partition interceptor
*/
@Override
protected void registerPartitionInterceptor() {
myInterceptorRegistry.registerInterceptor(new MyPartitionSelectorInterceptor());
}
@Override
protected void assertNoRemainingPartitionIds() {
// We don't use the superclass to manage partition IDs
}
@ParameterizedTest
@CsvSource({
"batch , 2",
"transaction , 1",
})
public void testSinglePartitionCreate(String theBundleType, int theExpectedCommitCount) {
BundleBuilder bb = new BundleBuilder(myFhirContext);
bb.addTransactionCreateEntry(newPatient());
bb.addTransactionCreateEntry(newPatient());
bb.setType(theBundleType);
Bundle input = bb.getBundleTyped();
// Test
myCaptureQueriesListener.clear();
Bundle output = mySystemDao.transaction(mySrd, input);
// Verify
assertEquals(theExpectedCommitCount, myCaptureQueriesListener.countCommits());
assertEquals(0, myCaptureQueriesListener.countRollbacks());
IdType id = new IdType(output.getEntry().get(0).getResponse().getLocation());
Patient actualPatient = myPatientDao.read(id, mySrd);
RequestPartitionId actualPartitionId = (RequestPartitionId) actualPatient.getUserData(Constants.RESOURCE_PARTITION_ID);
assertThat(actualPartitionId.getPartitionIds()).containsExactly(myPartitionId);
}
@Test
public void testSinglePartitionDelete() {
createPatient(withId("A"), withActiveTrue());
BundleBuilder bb = new BundleBuilder(myFhirContext);
bb.addTransactionDeleteEntry(new IdType("Patient/A"));
Bundle input = bb.getBundleTyped();
// Test
myCaptureQueriesListener.clear();
Bundle output = mySystemDao.transaction(mySrd, input);
// Verify
assertEquals(1, myCaptureQueriesListener.countCommits());
assertEquals(0, myCaptureQueriesListener.countRollbacks());
IdType id = new IdType(output.getEntry().get(0).getResponse().getLocation());
assertEquals("2", id.getVersionIdPart());
assertThrows(ResourceGoneException.class, () -> myPatientDao.read(id.toUnqualifiedVersionless(), mySrd));
}
@Test
public void testSinglePartitionPatch() {
IIdType id = createPatient(withId("A"), withActiveTrue());
assertTrue(myPatientDao.read(id.toUnqualifiedVersionless(), mySrd).getActive());
Parameters patch = new Parameters();
Parameters.ParametersParameterComponent operation = patch.addParameter();
operation.setName("operation");
operation
.addPart()
.setName("type")
.setValue(new CodeType("replace"));
operation
.addPart()
.setName("path")
.setValue(new StringType("Patient.active"));
operation
.addPart()
.setName("value")
.setValue(new BooleanType(false));
BundleBuilder bb = new BundleBuilder(myFhirContext);
bb.addTransactionFhirPatchEntry(new IdType("Patient/A"), patch);
Bundle input = bb.getBundleTyped();
// Test
myCaptureQueriesListener.clear();
Bundle output = mySystemDao.transaction(mySrd, input);
// Verify
assertEquals(1, myCaptureQueriesListener.countCommits());
assertEquals(0, myCaptureQueriesListener.countRollbacks());
id = new IdType(output.getEntry().get(0).getResponse().getLocation());
assertEquals("2", id.getVersionIdPart());
assertFalse(myPatientDao.read(id.toUnqualifiedVersionless(), mySrd).getActive());
}
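For reference, the patch built above corresponds to a FHIRPath Patch Parameters resource along these lines (a sketch of the expected JSON wire form, not output captured from this test):
{
  "resourceType": "Parameters",
  "parameter": [ {
    "name": "operation",
    "part": [
      { "name": "type", "valueCode": "replace" },
      { "name": "path", "valueString": "Patient.active" },
      { "name": "value", "valueBoolean": false }
    ]
  } ]
}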
@Test
public void testMultipleNonMatchingPartitions() {
BundleBuilder bb = new BundleBuilder(myFhirContext);
bb.addTransactionCreateEntry(newPatient());
bb.addTransactionCreateEntry(newObservation());
Bundle input = bb.getBundleTyped();
// Test
var e = assertThrows(InvalidRequestException.class, () -> mySystemDao.transaction(mySrd, input));
assertThat(e.getMessage()).contains("HAPI-2541: Can not process transaction with 2 entries: Entries require access to multiple/conflicting partitions");
}
private static @Nonnull Patient newPatient() {
Patient patient = new Patient();
patient.setActive(true);
return patient;
}
private static @Nonnull Observation newObservation() {
Observation observation = new Observation();
observation.setStatus(Observation.ObservationStatus.FINAL);
return observation;
}
public class MyPartitionSelectorInterceptor {
@Hook(Pointcut.STORAGE_PARTITION_IDENTIFY_CREATE)
public RequestPartitionId selectPartitionCreate(IBaseResource theResource) {
String resourceType = myFhirContext.getResourceType(theResource);
return selectPartition(resourceType);
}
@Hook(Pointcut.STORAGE_PARTITION_IDENTIFY_READ)
public RequestPartitionId selectPartitionRead(ReadPartitionIdRequestDetails theDetails) {
return selectPartition(theDetails.getResourceType());
}
@Nonnull
private RequestPartitionId selectPartition(String theResourceType) {
switch (theResourceType) {
case "Patient":
return RequestPartitionId.fromPartitionId(myPartitionId);
case "Observation":
return RequestPartitionId.fromPartitionId(myPartitionId2);
case "SearchParameter":
case "Organization":
return RequestPartitionId.defaultPartition();
default:
throw new InternalErrorException("Don't know how to handle resource type: " + theResourceType);
}
}
}
}

View File

@ -47,8 +47,7 @@ public class PartitioningNonNullDefaultPartitionR4Test extends BasePartitioningR
addCreateDefaultPartition();
// we need two read partition accesses for when the creation of the SP triggers a reindex of Patient
addReadDefaultPartition(); // one for search param validation
addReadDefaultPartition(); // one to rewrite the resource url
addReadDefaultPartition(); // and one for the job request itself
addReadDefaultPartition(); // and one for the reindex job
SearchParameter sp = new SearchParameter();
sp.addBase("Patient");
sp.setStatus(Enumerations.PublicationStatus.ACTIVE);
@ -80,8 +79,7 @@ public class PartitioningNonNullDefaultPartitionR4Test extends BasePartitioningR
addCreateDefaultPartition();
// we need two read partition accesses for when the creation of the SP triggers a reindex of Patient
addReadDefaultPartition(); // one for search param validation
addReadDefaultPartition(); // one to rewrite the resource url
addReadDefaultPartition(); // and one for the job request itself
addReadDefaultPartition(); // and one for the reindex job
SearchParameter sp = new SearchParameter();
sp.setId("SearchParameter/A");
sp.addBase("Patient");

View File

@ -3,6 +3,7 @@ package ca.uhn.fhir.jpa.dao.r4;
import ca.uhn.fhir.batch2.api.IJobCoordinator;
import ca.uhn.fhir.batch2.jobs.expunge.DeleteExpungeAppCtx;
import ca.uhn.fhir.batch2.jobs.expunge.DeleteExpungeJobParameters;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.batch2.model.JobInstance;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
import ca.uhn.fhir.context.RuntimeResourceDefinition;
@ -90,13 +91,11 @@ import java.util.stream.Collectors;
import static ca.uhn.fhir.util.TestUtil.sleepAtLeast;
import static org.apache.commons.lang3.StringUtils.countMatches;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.fail;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.fail;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
@ -173,7 +172,7 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
// Look up the referenced subject/patient
String sql = selectQueries.get(0).getSql(true, false).toLowerCase();
assertThat(sql).contains(" from hfj_resource ");
assertEquals(0, StringUtils.countMatches(selectQueries.get(0).getSql(true, false).toLowerCase(), "partition"));
assertEquals(2, StringUtils.countMatches(selectQueries.get(0).getSql(true, false).toLowerCase(), "partition"));
runInTransaction(() -> {
List<ResourceLink> resLinks = myResourceLinkDao.findAll();
@ -181,6 +180,8 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
assertEquals(2, resLinks.size());
assertEquals(obsId.getIdPartAsLong(), resLinks.get(0).getSourceResourcePid());
assertEquals(patientId.getIdPartAsLong(), resLinks.get(0).getTargetResourcePid());
assertEquals(myPartitionId, resLinks.get(0).getTargetResourcePartitionId().getPartitionId());
assertLocalDateFromDbMatches(myPartitionDate, resLinks.get(0).getTargetResourcePartitionId().getPartitionDate());
});
}
@ -465,6 +466,8 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
assertEquals(1, resourceLinks.size());
assertEquals(myPartitionId, resourceLinks.get(0).getPartitionId().getPartitionId().intValue());
assertLocalDateFromDbMatches(myPartitionDate, resourceLinks.get(0).getPartitionId().getPartitionDate());
assertEquals(myPartitionId, resourceLinks.get(0).getTargetResourcePartitionId().getPartitionId().intValue());
assertLocalDateFromDbMatches(myPartitionDate, resourceLinks.get(0).getTargetResourcePartitionId().getPartitionDate());
// HFJ_RES_PARAM_PRESENT
List<SearchParamPresentEntity> presents = mySearchParamPresentDao.findAllForResource(resourceTable);
@ -658,6 +661,8 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
addCreatePartition(myPartitionId, myPartitionDate);
addCreatePartition(myPartitionId, myPartitionDate);
addCreatePartition(myPartitionId, myPartitionDate);
addCreatePartition(myPartitionId, myPartitionDate);
addCreatePartition(myPartitionId, myPartitionDate);
Bundle input = new Bundle();
input.setType(Bundle.BundleType.TRANSACTION);
@ -713,8 +718,10 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
assertEquals(1, myObservationDao.search(SearchParameterMap.newSynchronous(), mySrd).size());
DeleteExpungeJobParameters jobParameters = new DeleteExpungeJobParameters();
jobParameters.addUrl("Patient?_id=" + p1.getIdPart() + "," + p2.getIdPart());
jobParameters.setRequestPartitionId(RequestPartitionId.fromPartitionId(myPartitionId));
PartitionedUrl partitionedUrl = new PartitionedUrl()
.setUrl("Patient?_id=" + p1.getIdPart() + "," + p2.getIdPart())
.setRequestPartitionId(RequestPartitionId.fromPartitionId(myPartitionId));
jobParameters.addPartitionedUrl(partitionedUrl);
jobParameters.setCascade(true);
JobInstanceStartRequest startRequest = new JobInstanceStartRequest();
@ -1375,7 +1382,8 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
// Only the read columns should be used, no criteria use partition
assertThat(searchSql).as(searchSql).contains("PARTITION_ID IN ('1')");
assertThat(StringUtils.countMatches(searchSql, "PARTITION_ID")).as(searchSql).isEqualTo(1);
assertThat(StringUtils.countMatches(searchSql, "PARTITION_ID,")).as(searchSql).isEqualTo(1);
assertThat(StringUtils.countMatches(searchSql, "PARTITION_DATE")).as(searchSql).isEqualTo(1);
}
// Read in null Partition
@ -1428,7 +1436,8 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
String searchSql = myCaptureQueriesListener.getSelectQueriesForCurrentThread().get(0).getSql(true, false).toUpperCase();
ourLog.info("Search SQL:\n{}", searchSql);
assertThat(searchSql).as(searchSql).contains("PARTITION_ID IN ('1')");
assertThat(StringUtils.countMatches(searchSql, "PARTITION_ID")).as(searchSql).isEqualTo(1);
assertThat(StringUtils.countMatches(searchSql, "PARTITION_ID,")).as(searchSql).isEqualTo(1);
assertThat(StringUtils.countMatches(searchSql, "PARTITION_DATE")).as(searchSql).isEqualTo(1);
// Second SQL performs the search
searchSql = myCaptureQueriesListener.getSelectQueriesForCurrentThread().get(1).getSql(true, false).toUpperCase();
@ -2086,9 +2095,11 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
myCaptureQueriesListener.logSelectQueriesForCurrentThread(1);
assertThat(outcome.getResources(0, 1)).hasSize(1);
String searchSql = myCaptureQueriesListener.getSelectQueriesForCurrentThread().get(0).getSql(true, true);
String searchSql = myCaptureQueriesListener.getSelectQueriesForCurrentThread().get(0).getSql(true, false);
ourLog.info("Search SQL:\n{}", searchSql);
assertThat(StringUtils.countMatches(searchSql, "PARTITION_ID")).as(searchSql).isEqualTo(1);
assertThat(searchSql).as(searchSql).contains("PARTITION_ID in ('1')");
assertThat(StringUtils.countMatches(searchSql, "PARTITION_ID,")).as(searchSql).isEqualTo(1);
assertThat(StringUtils.countMatches(searchSql, "PARTITION_DATE")).as(searchSql).isEqualTo(1);
}
@ -2875,7 +2886,7 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
ourLog.info("About to start transaction");
for (int i = 0; i < 40; i++) {
for (int i = 0; i < 60; i++) {
addCreatePartition(1, null);
}
@ -2908,7 +2919,7 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
assertEquals(4, myCaptureQueriesListener.countInsertQueriesForCurrentThread());
myCaptureQueriesListener.logUpdateQueriesForCurrentThread();
assertEquals(8, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(4, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
/*
* Third time with mass ingestion mode enabled
@ -2987,8 +2998,8 @@ public class PartitioningSqlR4Test extends BasePartitioningR4Test {
assertEquals(26, myCaptureQueriesListener.countSelectQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countInsertQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(326, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(326, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(1, myCaptureQueriesListener.countCommits());
assertEquals(0, myCaptureQueriesListener.countRollbacks());

View File

@ -4,6 +4,7 @@ import static org.junit.jupiter.api.Assertions.assertEquals;
import ca.uhn.fhir.batch2.api.IJobDataSink;
import ca.uhn.fhir.batch2.api.VoidModel;
import ca.uhn.fhir.batch2.jobs.chunk.ResourceIdListWorkChunkJson;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexJobParameters;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexStep;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
@ -45,7 +46,7 @@ public class ReindexStepTest {
RequestPartitionId partitionId = RequestPartitionId.fromPartitionId(expectedPartitionId);
ResourceIdListWorkChunkJson data = new ResourceIdListWorkChunkJson(List.of(), partitionId);
ReindexJobParameters reindexJobParameters = new ReindexJobParameters();
reindexJobParameters.setRequestPartitionId(partitionId);
reindexJobParameters.addPartitionedUrl(new PartitionedUrl().setRequestPartitionId(partitionId));
when(myHapiTransactionService.withRequest(any())).thenCallRealMethod();
when(myHapiTransactionService.buildExecutionBuilder(any())).thenCallRealMethod();

View File

@ -0,0 +1,88 @@
package ca.uhn.fhir.jpa.interceptor;
import ca.uhn.fhir.batch2.model.StatusEnum;
import ca.uhn.fhir.interceptor.api.Hook;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.interceptor.model.ReadPartitionIdRequestDetails;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.entity.PartitionEntity;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.provider.BaseResourceProviderR4Test;
import ca.uhn.fhir.rest.server.provider.ProviderConstants;
import jakarta.annotation.Nonnull;
import org.hl7.fhir.instance.model.api.IBaseResource;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.StringType;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.assertj.core.api.Assertions.assertThat;
public class ResourceTypePartitionInterceptorR4Test extends BaseResourceProviderR4Test {
private final MyPartitionSelectorInterceptor myPartitionInterceptor = new MyPartitionSelectorInterceptor();
@Override
@BeforeEach
public void before() {
myPartitionSettings.setPartitioningEnabled(true);
myPartitionSettings.setAllowReferencesAcrossPartitions(PartitionSettings.CrossPartitionReferenceMode.ALLOWED_UNQUALIFIED);
myInterceptorRegistry.registerInterceptor(myPartitionInterceptor);
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(1).setName("PART-1"), null);
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(2).setName("PART-2"), null);
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(3).setName("PART-3"), null);
}
@AfterEach
public void after() {
myPartitionSettings.setPartitioningEnabled(new PartitionSettings().isPartitioningEnabled());
myPartitionSettings.setAllowReferencesAcrossPartitions(new PartitionSettings().getAllowReferencesAcrossPartitions());
myInterceptorRegistry.unregisterInterceptor(myPartitionInterceptor);
}
@ParameterizedTest
@CsvSource(value = {"Patient?, 1", "Observation?, 1", ",3"})
public void reindex_withUrl_completesSuccessfully(String theUrl, int theExpectedIndexedResourceCount) {
IIdType patientId = createPatient(withGiven("John"));
createObservation(withSubject(patientId));
createEncounter();
Parameters input = new Parameters();
input.setParameter(ProviderConstants.OPERATION_REINDEX_PARAM_URL, theUrl);
Parameters response = myClient
.operation()
.onServer()
.named(ProviderConstants.OPERATION_REINDEX)
.withParameters(input)
.execute();
String jobId = ((StringType)response.getParameterValue(ProviderConstants.OPERATION_REINDEX_RESPONSE_JOB_ID)).getValue();
myBatch2JobHelper.awaitJobHasStatus(jobId, StatusEnum.COMPLETED);
assertThat(myBatch2JobHelper.getCombinedRecordsProcessed(jobId)).isEqualTo(theExpectedIndexedResourceCount);
}
public class MyPartitionSelectorInterceptor {
@Hook(Pointcut.STORAGE_PARTITION_IDENTIFY_CREATE)
public RequestPartitionId selectPartitionCreate(IBaseResource theResource) {
return selectPartition(myFhirContext.getResourceType(theResource));
}
@Hook(Pointcut.STORAGE_PARTITION_IDENTIFY_READ)
public RequestPartitionId selectPartitionRead(ReadPartitionIdRequestDetails theDetails) {
return selectPartition(theDetails.getResourceType());
}
@Nonnull
private static RequestPartitionId selectPartition(String resourceType) {
return switch (resourceType) {
case "Patient" -> RequestPartitionId.fromPartitionId(1);
case "Observation" -> RequestPartitionId.fromPartitionId(2);
default -> RequestPartitionId.fromPartitionId(3);
};
}
}
}

View File

@ -11,9 +11,9 @@ import ca.uhn.fhir.jpa.api.model.ExpungeOptions;
import ca.uhn.fhir.jpa.dao.r4.BasePartitioningR4Test;
import ca.uhn.fhir.jpa.entity.PartitionEntity;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.jpa.subscription.BaseSubscriptionsR4Test;
import ca.uhn.fhir.jpa.subscription.resthook.RestHookTestR4Test;
import ca.uhn.fhir.jpa.model.config.SubscriptionSettings;
import ca.uhn.fhir.jpa.subscription.triggering.ISubscriptionTriggeringSvc;
import ca.uhn.fhir.jpa.subscription.triggering.SubscriptionTriggeringSvcImpl;
import ca.uhn.fhir.jpa.test.util.StoppableSubscriptionDeliveringRestHookSubscriber;
@ -26,7 +26,12 @@ import jakarta.servlet.ServletException;
import org.awaitility.core.ConditionTimeoutException;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.instance.model.api.IPrimitiveType;
import org.hl7.fhir.r4.model.*;
import org.hl7.fhir.r4.model.BooleanType;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.Patient;
import org.hl7.fhir.r4.model.Resource;
import org.hl7.fhir.r4.model.Subscription;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
@ -43,9 +48,9 @@ import java.util.List;
import java.util.stream.Stream;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.fail;
import static org.awaitility.Awaitility.await;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.fail;
public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4Test {
private static final Logger ourLog = LoggerFactory.getLogger(RestHookTestR4Test.class);
@ -60,6 +65,7 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
public static final RequestPartitionId REQ_PART_1 = RequestPartitionId.fromPartitionNames(PARTITION_1);
static final String PARTITION_2 = "PART-2";
public static final RequestPartitionId REQ_PART_2 = RequestPartitionId.fromPartitionNames(PARTITION_2);
public static final RequestPartitionId REQ_PART_DEFAULT = RequestPartitionId.defaultPartition();
protected MyReadWriteInterceptor myPartitionInterceptor;
protected LocalDate myPartitionDate;
@ -127,7 +133,7 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
String criteria1 = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
Subscription subscription = newSubscription(criteria1, payload);
assertEquals(mySrdInterceptorService.getAllRegisteredInterceptors().size(), 1);
assertThat(mySrdInterceptorService.getAllRegisteredInterceptors()).hasSize(1);
myDaoRegistry.getResourceDao("Subscription").create(subscription, mySrd);
@ -150,13 +156,13 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria1 = "Patient?active=true";
Subscription subscription = newSubscription(criteria1, payload);
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r4.model.BooleanType().setValue(true));
assertEquals(mySrdInterceptorService.getAllRegisteredInterceptors().size(), 1);
assertThat(mySrdInterceptorService.getAllRegisteredInterceptors()).hasSize(1);
myDaoRegistry.getResourceDao("Subscription").create(subscription, mySrd);
myDaoRegistry.getResourceDao("Subscription").create(subscription, new SystemRequestDetails().setRequestPartitionId(RequestPartitionId.defaultPartition()));
waitForActivatedSubscriptionCount(1);
@ -184,10 +190,8 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
}
}
@ParameterizedTest
@ValueSource(booleans = {true, false})
public void testManualTriggeredSubscriptionDoesNotCheckOutsideOfPartition(boolean theIsCrossPartitionEnabled) throws Exception {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(theIsCrossPartitionEnabled);
@Test
public void testManualTriggeredSubscriptionDoesNotMatchOnAllPartitions() throws Exception {
String payload = "application/fhir+json";
String code = "1000000050";
String criteria1 = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
@ -201,8 +205,9 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
myPartitionInterceptor.setRequestPartitionId(REQ_PART_1);
IIdType observationIdPartitionOne = observation.create(createBaseObservation(code, "SNOMED-CT"), mySrd).getId();
//Given: We create a subscrioption on Partition 1
IIdType subscriptionId = myDaoRegistry.getResourceDao("Subscription").create(newSubscription(criteria1, payload), mySrd).getId();
//Given: We create a subscription on partition 1
Subscription subscription = newSubscription(criteria1, payload);
IIdType subscriptionId = myDaoRegistry.getResourceDao("Subscription").create(subscription, mySrd).getId();
waitForActivatedSubscriptionCount(1);
ArrayList<IPrimitiveType<String>> searchUrlList = new ArrayList<>();
@ -213,14 +218,49 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
waitForQueueToDrain();
List<Observation> resourceUpdates = BaseSubscriptionsR4Test.ourObservationProvider.getResourceUpdates();
if (theIsCrossPartitionEnabled) {
assertEquals(2, resourceUpdates.size());
assertEquals(Stream.of(observationIdPartitionOne, observationIdPartitionTwo).map(Object::toString).sorted().toList(),
resourceUpdates.stream().map(Resource::getId).sorted().toList());
} else {
assertEquals(1, resourceUpdates.size());
assertEquals(observationIdPartitionOne.toString(), resourceUpdates.get(0).getId());
}
assertEquals(1, resourceUpdates.size());
assertEquals(observationIdPartitionOne.toString(), resourceUpdates.get(0).getId());
String responseValue = resultParameters.getParameter().get(0).getValue().primitiveValue();
assertThat(responseValue).contains("Subscription triggering job submitted as JOB ID");
}
@Test
public void testManualTriggeredCrossPartitionedSubscriptionDoesMatchOnAllPartitions() throws Exception {
mySubscriptionSettings.setCrossPartitionSubscriptionEnabled(true);
String payload = "application/fhir+json";
String code = "1000000050";
String criteria1 = "Observation?code=SNOMED-CT|" + code + "&_format=xml";
//Given: We store a resource in partition 2
myPartitionInterceptor.setRequestPartitionId(REQ_PART_2);
final IFhirResourceDao observation = myDaoRegistry.getResourceDao("Observation");
IIdType observationIdPartitionTwo = observation.create(createBaseObservation(code, "SNOMED-CT"), mySrd).getId();
//Given: We store a similar resource in partition 1
myPartitionInterceptor.setRequestPartitionId(REQ_PART_1);
IIdType observationIdPartitionOne = observation.create(createBaseObservation(code, "SNOMED-CT"), mySrd).getId();
//Given: We create a subscription on the default partition
myPartitionInterceptor.setRequestPartitionId(REQ_PART_DEFAULT);
Subscription subscription = newSubscription(criteria1, payload);
subscription.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new org.hl7.fhir.r4.model.BooleanType().setValue(true));
IIdType subscriptionId = myDaoRegistry.getResourceDao("Subscription").create(subscription, mySrd).getId();
waitForActivatedSubscriptionCount(1);
ArrayList<IPrimitiveType<String>> searchUrlList = new ArrayList<>();
searchUrlList.add(new StringDt("Observation?"));
Parameters resultParameters = (Parameters) mySubscriptionTriggeringSvc.triggerSubscription(null, searchUrlList, subscriptionId, mySrd);
mySubscriptionTriggeringSvc.runDeliveryPass();
waitForQueueToDrain();
List<Observation> resourceUpdates = BaseSubscriptionsR4Test.ourObservationProvider.getResourceUpdates();
assertEquals(2, resourceUpdates.size());
assertEquals(Stream.of(observationIdPartitionOne, observationIdPartitionTwo).map(Object::toString).sorted().toList(),
resourceUpdates.stream().map(Resource::getId).sorted().toList());
String responseValue = resultParameters.getParameter().get(0).getValue().primitiveValue();
assertThat(responseValue).contains("Subscription triggering job submitted as JOB ID");
@ -240,17 +280,17 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
myPartitionInterceptor.setRequestPartitionId(REQ_PART_1);
myDaoRegistry.getResourceDao("Observation").create(createBaseObservation(code, "SNOMED-CT"), mySrd).getId();
//Given: We create a subscription on Partition 1
//Given: We create a subscription on default partition
Subscription theResource = newSubscription(criteria1, payload);
theResource.addExtension(HapiExtensions.EXTENSION_SUBSCRIPTION_CROSS_PARTITION, new BooleanType(Boolean.TRUE));
myPartitionInterceptor.setRequestPartitionId(RequestPartitionId.defaultPartition());
myPartitionInterceptor.setRequestPartitionId(REQ_PART_DEFAULT);
IIdType subscriptionId = myDaoRegistry.getResourceDao("Subscription").create(theResource, mySrd).getId();
waitForActivatedSubscriptionCount(1);
ArrayList<IPrimitiveType<String>> searchUrlList = new ArrayList<>();
searchUrlList.add(new StringDt("Observation?"));
myPartitionInterceptor.setRequestPartitionId(RequestPartitionId.defaultPartition());
myPartitionInterceptor.setRequestPartitionId(REQ_PART_DEFAULT);
mySubscriptionTriggeringSvc.triggerSubscription(null, searchUrlList, subscriptionId, mySrd);
mySubscriptionTriggeringSvc.runDeliveryPass();
@ -273,7 +313,7 @@ public class PartitionedSubscriptionTriggeringR4Test extends BaseSubscriptionsR4
// Create the subscription now
DaoMethodOutcome subscriptionOutcome = myDaoRegistry.getResourceDao("Subscription").create(newSubscription(criteria1, payload), mySrd);
assertEquals(mySrdInterceptorService.getAllRegisteredInterceptors().size(), 1);
assertThat(mySrdInterceptorService.getAllRegisteredInterceptors()).hasSize(1);
Subscription subscription = (Subscription) subscriptionOutcome.getResource();

View File

@ -7,8 +7,9 @@ import ca.uhn.fhir.interceptor.api.IPointcut;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.jpa.delete.job.ReindexTestHelper;
import ca.uhn.fhir.jpa.model.entity.ResourceIndexedSearchParamToken;
import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.reindex.ReindexTestHelper;
import ca.uhn.fhir.rest.api.CacheControlDirective;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import ca.uhn.fhir.rest.server.provider.ProviderConstants;
@ -74,10 +75,10 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
public void testDeleteExpungeOperation() {
// Create patients
IIdType idAT = createPatient(withTenant(TENANT_A), withActiveTrue());
IIdType idAF = createPatient(withTenant(TENANT_A), withActiveFalse());
IIdType idBT = createPatient(withTenant(TENANT_B), withActiveTrue());
IIdType idBF = createPatient(withTenant(TENANT_B), withActiveFalse());
createPatient(withTenant(TENANT_A), withActiveTrue());
createPatient(withTenant(TENANT_A), withActiveFalse());
createPatient(withTenant(TENANT_B), withActiveTrue());
createPatient(withTenant(TENANT_B), withActiveFalse());
// validate setup
assertEquals(2, getAllPatientsInTenant(TENANT_A).getTotal());
@ -103,7 +104,7 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
String jobId = BatchHelperR4.jobIdFromBatch2Parameters(response);
myBatch2JobHelper.awaitJobCompletion(jobId);
assertThat(interceptor.requestPartitionIds).hasSize(4);
assertThat(interceptor.requestPartitionIds).hasSize(3);
RequestPartitionId partitionId = interceptor.requestPartitionIds.get(0);
assertEquals(TENANT_B_ID, partitionId.getFirstPartitionIdOrNull());
assertEquals(TENANT_B, partitionId.getFirstPartitionNameOrNull());
@ -127,20 +128,20 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
IIdType obsFinalA = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension());
myTenantClientInterceptor.setTenantId(TENANT_B);
IIdType obsFinalB = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension());
doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension());
myTenantClientInterceptor.setTenantId(DEFAULT_PARTITION_NAME);
IIdType obsFinalD = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension());
doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension());
reindexTestHelper.createAlleleSearchParameter();
// The searchparam value is on the observation, but it hasn't been indexed yet
myTenantClientInterceptor.setTenantId(TENANT_A);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
myTenantClientInterceptor.setTenantId(TENANT_B);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
myTenantClientInterceptor.setTenantId(DEFAULT_PARTITION_NAME);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
// setup
Parameters input = new Parameters();
@ -163,13 +164,13 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
// validate
runInTransaction(()->{
runInTransaction(() -> {
long indexedSps = myResourceIndexedSearchParamTokenDao
.findAll()
.stream()
.filter(t->t.getParamName().equals("alleleName"))
.count();
assertEquals(1, indexedSps, ()->"Token indexes:\n * " + myResourceIndexedSearchParamTokenDao.findAll().stream().filter(t->t.getParamName().equals("alleleName")).map(ResourceIndexedSearchParamToken::toString).collect(Collectors.joining("\n * ")));
assertEquals(1, indexedSps, () -> "Token indexes:\n * " + myResourceIndexedSearchParamTokenDao.findAll().stream().filter(t->t.getParamName().equals("alleleName")).map(ResourceIndexedSearchParamToken::toString).collect(Collectors.joining("\n * ")));
});
List<String> alleleObservationIds = reindexTestHelper.getAlleleObservationIds(myClient);
@ -178,9 +179,9 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(1);
assertEquals(obsFinalA.getIdPart(), alleleObservationIds.get(0));
myTenantClientInterceptor.setTenantId(TENANT_B);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
myTenantClientInterceptor.setTenantId(DEFAULT_PARTITION_NAME);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
// Reindex default partition
myTenantClientInterceptor.setTenantId(DEFAULT_PARTITION_NAME);
@ -198,13 +199,13 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
ourLog.info("Search params: {}", mySearchParamRegistry.getActiveSearchParams("Observation").getSearchParamNames());
logAllTokenIndexes();
runInTransaction(()->{
runInTransaction(() -> {
long indexedSps = myResourceIndexedSearchParamTokenDao
.findAll()
.stream()
.filter(t->t.getParamName().equals("alleleName"))
.filter(t -> t.getParamName().equals("alleleName"))
.count();
assertEquals(3, indexedSps, ()->"Resources:\n * " + myResourceTableDao.findAll().stream().map(t->t.toString()).collect(Collectors.joining("\n * ")));
assertEquals(2, indexedSps, () -> "Resources:\n * " + myResourceTableDao.findAll().stream().map(ResourceTable::toString).collect(Collectors.joining("\n * ")));
});
myTenantClientInterceptor.setTenantId(DEFAULT_PARTITION_NAME);
@ -216,20 +217,20 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
ReindexTestHelper reindexTestHelper = new ReindexTestHelper(myFhirContext, myDaoRegistry, mySearchParamRegistry);
myTenantClientInterceptor.setTenantId(TENANT_A);
IIdType obsFinalA = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.FINAL));
IIdType obsCancelledA = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.CANCELLED));
doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.CANCELLED));
myTenantClientInterceptor.setTenantId(TENANT_B);
IIdType obsFinalB = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.FINAL));
IIdType obsCancelledB = doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.CANCELLED));
doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.FINAL));
doCreateResource(reindexTestHelper.buildObservationWithAlleleExtension(Observation.ObservationStatus.CANCELLED));
reindexTestHelper.createAlleleSearchParameter();
ourLog.info("Search params: {}", mySearchParamRegistry.getActiveSearchParams("Observation").getSearchParamNames());
// The searchparam value is on the observation, but it hasn't been indexed yet
myTenantClientInterceptor.setTenantId(TENANT_A);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
myTenantClientInterceptor.setTenantId(TENANT_B);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
// setup
Parameters input = new Parameters();
@ -259,7 +260,7 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(1);
assertEquals(obsFinalA.getIdPart(), alleleObservationIds.get(0));
myTenantClientInterceptor.setTenantId(TENANT_B);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).hasSize(0);
assertThat(reindexTestHelper.getAlleleObservationIds(myClient)).isEmpty();
}
private Bundle getAllPatientsInTenant(String theTenantId) {

View File

@ -347,7 +347,7 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
"\"reference\": \"CodeSystem/"
);
assertHierarchyContains(
assertHierarchyContainsExactly(
"CHEM seq=0",
" HB seq=0",
" NEUT seq=1",
@ -387,7 +387,7 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
"\"reference\": \"CodeSystem/"
);
assertHierarchyContains(
assertHierarchyContainsExactly(
"CHEM seq=0",
" HB seq=0",
" NEUT seq=1",
@ -457,7 +457,7 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
"\"reference\": \"CodeSystem/"
);
assertHierarchyContains(
assertHierarchyContainsExactly(
"1111222233 seq=0",
" 1111222234 seq=0"
);
@ -527,13 +527,45 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
"\"reference\": \"CodeSystem/"
);
assertHierarchyContains(
assertHierarchyContainsExactly(
"CHEM seq=0",
" HB seq=0",
" HBA seq=0"
);
}
@Test
public void testApplyDeltaAdd_UsingCodeSystem_NoDisplaySetOnConcepts() throws IOException {
CodeSystem codeSystem = new CodeSystem();
codeSystem.setUrl("http://foo/cs");
// setting codes is enough; no need to call setDisplay etc.
codeSystem.addConcept().setCode("Code1");
codeSystem.addConcept().setCode("Code2");
LoggingInterceptor interceptor = new LoggingInterceptor(true);
myClient.registerInterceptor(interceptor);
Parameters outcome = myClient
.operation()
.onType(CodeSystem.class)
.named(JpaConstants.OPERATION_APPLY_CODESYSTEM_DELTA_ADD)
.withParameter(Parameters.class, TerminologyUploaderProvider.PARAM_SYSTEM, new UriType("http://foo/cs"))
.andParameter(TerminologyUploaderProvider.PARAM_CODESYSTEM, codeSystem)
.prettyPrint()
.execute();
myClient.unregisterInterceptor(interceptor);
String encoded = myFhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(outcome);
ourLog.info(encoded);
assertThat(encoded).contains("\"valueInteger\": 2");
// assert both added codes are present in the hierarchy
assertHierarchyContainsExactly(
"Code1 seq=0",
"Code2 seq=0"
);
}
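For context, the outcome asserted above is a Parameters resource reporting how many concepts the delta touched. Pretty-printed, it has roughly this shape (a sketch only: the assertions in this file pin down just the valueInteger count and a "CodeSystem/" reference fragment, so the parameter names shown here are assumptions):
{
  "resourceType": "Parameters",
  "parameter": [ {
    "name": "conceptCount",
    "valueInteger": 2
  }, {
    "name": "target",
    "valueReference": { "reference": "CodeSystem/..." }
  } ]
}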
@Test
public void testApplyDeltaAdd_MissingSystem() throws IOException {
String conceptsCsv = loadResource("/custom_term/concepts.csv");
@@ -582,30 +614,12 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
}
@Test
public void testApplyDeltaRemove() throws IOException {
String conceptsCsv = loadResource("/custom_term/concepts.csv");
Attachment conceptsAttachment = new Attachment()
.setData(conceptsCsv.getBytes(Charsets.UTF_8))
.setContentType("text/csv")
.setUrl("file:/foo/concepts.csv");
String hierarchyCsv = loadResource("/custom_term/hierarchy.csv");
Attachment hierarchyAttachment = new Attachment()
.setData(hierarchyCsv.getBytes(Charsets.UTF_8))
.setContentType("text/csv")
.setUrl("file:/foo/hierarchy.csv");
public void testApplyDeltaRemove_UsingCsvFiles_RemoveAllCodes() throws IOException {
// Add the codes
myClient
.operation()
.onType(CodeSystem.class)
.named(JpaConstants.OPERATION_APPLY_CODESYSTEM_DELTA_ADD)
.withParameter(Parameters.class, TerminologyUploaderProvider.PARAM_SYSTEM, new UriType("http://foo/cs"))
.andParameter(TerminologyUploaderProvider.PARAM_FILE, conceptsAttachment)
.andParameter(TerminologyUploaderProvider.PARAM_FILE, hierarchyAttachment)
.prettyPrint()
.execute();
applyDeltaAddCustomTermCodes();
// And remove them
// And remove all of them using the same set of CSV files
LoggingInterceptor interceptor = new LoggingInterceptor(true);
myClient.registerInterceptor(interceptor);
Parameters outcome = myClient
@@ -613,8 +627,8 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
.onType(CodeSystem.class)
.named(JpaConstants.OPERATION_APPLY_CODESYSTEM_DELTA_REMOVE)
.withParameter(Parameters.class, TerminologyUploaderProvider.PARAM_SYSTEM, new UriType("http://foo/cs"))
.andParameter(TerminologyUploaderProvider.PARAM_FILE, conceptsAttachment)
.andParameter(TerminologyUploaderProvider.PARAM_FILE, hierarchyAttachment)
.andParameter(TerminologyUploaderProvider.PARAM_FILE, getCustomTermConceptsAttachment())
.andParameter(TerminologyUploaderProvider.PARAM_FILE, getCustomTermHierarchyAttachment())
.prettyPrint()
.execute();
myClient.unregisterInterceptor(interceptor);
@@ -622,8 +636,129 @@ public class TerminologyUploaderProviderR4Test extends BaseResourceProviderR4Tes
String encoded = myFhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(outcome);
ourLog.info(encoded);
assertThat(encoded).contains("\"valueInteger\": 5");
// providing no arguments, since there should be no codes left
assertHierarchyContainsExactly();
}
@Test
public void testApplyDeltaRemove_UsingConceptsCsvFileOnly() throws IOException {
// add some concepts
applyDeltaAddCustomTermCodes();
// Then remove two of them; providing values for DISPLAY is not necessary
String conceptsToRemoveCsvData = """
CODE,DISPLAY
HB,
NEUT,
""";
Attachment conceptsAttachment = createCsvAttachment(conceptsToRemoveCsvData, "file:/concepts.csv");
LoggingInterceptor interceptor = new LoggingInterceptor(true);
myClient.registerInterceptor(interceptor);
Parameters outcome = myClient
.operation()
.onType(CodeSystem.class)
.named(JpaConstants.OPERATION_APPLY_CODESYSTEM_DELTA_REMOVE)
.withParameter(Parameters.class, TerminologyUploaderProvider.PARAM_SYSTEM, new UriType("http://foo/cs"))
// submitting concepts is enough (no need to submit hierarchy)
.andParameter(TerminologyUploaderProvider.PARAM_FILE, conceptsAttachment)
.prettyPrint()
.execute();
myClient.unregisterInterceptor(interceptor);
String encoded = myFhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(outcome);
ourLog.info(encoded);
assertThat(encoded).contains("\"valueInteger\": 2");
// assert other codes remain, and HB and NEUT are removed
assertHierarchyContainsExactly(
"CHEM seq=0",
"MICRO seq=0",
" C&S seq=0"
);
}
@Test
public void testApplyDeltaRemove_UsingCodeSystemPayload() throws IOException {
// add some custom codes
applyDeltaAddCustomTermCodes();
// remove two of them using a CodeSystem payload
CodeSystem codeSystem = new CodeSystem();
codeSystem.setUrl("http://foo/cs");
// setting codes is enough for remove; no need to call setDisplay etc.
codeSystem.addConcept().setCode("HB");
codeSystem.addConcept().setCode("NEUT");
LoggingInterceptor interceptor = new LoggingInterceptor(true);
myClient.registerInterceptor(interceptor);
Parameters outcome = myClient
.operation()
.onType(CodeSystem.class)
.named(JpaConstants.OPERATION_APPLY_CODESYSTEM_DELTA_REMOVE)
.withParameter(Parameters.class, TerminologyUploaderProvider.PARAM_SYSTEM, new UriType("http://foo/cs"))
.andParameter(TerminologyUploaderProvider.PARAM_CODESYSTEM, codeSystem)
.prettyPrint()
.execute();
myClient.unregisterInterceptor(interceptor);
String encoded = myFhirContext.newJsonParser().setPrettyPrint(true).encodeResourceToString(outcome);
ourLog.info(encoded);
assertThat(encoded).contains("\"valueInteger\": 2");
// assert other codes remain, and HB and NEUT are removed
assertHierarchyContainsExactly(
"CHEM seq=0",
"MICRO seq=0",
" C&S seq=0"
);
}
private Attachment createCsvAttachment(String theData, String theUrl) {
return new Attachment()
.setData(theData.getBytes(Charsets.UTF_8))
.setContentType("text/csv")
.setUrl(theUrl);
}
private Attachment getCustomTermConceptsAttachment() throws IOException {
String conceptsCsv = loadResource("/custom_term/concepts.csv");
return createCsvAttachment(conceptsCsv, "file:/foo/concepts.csv");
}
private Attachment getCustomTermHierarchyAttachment() throws IOException {
String hierarchyCsv = loadResource("/custom_term/hierarchy.csv");
return createCsvAttachment(hierarchyCsv, "file:/foo/hierarchy.csv");
}
private void applyDeltaAddCustomTermCodes() throws IOException {
myClient
.operation()
.onType(CodeSystem.class)
.named(JpaConstants.OPERATION_APPLY_CODESYSTEM_DELTA_ADD)
.withParameter(Parameters.class, TerminologyUploaderProvider.PARAM_SYSTEM, new UriType("http://foo/cs"))
.andParameter(TerminologyUploaderProvider.PARAM_FILE, getCustomTermConceptsAttachment())
.andParameter(TerminologyUploaderProvider.PARAM_FILE, getCustomTermHierarchyAttachment())
.prettyPrint()
.execute();
assertHierarchyContainsExactly(
"CHEM seq=0",
" HB seq=0",
" NEUT seq=1",
"MICRO seq=0",
" C&S seq=0"
);
}
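For reference, the /custom_term fixtures these helpers load follow the same CODE,DISPLAY concept layout used inline in testApplyDeltaRemove_UsingConceptsCsvFileOnly, plus a parent/child hierarchy file. A sketch consistent with the hierarchy asserted above (the actual fixture contents are not part of this diff, so the display values and the PARENT,CHILD column names are assumptions):
concepts.csv
CODE,DISPLAY
CHEM,Chemistry
HB,Hemoglobin
NEUT,Neutrophils
MICRO,Microbiology
C&S,Culture and Sensitivity
hierarchy.csv
PARENT,CHILD
CHEM,HB
CHEM,NEUT
MICRO,C&S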
private static void addFile(ZipOutputStream theZos, String theFileName) throws IOException {
theZos.putNextEntry(new ZipEntry(theFileName));


@@ -1,127 +0,0 @@
package ca.uhn.fhir.jpa.reindex;
import static org.junit.jupiter.api.Assertions.assertEquals;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.pid.IResourcePidStream;
import ca.uhn.fhir.jpa.api.svc.IBatch2DaoSvc;
import ca.uhn.fhir.jpa.dao.tx.IHapiTransactionService;
import ca.uhn.fhir.jpa.searchparam.MatchUrlService;
import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
import ca.uhn.fhir.model.primitive.IdDt;
import ca.uhn.fhir.parser.DataFormatException;
import ca.uhn.fhir.rest.server.exceptions.InternalErrorException;
import org.hl7.fhir.instance.model.api.IIdType;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import org.springframework.beans.factory.annotation.Autowired;
import jakarta.annotation.Nonnull;
import java.time.LocalDate;
import java.time.Month;
import java.time.ZoneId;
import java.util.Date;
import java.util.List;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;
class Batch2DaoSvcImplTest extends BaseJpaR4Test {
private static final Date PREVIOUS_MILLENNIUM = toDate(LocalDate.of(1999, Month.DECEMBER, 31));
private static final Date TOMORROW = toDate(LocalDate.now().plusDays(1));
private static final String URL_PATIENT_EXPUNGE_TRUE = "Patient?_expunge=true";
private static final String PATIENT = "Patient";
@Autowired
private MatchUrlService myMatchUrlService;
@Autowired
private IHapiTransactionService myIHapiTransactionService ;
private IBatch2DaoSvc mySubject;
@BeforeEach
void beforeEach() {
mySubject = new Batch2DaoSvcImpl(myResourceTableDao, myMatchUrlService, myDaoRegistry, myFhirContext, myIHapiTransactionService);
}
// TODO: LD this test won't work with the nonUrl variant yet: error: No existing transaction found for transaction marked with propagation 'mandatory'
@Test
void fetchResourcesByUrlEmptyUrl() {
final InternalErrorException exception =
assertThrows(
InternalErrorException.class,
() -> mySubject.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), "")
.visitStream(Stream::toList));
assertEquals("HAPI-2422: this should never happen: URL is missing a '?'", exception.getMessage());
}
@Test
void fetchResourcesByUrlSingleQuestionMark() {
final IllegalArgumentException exception = assertThrows(IllegalArgumentException.class, () -> mySubject.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), "?").visitStream(Stream::toList));
assertEquals("theResourceName must not be blank", exception.getMessage());
}
@Test
void fetchResourcesByUrlNonsensicalResource() {
final DataFormatException exception = assertThrows(DataFormatException.class, () -> mySubject.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), "Banana?_expunge=true").visitStream(Stream::toList));
assertEquals("HAPI-1684: Unknown resource name \"Banana\" (this name is not known in FHIR version \"R4\")", exception.getMessage());
}
@ParameterizedTest
@ValueSource(ints = {0, 9, 10, 11, 21, 22, 23, 45})
void fetchResourcesByUrl(int expectedNumResults) {
final List<IIdType> patientIds = IntStream.range(0, expectedNumResults)
.mapToObj(num -> createPatient())
.toList();
final IResourcePidStream resourcePidList = mySubject.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), URL_PATIENT_EXPUNGE_TRUE);
final List<? extends IIdType> actualPatientIds =
resourcePidList.visitStream(s-> s.map(typePid -> new IdDt(typePid.resourceType, (Long) typePid.id.getId()))
.toList());
assertIdsEqual(patientIds, actualPatientIds);
}
@ParameterizedTest
@ValueSource(ints = {0, 9, 10, 11, 21, 22, 23, 45})
void fetchResourcesNoUrl(int expectedNumResults) {
final int pageSizeWellBelowThreshold = 2;
final List<IIdType> patientIds = IntStream.range(0, expectedNumResults)
.mapToObj(num -> createPatient())
.toList();
final IResourcePidStream resourcePidList = mySubject.fetchResourceIdStream(PREVIOUS_MILLENNIUM, TOMORROW, RequestPartitionId.defaultPartition(), null);
final List<? extends IIdType> actualPatientIds =
resourcePidList.visitStream(s-> s.map(typePid -> new IdDt(typePid.resourceType, (Long) typePid.id.getId()))
.toList());
assertIdsEqual(patientIds, actualPatientIds);
}
private static void assertIdsEqual(List<IIdType> expectedResourceIds, List<? extends IIdType> actualResourceIds) {
assertThat(actualResourceIds).hasSize(expectedResourceIds.size());
for (int index = 0; index < expectedResourceIds.size(); index++) {
final IIdType expectedIdType = expectedResourceIds.get(index);
final IIdType actualIdType = actualResourceIds.get(index);
assertEquals(expectedIdType.getResourceType(), actualIdType.getResourceType());
assertEquals(expectedIdType.getIdPartAsLong(), actualIdType.getIdPartAsLong());
}
}
@Nonnull
private static Date toDate(LocalDate theLocalDate) {
return Date.from(theLocalDate.atStartOfDay(ZoneId.systemDefault()).toInstant());
}
}


@@ -1,6 +1,5 @@
package ca.uhn.fhir.jpa.delete.job;
package ca.uhn.fhir.jpa.reindex;
import static org.junit.jupiter.api.Assertions.assertTrue;
import ca.uhn.fhir.batch2.api.IJobCoordinator;
import ca.uhn.fhir.batch2.api.IJobPersistence;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexAppCtx;
@@ -41,17 +40,17 @@ import java.util.List;
import java.util.stream.Stream;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.fail;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.fail;
@SuppressWarnings("SqlDialectInspection")
public class ReindexJobTest extends BaseJpaR4Test {
@Autowired
private IJobCoordinator myJobCoordinator;
@Autowired
private IJobPersistence myJobPersistence;


@@ -1,7 +1,8 @@
package ca.uhn.fhir.jpa.delete.job;
package ca.uhn.fhir.jpa.reindex;
import ca.uhn.fhir.batch2.api.IJobCoordinator;
import ca.uhn.fhir.batch2.jobs.parameters.JobParameters;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrlJobParameters;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexAppCtx;
import ca.uhn.fhir.batch2.model.JobInstance;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
@@ -11,7 +12,6 @@ import ca.uhn.fhir.jpa.entity.PartitionEntity;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.test.BaseJpaR4Test;
import ca.uhn.fhir.rest.api.server.SystemRequestDetails;
import ca.uhn.fhir.rest.server.interceptor.partition.RequestTenantPartitionInterceptor;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Patient;
import org.junit.jupiter.api.AfterEach;
@@ -28,15 +28,13 @@ import java.util.stream.Stream;
import static org.assertj.core.api.Assertions.assertThat;
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class ReindexJobWithPartitioningTest extends BaseJpaR4Test {
@Autowired
private IJobCoordinator myJobCoordinator;
private final RequestTenantPartitionInterceptor myPartitionInterceptor = new RequestTenantPartitionInterceptor();
@BeforeEach
public void before() {
myInterceptorRegistry.registerInterceptor(myPartitionInterceptor);
myPartitionSettings.setPartitioningEnabled(true);
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(1).setName("TestPartition1"), null);
myPartitionConfigSvc.createPartition(new PartitionEntity().setId(2).setName("TestPartition2"), null);
@@ -61,47 +59,77 @@ public class ReindexJobWithPartitioningTest extends BaseJpaR4Test {
@AfterEach
public void after() {
myInterceptorRegistry.unregisterInterceptor(myPartitionInterceptor);
myPartitionSettings.setPartitioningEnabled(new PartitionSettings().isPartitioningEnabled());
}
public static Stream<Arguments> getReindexParameters() {
List<RequestPartitionId> twoPartitions = List.of(RequestPartitionId.fromPartitionId(1), RequestPartitionId.fromPartitionId(2));
List<RequestPartitionId> partition1 = List.of(RequestPartitionId.fromPartitionId(1));
List<RequestPartitionId> allPartitions = List.of(RequestPartitionId.allPartitions());
RequestPartitionId partition1 = RequestPartitionId.fromPartitionId(1);
RequestPartitionId partition2 = RequestPartitionId.fromPartitionId(2);
RequestPartitionId allPartitions = RequestPartitionId.allPartitions();
return Stream.of(
// includes all resources from all partitions - partition 1, partition 2 and default partition
Arguments.of(List.of(), List.of(), false, 6),
// includes all Observations
Arguments.of(List.of("Observation?"), twoPartitions, false, 3),
// includes all Observations
Arguments.of(List.of("Observation?"), allPartitions, false, 3),
Arguments.of(List.of("Observation?"), List.of(), false, 0),
// includes Observations in partition 1
Arguments.of(List.of("Observation?"), partition1, true, 2),
// includes all Patients from all partitions - partition 1, partition 2 and default partition
Arguments.of(List.of("Patient?"), allPartitions, false, 3),
// includes Patients and Observations in partitions 1 and 2
Arguments.of(List.of("Observation?", "Patient?"), twoPartitions, false, 5),
// includes Observations from partition 1 and Patients from partition 2
Arguments.of(List.of("Observation?", "Patient?"), twoPartitions, true, 3),
// includes final Observations and Patients from partitions 1 and 2
Arguments.of(List.of("Observation?status=final", "Patient?"), twoPartitions, false, 4),
// includes final Observations from partition 1 and Patients from partition 2
Arguments.of(List.of("Observation?status=final", "Patient?"), twoPartitions, true, 2),
// includes final Observations and Patients from partitions 1
Arguments.of(List.of("Observation?status=final", "Patient?"), partition1, false, 2)
// 1. includes all resources
Arguments.of(List.of(), 6),
// 2. includes all resources from partition 1
Arguments.of(List.of(new PartitionedUrl().setRequestPartitionId(partition1)), 3),
// 3. includes all resources in all partitions
Arguments.of(List.of(new PartitionedUrl().setUrl("").setRequestPartitionId(allPartitions)), 6),
// 4. includes all Observations in partition 1 and partition 2
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(partition1),
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(partition2)
), 3),
// 5. includes all Observations in all partitions (partition 1, partition 2 and default partition)
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(allPartitions)),
3),
// 6. includes all Observations in partition 1
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(partition1)),
2),
// 7. includes all Patients from all partitions
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Patient?").setRequestPartitionId(allPartitions)
), 3),
// 8. includes Observations from partition 1 and Patients from partition 2
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(partition1),
new PartitionedUrl().setUrl("Patient?").setRequestPartitionId(partition2)
), 3),
// 9. includes all Observations and Patients from partitions 1 and 2
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(partition1),
new PartitionedUrl().setUrl("Observation?").setRequestPartitionId(partition2),
new PartitionedUrl().setUrl("Patient?").setRequestPartitionId(partition1),
new PartitionedUrl().setUrl("Patient?").setRequestPartitionId(partition2)
), 5),
// 10. includes final Observations from partitions 1 and 2 and Patients from partition 2
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?status=final").setRequestPartitionId(partition1),
new PartitionedUrl().setUrl("Observation?status=final").setRequestPartitionId(partition2),
new PartitionedUrl().setUrl("Patient?").setRequestPartitionId(partition2)
), 3),
// 11. includes final Observations and Patients from partition 1
Arguments.of(
List.of(
new PartitionedUrl().setUrl("Observation?status=final").setRequestPartitionId(partition1),
new PartitionedUrl().setUrl("Patient?").setRequestPartitionId(partition1)
), 2)
);
}
@ParameterizedTest
@MethodSource(value = "getReindexParameters")
public void testReindex_byMultipleUrlsAndPartitions_indexesMatchingResources(List<String> theUrls,
List<RequestPartitionId> thePartitions,
boolean theShouldAssignPartitionToUrl,
int theExpectedIndexedResourceCount) {
JobParameters parameters = JobParameters.from(theUrls, thePartitions, theShouldAssignPartitionToUrl);
public void testReindex_withPartitionedUrls_indexesMatchingResources(List<PartitionedUrl> thePartitionedUrls,
int theExpectedIndexedResourceCount) {
PartitionedUrlJobParameters parameters = new PartitionedUrlJobParameters();
thePartitionedUrls.forEach(parameters::addPartitionedUrl);
// execute
JobInstanceStartRequest startRequest = new JobInstanceStartRequest();


@@ -1,4 +1,4 @@
package ca.uhn.fhir.jpa.delete.job;
package ca.uhn.fhir.jpa.reindex;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.jpa.api.dao.DaoRegistry;


@@ -61,6 +61,7 @@ import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Answers;
import org.mockito.Mock;
import org.mockito.Spy;
import org.mockito.junit.jupiter.MockitoExtension;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


@@ -61,7 +61,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
delta.addRootConcept("RootA", "Root A");
delta.addRootConcept("RootB", "Root B");
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
"RootB seq=0"
);
@@ -70,7 +70,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
delta.addRootConcept("RootC", "Root C");
delta.addRootConcept("RootD", "Root D");
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
"RootB seq=0",
"RootC seq=0",
@@ -104,7 +104,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
ourLog.info("Starting testAddHierarchyConcepts");
createNotPresentCodeSystem();
assertHierarchyContains();
assertHierarchyContainsExactly();
ourLog.info("Have created code system");
runInTransaction(() -> {
@@ -117,7 +117,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
delta.addRootConcept("RootA", "Root A");
delta.addRootConcept("RootB", "Root B");
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
"RootB seq=0"
);
@@ -139,7 +139,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
myCaptureQueriesListener.logAllQueriesForCurrentThread();
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
" ChildAA seq=0",
" ChildAB seq=1",
@@ -151,7 +151,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
@Test
public void testAddMoveConceptFromOneParentToAnother() {
createNotPresentCodeSystem();
assertHierarchyContains();
assertHierarchyContainsExactly();
UploadStatistics outcome;
CustomTerminologySet delta;
@@ -162,7 +162,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAAA").setDisplay("Child AAA");
delta.addRootConcept("RootB", "Root B");
outcome = myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
" ChildAA seq=0",
" ChildAAA seq=0",
@@ -174,7 +174,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
delta.addRootConcept("RootB", "Root B")
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAA").setDisplay("Child AA");
outcome = myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
" ChildAA seq=0",
" ChildAAA seq=0",
@@ -195,7 +195,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
@Test
public void testReAddingConceptsDoesntRecreateExistingLinks() {
createNotPresentCodeSystem();
assertHierarchyContains();
assertHierarchyContainsExactly();
UploadStatistics outcome;
CustomTerminologySet delta;
@@ -206,7 +206,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
delta.addRootConcept("RootA", "Root A")
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAA").setDisplay("Child AA");
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
" ChildAA seq=0"
);
@@ -223,7 +223,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAA").setDisplay("Child AA")
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAAA").setDisplay("Child AAA");
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
" ChildAA seq=0",
" ChildAAA seq=0"
@@ -242,7 +242,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAAA").setDisplay("Child AAA")
.addChild(TermConceptParentChildLink.RelationshipTypeEnum.ISA).setCode("ChildAAAA").setDisplay("Child AAAA");
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo/cs", delta);
assertHierarchyContains(
assertHierarchyContainsExactly(
"RootA seq=0",
" ChildAA seq=0",
" ChildAAA seq=0",
@@ -293,7 +293,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo", set);
// Check so far
assertHierarchyContains(
assertHierarchyContainsExactly(
"ParentA seq=0",
" ChildA seq=0"
);
@@ -306,7 +306,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo", set);
// Check so far
assertHierarchyContains(
assertHierarchyContainsExactly(
"ParentA seq=0",
" ChildA seq=0",
" ChildAA seq=0"
@@ -331,7 +331,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo", set);
// Check so far
assertHierarchyContains(
assertHierarchyContainsExactly(
"ParentA seq=0",
" ChildA seq=0"
);
@@ -344,7 +344,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
myTermCodeSystemStorageSvc.applyDeltaCodeSystemsAdd("http://foo", set);
// Check so far
assertHierarchyContains(
assertHierarchyContainsExactly(
"ParentA seq=0",
" ChildA seq=0",
" ChildAA seq=0"
@@ -416,7 +416,7 @@ public class TerminologySvcDeltaR4Test extends BaseJpaR4Test {
expectedHierarchy.add(expected);
}
assertHierarchyContains(expectedHierarchy.toArray(new String[0]));
assertHierarchyContainsExactly(expectedHierarchy.toArray(new String[0]));
}


@@ -149,8 +149,8 @@ public class FhirSystemDaoTransactionR5Test extends BaseJpaR5Test {
assertEquals(theMatchUrlCacheEnabled ? 3 : 4, myCaptureQueriesListener.countSelectQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countInsertQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(0, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(4, myCaptureQueriesListener.countUpdateQueriesForCurrentThread());
assertEquals(4, myCaptureQueriesListener.countDeleteQueriesForCurrentThread());
assertEquals(1, myCaptureQueriesListener.countCommits());
assertEquals(0, myCaptureQueriesListener.countRollbacks());


@@ -1,5 +1,6 @@
package ca.uhn.fhir.jpa.provider.r5;
import ca.uhn.fhir.batch2.jobs.parameters.PartitionedUrl;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexAppCtx;
import ca.uhn.fhir.batch2.jobs.reindex.ReindexJobParameters;
import ca.uhn.fhir.batch2.model.JobInstanceStartRequest;
@@ -371,7 +372,7 @@ public class ResourceProviderR5Test extends BaseResourceProviderR5Test {
// do a reindex
ReindexJobParameters jobParameters = new ReindexJobParameters();
jobParameters.setRequestPartitionId(RequestPartitionId.allPartitions());
jobParameters.addPartitionedUrl(new PartitionedUrl().setRequestPartitionId(RequestPartitionId.allPartitions()));
JobInstanceStartRequest request = new JobInstanceStartRequest();
request.setJobDefinitionId(ReindexAppCtx.JOB_REINDEX);
request.setParameters(jobParameters);
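Assembled from the fragments above, the caller-side migration is small: partition selection moves off ReindexJobParameters and onto each PartitionedUrl. A minimal sketch of starting an all-partitions reindex with the new API (this assumes the startInstance(RequestDetails, JobInstanceStartRequest) overload of IJobCoordinator, and the imports shown elsewhere in this diff):
ReindexJobParameters jobParameters = new ReindexJobParameters();
// the partition now travels on the PartitionedUrl rather than on the job parameters themselves
jobParameters.addPartitionedUrl(new PartitionedUrl().setRequestPartitionId(RequestPartitionId.allPartitions()));
JobInstanceStartRequest request = new JobInstanceStartRequest();
request.setJobDefinitionId(ReindexAppCtx.JOB_REINDEX);
request.setParameters(jobParameters);
myJobCoordinator.startInstance(new SystemRequestDetails(), request);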


@@ -1,6 +1,5 @@
package ca.uhn.fhir.jpa.search.reindex;
import static org.junit.jupiter.api.Assertions.assertEquals;
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.jpa.api.config.JpaStorageSettings;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
@@ -17,22 +16,22 @@ import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.model.entity.SearchParamPresentEntity;
import ca.uhn.fhir.jpa.searchparam.extractor.ResourceIndexedSearchParams;
import ca.uhn.fhir.test.utilities.HtmlUtil;
import org.htmlunit.html.HtmlPage;
import org.htmlunit.html.HtmlTable;
import jakarta.annotation.Nonnull;
import org.hl7.fhir.r4.model.IdType;
import org.hl7.fhir.r4.model.Parameters;
import org.hl7.fhir.r4.model.StringType;
import org.htmlunit.html.HtmlPage;
import org.htmlunit.html.HtmlTable;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import jakarta.annotation.Nonnull;
import java.io.IOException;
import java.math.BigDecimal;
import java.util.Collections;
import java.util.Date;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertEquals;
/**
* Tests the narrative generation in {@link InstanceReindexServiceImpl}. This is a separate test
@@ -135,7 +134,7 @@ public class InstanceReindexServiceImplNarrativeR5Test {
public void testIndexResourceLink() throws IOException {
// Setup
ResourceIndexedSearchParams newParams = newParams();
newParams.myLinks.add(ResourceLink.forLocalReference("Observation.subject", myEntity, "Patient", 123L, "123", new Date(), 555L));
newParams.myLinks.add(getResourceLinkForLocalReference());
// Test
Parameters outcome = mySvc.buildIndexResponse(newParams(), newParams, true, Collections.emptyList());
@@ -311,4 +310,19 @@ public class InstanceReindexServiceImplNarrativeR5Test {
return ResourceIndexedSearchParams.withSets();
}
private ResourceLink getResourceLinkForLocalReference() {
ResourceLink.ResourceLinkForLocalReferenceParams params = ResourceLink.ResourceLinkForLocalReferenceParams
.instance()
.setSourcePath("Observation.subject")
.setSourceResource(myEntity)
.setTargetResourceType("Patient")
.setTargetResourcePid(123L)
.setTargetResourceId("123")
.setUpdated(new Date())
.setTargetResourceVersion(555L);
return ResourceLink.forLocalReference(params);
}
}


@@ -71,7 +71,7 @@ public class InstanceReindexServiceImplR5Test extends BaseJpaR5Test {
.map(t -> t.getName() + " " + getPartValue("Action", t) + " " + getPartValue("Type", t) + " " + getPartValue("Missing", t))
.sorted()
.toList();
assertThat(indexInstances).as(indexInstances.toString()).containsExactly("_id NO_CHANGE Token true", "active NO_CHANGE Token true", "address NO_CHANGE String true", "address-city NO_CHANGE String true", "address-country NO_CHANGE String true", "address-postalcode NO_CHANGE String true", "address-state NO_CHANGE String true", "address-use NO_CHANGE Token true", "birthdate NO_CHANGE Date true", "death-date NO_CHANGE Date true", "email NO_CHANGE Token true", "gender NO_CHANGE Token true", "general-practitioner NO_CHANGE Reference true", "identifier NO_CHANGE Token true", "language NO_CHANGE Token true", "link NO_CHANGE Reference true", "organization NO_CHANGE Reference true", "part-agree NO_CHANGE Reference true", "phone NO_CHANGE Token true", "telecom NO_CHANGE Token true");
assertThat(indexInstances).as(indexInstances.toString()).containsExactly("active NO_CHANGE Token true", "address NO_CHANGE String true", "address-city NO_CHANGE String true", "address-country NO_CHANGE String true", "address-postalcode NO_CHANGE String true", "address-state NO_CHANGE String true", "address-use NO_CHANGE Token true", "birthdate NO_CHANGE Date true", "death-date NO_CHANGE Date true", "email NO_CHANGE Token true", "gender NO_CHANGE Token true", "general-practitioner NO_CHANGE Reference true", "identifier NO_CHANGE Token true", "language NO_CHANGE Token true", "link NO_CHANGE Reference true", "organization NO_CHANGE Reference true", "part-agree NO_CHANGE Reference true", "phone NO_CHANGE Token true", "telecom NO_CHANGE Token true");
}


@@ -106,10 +106,15 @@ public class HapiEmbeddedDatabasesExtension implements AfterAllCallback {
try {
myDatabaseInitializerHelper.insertPersistenceTestData(getEmbeddedDatabase(theDriverType), theVersionEnum);
} catch (Exception theE) {
ourLog.info(
"Could not insert persistence test data most likely because we don't have any for version {} and driver {}",
theVersionEnum,
theDriverType);
if (theE.getMessage().contains("Error loading file: migration/releases/")) {
ourLog.info(
"Could not insert persistence test data most likely because we don't have any for version {} and driver {}",
theVersionEnum,
theDriverType);
} else {
// rethrow SQL execution exceptions
throw theE;
}
}
}


@@ -656,7 +656,11 @@ public abstract class QuantitySearchParameterTestCases implements ITestDataBuild
.getIdPart(); // 70_000
// this search is not freetext because there is no freetext-known parameter name
List<String> allIds = myTestDaoSearch.searchForIds("/Observation?_sort=value-quantity");
// a value-quantity clause was added here because an empty search would go through JPA search,
// which does not support normalized quantity sorting
List<String> allIds =
myTestDaoSearch.searchForIds("/Observation?value-quantity=ge0&_sort=value-quantity");
assertThat(allIds).containsExactly(idAlpha2, idAlpha1, idAlpha3);
}
}


@@ -740,7 +740,7 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
dao.update(resourceParsed);
}
protected void assertHierarchyContains(String... theStrings) {
protected void assertHierarchyContainsExactly(String... theStrings) {
List<String> hierarchy = runInTransaction(() -> {
List<String> hierarchyHolder = new ArrayList<>();
TermCodeSystem codeSystem = myTermCodeSystemDao.findAll().iterator().next();


@@ -150,6 +150,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static ca.uhn.fhir.rest.api.Constants.HEADER_CACHE_CONTROL;
import static ca.uhn.fhir.util.TestUtil.doRandomizeLocaleAndTimezone;
import static java.util.stream.Collectors.joining;
import static org.awaitility.Awaitility.await;
@@ -428,6 +429,7 @@ public abstract class BaseJpaTest extends BaseTest {
when(mySrd.getInterceptorBroadcaster()).thenReturn(mySrdInterceptorService);
when(mySrd.getUserData()).thenReturn(new HashMap<>());
when(mySrd.getHeaders(eq(JpaConstants.HEADER_META_SNAPSHOT_MODE))).thenReturn(new ArrayList<>());
when(mySrd.getHeaders(eq(HEADER_CACHE_CONTROL))).thenReturn(new ArrayList<>());
// TODO enforce strict mocking everywhere
lenient().when(mySrd.getServer().getDefaultPageSize()).thenReturn(null);
lenient().when(mySrd.getServer().getMaximumPageSize()).thenReturn(null);

Some files were not shown because too many files have changed in this diff.