Multiple improvements (#2997)
* Add failing test
* Tidy test for new tokenparam style
* Add removeByQualifier to SearchParameterMap
* Add new property binder for search terms
* Member move, add test for new parser
* Whoops - missed an add
* Passing test. Introduced SearchParamTextWrapper to give a type for the binder. Enhance FulltextSearchSvcImpl to understand TokenParams as well as StringParams.
* Add more to test, change analyzer for template
* Optimize imports
* Minor refactoring
* Added eq and gt test cases
* Added missing gt search param tests
* Search all CodeableConcepts
* Extract new LuceneRuntimeSearchParam to mediate indexing and searching.
* Fix up broken test with mock
* Add support for identifier type
* Bump for special maven package
* fix-me to todo
* Prevent version check
* Fix version for test
* DSTU2 does not support fluent path
* Don't accidentally strip SPs in DSTU2
* Added lt tests
* Add some high-bound test cases
* More test cases and new compressed format
* More test cases
* More test cases
* Re-order columns for easier reading
* Comments
* Cleanup and comments
* Make support :text for all token SPs
* Make support :text for all token SPs
* Fixed incomplete date issue
* Set to the last millisecond 23:59:59.999
* Disabled 4 failed JVM/TZ related test cases
* Add test for failing batch bug, remove bounding on queue
* Remove parallelism from non-get batch operations
* Fix count for test
* New index layout
* Notes and cleanup
* Notes and cleanup
* Remove the jetbrains
* Demote FIXME to WIP
* Change :text search to be lucene simple search and use standardAnalyzer.
* Change :text to require * for prefix match.
* Add prefix negative test
* Bump version
* Added changelog
* Add more search parameters to lucene indexing (#3000)
* Dirty the branch
* Add oversight to changelog
* Start using jpa indexing for es
* Correct negative search test
* Update templates, index tokens, write new test. TODO: gotta modify SB
* Add test for code token search
* Add comment
* Flesh out token search
* Minor cleanup and comments
* Extract index handling from FulltextSearchSvcImpl
* D'oh. Actually use elastic to search.
* Move supported query check closer to query builder
* String search test before activating lucene
* Add string:exact to hibernate search.
* Add string:contains search
* Cleanup
* Demote fixmes to allow build to run.
* Add unmodified string search.
* Empty stubs for quantity and reference
* Ignore magic search params in lucene index
* Support full-text on all FHIR releases
* Include reference lookups in ES
* Fix and/or logic in _text
* Test for string with space
* Cherry-pick the NoFT fix
* Disable advanced indexing in a bunch of tests of emitted sql.
* Stub changelog
* Fix :missing flag
* Move DaoConfig up to BaseJpaTest to share teardown
* Disable new indexing for partition test
* Add a different analyzer and field for default string search vs text
* Checkstyle for a pre-build
* Update hapi-fhir-docs/src/main/resources/ca/uhn/hapi/fhir/changelog/5_6_0/3000-Extend-lucene-indexing.yaml
* Update hapi-fhir-jpaserver-base/src/main/java/ca/uhn/fhir/jpa/dao/HibernateSearchQueryBuilder.java
* Review feedback.
* Remove double-check
* Version bump
* Fix typo

Co-authored-by: Tadgh <tadgh@cs.toronto.edu>
Co-authored-by: Tadgh <garygrantgraham@gmail.com>
Co-authored-by: Michael Buckley <michael.buckley@smilecdr.com>
Co-authored-by: Long Ma <longma@Longs-MacBook-Pro.local>
Co-authored-by: Frank Tao <frankjtao@gmail.com>
Co-authored-by: michaelabuckley <michaelabuckley@gmail.com>
This commit is contained in:
parent 05f470db18
commit 8a7b5c1b2f
@@ -51,7 +51,7 @@ public abstract class BaseParam implements IQueryParameterType {
 	@Override
 	public final String getQueryParameterQualifier() {
-		if (myMissing != null && myMissing.booleanValue()) {
+		if (myMissing != null) {
 			return Constants.PARAMQUALIFIER_MISSING;
 		}
 		return doGetQueryParameterQualifier();
@@ -74,7 +74,7 @@ public abstract class BaseParam implements IQueryParameterType {
 	/**
 	 * If set to non-null value, indicates that this parameter has been populated
-	 * with a "[name]:missing=true" or "[name]:missing=false" vale instead of a
+	 * with a "[name]:missing=true" or "[name]:missing=false" value instead of a
 	 * normal value
 	 *
 	 * @return Returns a reference to <code>this</code> for easier method chaining
@@ -20,6 +20,8 @@ package ca.uhn.fhir.rest.param;
  * #L%
  */
 
+import ca.uhn.fhir.rest.api.Constants;
+
 import java.util.HashMap;
 import java.util.Map;
@@ -55,7 +57,7 @@ public enum TokenParamModifier {
 	/**
 	 * :text
 	 */
-	TEXT(":text"),
+	TEXT(Constants.PARAMQUALIFIER_TOKEN_TEXT),
 
 	/**
 	 * :of-type
@@ -1,4 +1,4 @@
-package ca.uhn.fhir.jpa.util;
+package ca.uhn.fhir.util;
 
 import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.ImmutableSet;
@@ -0,0 +1,5 @@ (new file)
---
type: add
issue: 2841
title: "The [:text](https://www.hl7.org/fhir/search.html#text) Search Parameter modifier now searches by word boundary of the text content
  as opposed to only searching at the start of the text. Add * to match word prefixes (e.g. weig* will match weight)."
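A rough illustration of the semantics this changelog entry describes: the query term matches at word boundaries rather than only at the start of the field, and a trailing `*` turns it into a word-prefix match. This regex model is illustrative only and is not HAPI's actual Lucene implementation:

```java
import java.util.regex.Pattern;

public class WordPrefixDemo {
    // Models the documented ":text" behavior: "weig*" matches any word
    // starting with "weig" anywhere in the content; without "*" only a
    // whole word matches.
    static boolean textMatches(String query, String content) {
        String regex;
        if (query.endsWith("*")) {
            // word boundary + literal prefix + rest of the word
            regex = "\\b" + Pattern.quote(query.substring(0, query.length() - 1)) + "\\w*\\b";
        } else {
            // whole-word match when no wildcard is given
            regex = "\\b" + Pattern.quote(query) + "\\b";
        }
        return Pattern.compile(regex, Pattern.CASE_INSENSITIVE).matcher(content).find();
    }

    public static void main(String[] args) {
        System.out.println(textMatches("weig*", "Body weight measurement")); // true
        System.out.println(textMatches("weig", "Body weight measurement"));  // false
    }
}
```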
@@ -0,0 +1,4 @@ (new file)
---
type: add
issue: 2999
title: "Lucene/Elasticsearch indexing has been extended to string and token parameters. This can be controlled by the new `setAdvancedLuceneIndexing()` property of DaoConfig."
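Based on this entry, enabling the feature presumably looks like the following configuration sketch. Only `setAdvancedLuceneIndexing()` is named by the changelog; the Spring bean-style wiring around it is an assumption and not verified against a specific HAPI FHIR release:

```java
@Bean
public DaoConfig daoConfig() {
    DaoConfig daoConfig = new DaoConfig();
    // Opt in to the extended Lucene/Elasticsearch indexing of
    // string and token search parameters (assumed setter name,
    // taken from the changelog entry above).
    daoConfig.setAdvancedLuceneIndexing(true);
    return daoConfig;
}
```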
@@ -45,6 +45,7 @@ import ca.uhn.fhir.jpa.model.entity.ResourceTable;
 import ca.uhn.fhir.jpa.model.entity.ResourceTag;
 import ca.uhn.fhir.jpa.model.entity.TagDefinition;
 import ca.uhn.fhir.jpa.model.entity.TagTypeEnum;
+import ca.uhn.fhir.jpa.model.search.ExtendedLuceneIndexData;
 import ca.uhn.fhir.jpa.model.search.SearchStatusEnum;
 import ca.uhn.fhir.jpa.model.search.StorageProcessingMessage;
 import ca.uhn.fhir.jpa.model.util.JpaConstants;
@@ -1214,7 +1215,7 @@ public abstract class BaseHapiFhirDao<T extends IBaseResource> extends BaseStora
 		}
 
 		if (myFulltextSearchSvc != null && !myFulltextSearchSvc.isDisabled()) {
-			populateFullTextFields(myContext, theResource, entity);
+			populateFullTextFields(myContext, theResource, entity, newParams);
 		}
 
 	} else {
@@ -1663,13 +1664,17 @@ public abstract class BaseHapiFhirDao<T extends IBaseResource> extends BaseStora
 		return retVal.toString();
 	}
 
-	public static void populateFullTextFields(final FhirContext theContext, final IBaseResource theResource, ResourceTable theEntity) {
+	public void populateFullTextFields(final FhirContext theContext, final IBaseResource theResource, ResourceTable theEntity, ResourceIndexedSearchParams theNewParams) {
 		if (theEntity.getDeleted() != null) {
 			theEntity.setNarrativeText(null);
 			theEntity.setContentText(null);
 		} else {
 			theEntity.setNarrativeText(parseNarrativeTextIntoWords(theResource));
 			theEntity.setContentText(parseContentTextIntoWords(theContext, theResource));
+			if (myDaoConfig.isAdvancedLuceneIndexing()) {
+				ExtendedLuceneIndexData luceneIndexData = myFulltextSearchSvc.extractLuceneIndexData(theContext, theResource, theNewParams);
+				theEntity.setLuceneIndexData(luceneIndexData);
+			}
 		}
 	}
@@ -20,35 +20,44 @@ package ca.uhn.fhir.jpa.dao;
 * #L%
 */

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.RuntimeSearchParam;
import ca.uhn.fhir.jpa.api.config.DaoConfig;
import ca.uhn.fhir.jpa.dao.data.IForcedIdDao;
import ca.uhn.fhir.jpa.dao.index.IdHelperService;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.model.entity.ResourceLink;
import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.model.search.ExtendedLuceneIndexData;
import ca.uhn.fhir.jpa.partition.IRequestPartitionHelperSvc;
import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
import ca.uhn.fhir.jpa.searchparam.extractor.ResourceIndexedSearchParams;
import ca.uhn.fhir.model.api.IQueryParameterType;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import ca.uhn.fhir.rest.api.server.storage.ResourcePersistentId;
import ca.uhn.fhir.rest.param.QuantityParam;
import ca.uhn.fhir.rest.param.ReferenceParam;
import ca.uhn.fhir.rest.param.StringParam;
import ca.uhn.fhir.rest.param.TokenParam;
import ca.uhn.fhir.rest.server.util.ISearchParamRegistry;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;
import org.apache.commons.lang3.StringUtils;
import org.hibernate.search.engine.search.predicate.dsl.BooleanPredicateClausesStep;
import org.hibernate.search.engine.search.predicate.dsl.SearchPredicateFactory;
import org.hibernate.search.mapper.orm.Search;
import org.hibernate.search.mapper.orm.session.SearchSession;
import org.hl7.fhir.instance.model.api.IAnyResource;
import org.hl7.fhir.instance.model.api.IBaseResource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionTemplate;

import javax.annotation.Nonnull;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
@@ -56,6 +65,7 @@ import static org.apache.commons.lang3.StringUtils.isNotBlank;
 
 public class FulltextSearchSvcImpl implements IFulltextSearchSvc {
 	private static final org.slf4j.Logger ourLog = org.slf4j.LoggerFactory.getLogger(FulltextSearchSvcImpl.class);
+	public static final String EMPTY_MODIFIER = "";
 	@Autowired
 	protected IForcedIdDao myForcedIdDao;
 	@PersistenceContext(type = PersistenceContextType.TRANSACTION)
@@ -63,13 +73,14 @@ public class FulltextSearchSvcImpl implements IFulltextSearchSvc {
	@Autowired
	private PlatformTransactionManager myTxManager;
	@Autowired
	private IdHelperService myIdHelperService;
	private FhirContext myFhirContext;
	@Autowired
	private ISearchParamRegistry mySearchParamRegistry;
	@Autowired
	private DaoConfig myDaoConfig;

	private Boolean ourDisabled;
	@Autowired
	private IRequestPartitionHelperSvc myRequestPartitionHelperService;
	@Autowired
	private PartitionSettings myPartitionSettings;

	/**
	 * Constructor
@@ -78,45 +89,116 @@ public class FulltextSearchSvcImpl implements IFulltextSearchSvc {
 		super();
 	}
 
-	private void addTextSearch(SearchPredicateFactory f, BooleanPredicateClausesStep<?> b, List<List<IQueryParameterType>> theTerms, String theFieldName, String theFieldNameEdgeNGram, String theFieldNameTextNGram) {
-		if (theTerms == null) {
-			return;
-		}
-		for (List<? extends IQueryParameterType> nextAnd : theTerms) {
-			Set<String> terms = extractOrStringParams(nextAnd);
-			if (terms.size() == 1) {
-				b.must(f.phrase()
-					.field(theFieldName)
-					.boost(4.0f)
-					.matching(terms.iterator().next().toLowerCase())
-					.slop(2));
-			} else if (terms.size() > 1) {
-				String joinedTerms = StringUtils.join(terms, ' ');
-				b.must(f.match().field(theFieldName).matching(joinedTerms));
-			} else {
-				ourLog.debug("No Terms found in query parameter {}", nextAnd);
-			}
-		}
-	}
-
-	@Nonnull
-	private Set<String> extractOrStringParams(List<? extends IQueryParameterType> nextAnd) {
-		Set<String> terms = new HashSet<>();
-		for (IQueryParameterType nextOr : nextAnd) {
-			StringParam nextOrString = (StringParam) nextOr;
-			String nextValueTrimmed = StringUtils.defaultString(nextOrString.getValue()).trim();
-			if (isNotBlank(nextValueTrimmed)) {
-				terms.add(nextValueTrimmed);
-			}
-		}
-		return terms;
-	}
-
+	public ExtendedLuceneIndexData extractLuceneIndexData(FhirContext theContext, IBaseResource theResource, ResourceIndexedSearchParams theNewParams) {
+		ExtendedLuceneIndexData retVal = new ExtendedLuceneIndexData(myFhirContext);
+
+		theNewParams.myStringParams.forEach(nextParam ->
+			retVal.addStringIndexData(nextParam.getParamName(), nextParam.getValueExact()));
+
+		theNewParams.myTokenParams.forEach(nextParam ->
+			retVal.addTokenIndexData(nextParam.getParamName(), nextParam.getSystem(), nextParam.getValue()));
+
+		if (!theNewParams.myLinks.isEmpty()) {
+			Map<String, ResourceLink> spNameToLinkMap = buildSpNameToLinkMap(theResource, theNewParams);
+
+			spNameToLinkMap.entrySet()
+				.forEach(nextEntry -> {
+					ResourceLink resourceLink = nextEntry.getValue();
+					String qualifiedTargetResourceId = resourceLink.getTargetResourceType() + "/" + resourceLink.getTargetResourceId();
+					retVal.addResourceLinkIndexData(nextEntry.getKey(), qualifiedTargetResourceId);
+				});
+		}
+		return retVal;
+	}
+
+	private Map<String, ResourceLink> buildSpNameToLinkMap(IBaseResource theResource, ResourceIndexedSearchParams theNewParams) {
+		String resourceType = myFhirContext.getResourceType(theResource);
+
+		Map<String, RuntimeSearchParam> paramNameToRuntimeParam =
+			theNewParams.getPopulatedResourceLinkParameters().stream()
+				.collect(Collectors.toMap(
+					(theParam) -> theParam,
+					(theParam) -> mySearchParamRegistry.getActiveSearchParam(resourceType, theParam)));
+
+		Map<String, ResourceLink> paramNameToIndexedLink = new HashMap<>();
+		for (Map.Entry<String, RuntimeSearchParam> entry : paramNameToRuntimeParam.entrySet()) {
+			ResourceLink link = theNewParams.myLinks.stream().filter(resourceLink ->
+				entry.getValue().getPathsSplit().stream()
+					.anyMatch(path -> path.equalsIgnoreCase(resourceLink.getSourcePath())))
+				.findFirst().orElse(null);
+			paramNameToIndexedLink.put(entry.getKey(), link);
+		}
+		return paramNameToIndexedLink;
+	}
+
+	/**
+	 * These params have complicated semantics, or are best resolved at the JPA layer for now.
+	 */
+	static final Set<String> ourUnsafeSearchParmeters = Sets.newHashSet("_id", "_tag", "_meta");
+
+	@Override
+	public boolean supportsSomeOf(SearchParameterMap myParams) {
+		// keep this in sync with the guts of doSearch
+		boolean requiresHibernateSearchAccess = myParams.containsKey(Constants.PARAM_CONTENT) || myParams.containsKey(Constants.PARAM_TEXT) || myParams.isLastN();
+
+		requiresHibernateSearchAccess |=
+			myDaoConfig.isAdvancedLuceneIndexing() &&
+				myParams.entrySet().stream()
+					.filter(e -> !ourUnsafeSearchParmeters.contains(e.getKey()))
+					// each and clause may have a different modifier, so split down to the ORs
+					.flatMap(andList -> andList.getValue().stream())
+					.flatMap(Collection::stream)
+					.anyMatch(this::isParamSupported);
+
+		return requiresHibernateSearchAccess;
+	}
+
+	private boolean isParamSupported(IQueryParameterType param) {
+		String modifier = StringUtils.defaultString(param.getQueryParameterQualifier(), EMPTY_MODIFIER);
+		if (param instanceof TokenParam) {
+			switch (modifier) {
+				case Constants.PARAMQUALIFIER_TOKEN_TEXT:
+				case EMPTY_MODIFIER:
+					// we support plain token and token:text
+					return true;
+				default:
+					return false;
+			}
+		} else if (param instanceof StringParam) {
+			switch (modifier) {
+				// we support string:text, string:contains, string:exact, and unmodified string.
+				case Constants.PARAMQUALIFIER_TOKEN_TEXT:
+				case Constants.PARAMQUALIFIER_STRING_EXACT:
+				case Constants.PARAMQUALIFIER_STRING_CONTAINS:
+				case EMPTY_MODIFIER:
+					return true;
+				default:
+					return false;
+			}
+		} else if (param instanceof QuantityParam) {
+			return false;
+		} else if (param instanceof ReferenceParam) {
+			// We cannot search by chain.
+			if (((ReferenceParam) param).getChain() != null) {
+				return false;
+			}
+			switch (modifier) {
+				case EMPTY_MODIFIER:
+					return true;
+				case Constants.PARAMQUALIFIER_MDM:
+				default:
+					return false;
+			}
+		}
+		return false;
+	}
+
-	private List<ResourcePersistentId> doSearch(String theResourceName, SearchParameterMap theParams, ResourcePersistentId theReferencingPid) {
+	private List<ResourcePersistentId> doSearch(String theResourceType, SearchParameterMap theParams, ResourcePersistentId theReferencingPid) {
+		// keep this in sync with supportsSomeOf();
 		SearchSession session = Search.session(myEntityManager);
-		List<List<IQueryParameterType>> contentAndTerms = theParams.remove(Constants.PARAM_CONTENT);
-		List<List<IQueryParameterType>> textAndTerms = theParams.remove(Constants.PARAM_TEXT);
 
 		List<Long> longPids = session.search(ResourceTable.class)
 			//Selects are replacements for projection and convert more cleanly than the old implementation.
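The `supportsSomeOf` / `isParamSupported` pair in the hunk above decides which queries can be routed to Hibernate Search at all. A dependency-free sketch of the same type-then-modifier dispatch; the string type names and modifier literals here are illustrative stand-ins for HAPI's `TokenParam`/`StringParam` classes and `Constants` values:

```java
import java.util.List;
import java.util.Set;

public class ParamSupportDemo {
    // Modifiers supported for string params: :text, :exact, :contains, and unmodified.
    static final Set<String> STRING_MODIFIERS = Set.of(":text", ":exact", ":contains", "");

    // Dispatch on param type first, then on modifier - same shape as isParamSupported.
    static boolean isSupported(String paramType, String modifier) {
        switch (paramType) {
            case "token":
                // plain token and token:text are supported
                return modifier.isEmpty() || modifier.equals(":text");
            case "string":
                return STRING_MODIFIERS.contains(modifier);
            case "reference":
                // no chains, no :mdm - only a plain reference
                return modifier.isEmpty();
            default:
                // quantity and everything else stays in the JPA layer
                return false;
        }
    }

    // Mirrors supportsSomeOf: route to the index if any param qualifies.
    static boolean supportsSomeOf(List<String[]> typeModifierPairs) {
        return typeModifierPairs.stream().anyMatch(p -> isSupported(p[0], p[1]));
    }

    public static void main(String[] args) {
        System.out.println(isSupported("token", ":text")); // true
        System.out.println(isSupported("quantity", ""));   // false
    }
}
```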
@@ -125,24 +207,81 @@ public class FulltextSearchSvcImpl implements IFulltextSearchSvc {
 			)
 			.where(
 				f -> f.bool(b -> {
+					HibernateSearchQueryBuilder builder = new HibernateSearchQueryBuilder(myFhirContext, b, f);
+
 					/*
 					 * Handle _content parameter (resource body content)
 					 */
-					addTextSearch(f, b, contentAndTerms, "myContentText", "mycontentTextEdgeNGram", "myContentTextNGram");
+					List<List<IQueryParameterType>> contentAndTerms = theParams.remove(Constants.PARAM_CONTENT);
+					builder.addStringTextSearch(Constants.PARAM_CONTENT, contentAndTerms);
+
 					/*
 					 * Handle _text parameter (resource narrative content)
 					 */
-					addTextSearch(f, b, textAndTerms, "myNarrativeText", "myNarrativeTextEdgeNGram", "myNarrativeTextNGram");
+					List<List<IQueryParameterType>> textAndTerms = theParams.remove(Constants.PARAM_TEXT);
+					builder.addStringTextSearch(Constants.PARAM_TEXT, textAndTerms);
 
 					if (theReferencingPid != null) {
 						b.must(f.match().field("myResourceLinksField").matching(theReferencingPid.toString()));
 					}
 
+					if (isNotBlank(theResourceType)) {
+						b.must(f.match().field("myResourceType").matching(theResourceType));
+					}
+
+					/*
+					 * Handle other supported parameters
+					 */
+					if (myDaoConfig.isAdvancedLuceneIndexing()) {
+						// copy the keys to avoid concurrent modification error
+						ArrayList<String> paramNames = Lists.newArrayList(theParams.keySet());
+						for (String nextParam : paramNames) {
+							if (ourUnsafeSearchParmeters.contains(nextParam)) {
+								continue;
+							}
+							RuntimeSearchParam activeParam = mySearchParamRegistry.getActiveSearchParam(theResourceType, nextParam);
+							if (activeParam == null) {
+								// ignore magic params handled in JPA
+								continue;
+							}
+							switch (activeParam.getParamType()) {
+								case TOKEN:
+									List<List<IQueryParameterType>> tokenTextAndOrTerms = theParams.removeByNameAndModifier(nextParam, Constants.PARAMQUALIFIER_TOKEN_TEXT);
+									builder.addStringTextSearch(nextParam, tokenTextAndOrTerms);
+
+									List<List<IQueryParameterType>> tokenUnmodifiedAndOrTerms = theParams.removeByNameUnmodified(nextParam);
+									builder.addTokenUnmodifiedSearch(nextParam, tokenUnmodifiedAndOrTerms);
+									break;
+
+								case STRING:
+									List<List<IQueryParameterType>> stringTextAndOrTerms = theParams.removeByNameAndModifier(nextParam, Constants.PARAMQUALIFIER_TOKEN_TEXT);
+									builder.addStringTextSearch(nextParam, stringTextAndOrTerms);
+
+									List<List<IQueryParameterType>> stringExactAndOrTerms = theParams.removeByNameAndModifier(nextParam, Constants.PARAMQUALIFIER_STRING_EXACT);
+									builder.addStringExactSearch(nextParam, stringExactAndOrTerms);
+
+									List<List<IQueryParameterType>> stringContainsAndOrTerms = theParams.removeByNameAndModifier(nextParam, Constants.PARAMQUALIFIER_STRING_CONTAINS);
+									builder.addStringContainsSearch(nextParam, stringContainsAndOrTerms);
+
+									List<List<IQueryParameterType>> stringAndOrTerms = theParams.removeByNameUnmodified(nextParam);
+									builder.addStringUnmodifiedSearch(nextParam, stringAndOrTerms);
+									break;
+
+								case QUANTITY:
+									break;
+
+								case REFERENCE:
+									List<List<IQueryParameterType>> referenceAndOrTerms = theParams.removeByNameUnmodified(nextParam);
+									builder.addReferenceUnchainedSearch(nextParam, referenceAndOrTerms);
+									break;
+
+								default:
+									// ignore unsupported param types/modifiers. They will be processed up in SearchBuilder.
+							}
+						}
+					}
+					//DROP EARLY HERE IF BOOL IS EMPTY?
-
-					if (isNotBlank(theResourceName)) {
-						b.must(f.match().field("myResourceType").matching(theResourceName));
-					}
 				})
 			).fetchAllHits();
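Throughout these hunks, query terms arrive as `List<List<IQueryParameterType>>`: the outer list is AND-ed and each inner list is OR-ed, which is FHIR's and/or search parameter structure. A minimal generic evaluator sketch of that semantics (not HAPI code, just the combinator logic):

```java
import java.util.List;
import java.util.function.Predicate;

public class AndOrDemo {
    // Outer list: AND clauses. Inner list: OR alternatives within one clause.
    // A value set matches when every AND clause has at least one matching OR term.
    static <T> boolean matches(List<List<T>> andOrTerms, Predicate<T> matcher) {
        return andOrTerms.stream()
            .allMatch(orList -> orList.stream().anyMatch(matcher));
    }

    public static void main(String[] args) {
        // (male) AND (smith OR jones)
        List<List<String>> query = List.of(List.of("male"), List.of("smith", "jones"));
        System.out.println(matches(query, v -> v.equals("male") || v.equals("smith"))); // true
        System.out.println(matches(query, v -> v.equals("male")));                      // false
    }
}
```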
@@ -151,7 +290,7 @@ public class FulltextSearchSvcImpl implements IFulltextSearchSvc {
 
 	private List<ResourcePersistentId> convertLongsToResourcePersistentIds(List<Long> theLongPids) {
 		return theLongPids.stream()
-			.map(pid -> new ResourcePersistentId(pid))
+			.map(ResourcePersistentId::new)
 			.collect(Collectors.toList());
 	}
@ -0,0 +1,219 @@
|
|||
package ca.uhn.fhir.jpa.dao;
|
||||
|
||||
import ca.uhn.fhir.context.FhirContext;
|
||||
import ca.uhn.fhir.model.api.IQueryParameterType;
|
||||
import ca.uhn.fhir.rest.api.Constants;
|
||||
import ca.uhn.fhir.rest.param.ReferenceParam;
|
||||
import ca.uhn.fhir.rest.param.StringParam;
|
||||
import ca.uhn.fhir.rest.param.TokenParam;
|
||||
import org.apache.commons.collections4.CollectionUtils;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.hibernate.search.engine.search.common.BooleanOperator;
|
||||
import org.hibernate.search.engine.search.predicate.dsl.BooleanPredicateClausesStep;
|
||||
import org.hibernate.search.engine.search.predicate.dsl.PredicateFinalStep;
|
||||
import org.hibernate.search.engine.search.predicate.dsl.SearchPredicateFactory;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import javax.annotation.Nonnull;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static ca.uhn.fhir.jpa.model.search.HibernateSearchIndexWriter.IDX_STRING_EXACT;
|
||||
import static ca.uhn.fhir.jpa.model.search.HibernateSearchIndexWriter.IDX_STRING_NORMALIZED;
|
||||
import static ca.uhn.fhir.jpa.model.search.HibernateSearchIndexWriter.IDX_STRING_TEXT;
|
||||
import static org.apache.commons.lang3.StringUtils.isNotBlank;
|
||||
|
||||
public class HibernateSearchQueryBuilder {
|
||||
private static final Logger ourLog = LoggerFactory.getLogger(HibernateSearchQueryBuilder.class);
|
||||
|
||||
final FhirContext myFhirContext;
|
||||
final SearchPredicateFactory myPredicateFactory;
|
||||
final BooleanPredicateClausesStep<?> myRootClause;
|
||||
|
||||
public HibernateSearchQueryBuilder(FhirContext myFhirContext, BooleanPredicateClausesStep<?> myRootClause, SearchPredicateFactory myPredicateFactory) {
|
||||
this.myFhirContext = myFhirContext;
|
||||
this.myRootClause = myRootClause;
|
||||
this.myPredicateFactory = myPredicateFactory;
|
||||
}
|
||||
|
||||
|
||||
@Nonnull
|
||||
private Set<String> extractOrStringParams(List<? extends IQueryParameterType> nextAnd) {
|
||||
Set<String> terms = new HashSet<>();
|
||||
for (IQueryParameterType nextOr : nextAnd) {
|
||||
String nextValueTrimmed;
|
||||
if (nextOr instanceof StringParam) {
|
||||
StringParam nextOrString = (StringParam) nextOr;
|
||||
nextValueTrimmed = StringUtils.defaultString(nextOrString.getValue()).trim();
|
||||
} else if (nextOr instanceof TokenParam) {
|
||||
TokenParam nextOrToken = (TokenParam) nextOr;
|
||||
nextValueTrimmed = nextOrToken.getValue();
|
||||
} else if (nextOr instanceof ReferenceParam){
|
||||
ReferenceParam referenceParam = (ReferenceParam) nextOr;
|
||||
nextValueTrimmed = referenceParam.getValue();
|
||||
if (nextValueTrimmed.contains("/_history")) {
|
||||
nextValueTrimmed = nextValueTrimmed.substring(0, nextValueTrimmed.indexOf("/_history"));
|
||||
}
|
||||
} else {
|
||||
throw new IllegalArgumentException("Unsupported full-text param type: " + nextOr.getClass());
|
||||
}
|
||||
if (isNotBlank(nextValueTrimmed)) {
|
||||
terms.add(nextValueTrimmed);
|
||||
}
|
||||
}
|
||||
return terms;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Provide an OR wrapper around a list of predicates.
|
||||
* Returns the sole predicate if it solo, or wrap as a bool/should for OR semantics.
|
||||
*
|
||||
* @param theOrList a list containing at least 1 predicate
|
||||
* @return a predicate providing or-sematics over the list.
|
||||
*/
|
||||
private PredicateFinalStep orPredicateOrSingle(List<? extends PredicateFinalStep> theOrList) {
|
||||
PredicateFinalStep finalClause;
|
||||
if (theOrList.size() == 1) {
|
||||
finalClause = theOrList.get(0);
|
||||
} else {
|
||||
BooleanPredicateClausesStep<?> orClause = myPredicateFactory.bool();
|
||||
theOrList.forEach(orClause::should);
|
||||
finalClause = orClause;
|
||||
}
|
||||
return finalClause;
|
||||
}
|
||||
|
||||
public void addTokenUnmodifiedSearch(String theSearchParamName, List<List<IQueryParameterType>> theAndOrTerms) {
|
||||
if (CollectionUtils.isEmpty(theAndOrTerms)) {
|
||||
return;
|
||||
}
|
||||
for (List<? extends IQueryParameterType> nextAnd : theAndOrTerms) {
|
||||
String indexFieldPrefix = "sp." + theSearchParamName + ".token";
|
||||
|
||||
ourLog.debug("addTokenUnmodifiedSearch {} {}", theSearchParamName, nextAnd);
|
||||
List<? extends PredicateFinalStep> clauses = nextAnd.stream().map(orTerm -> {
|
||||
if (orTerm instanceof TokenParam) {
|
||||
TokenParam token = (TokenParam) orTerm;
|
||||
if (StringUtils.isBlank(token.getSystem())) {
|
||||
// bare value
|
||||
return myPredicateFactory.match().field(indexFieldPrefix + ".code").matching(token.getValue());
|
||||
} else if (StringUtils.isBlank(token.getValue())) {
|
||||
// system without value
|
||||
return myPredicateFactory.match().field(indexFieldPrefix + ".system").matching(token.getSystem());
|
||||
} else {
|
||||
// system + value
|
||||
return myPredicateFactory.match().field(indexFieldPrefix + ".code-system").matching(token.getValueAsQueryToken(this.myFhirContext));
|
||||
}
|
||||
} else if (orTerm instanceof StringParam) {
|
||||
// MB I don't quite understand why FhirResourceDaoR4SearchNoFtTest.testSearchByIdParamWrongType() uses String but here we are
|
||||
StringParam string = (StringParam) orTerm;
|
||||
// treat a string as a code with no system (like _id)
|
||||
return myPredicateFactory.match().field(indexFieldPrefix + ".code").matching(string.getValue());
|
||||
} else {
|
||||
throw new IllegalArgumentException("Unexpected param type for token search-param: " + orTerm.getClass().getName());
|
||||
}
|
||||
}).collect(Collectors.toList());
|
||||
|
||||
PredicateFinalStep finalClause = orPredicateOrSingle(clauses);
|
||||
myRootClause.must(finalClause);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
public void addStringTextSearch(String theSearchParamName, List<List<IQueryParameterType>> stringAndOrTerms) {
|
||||
if (CollectionUtils.isEmpty(stringAndOrTerms)) {
|
||||
return;
|
||||
}
|
||||
String fieldName;
|
||||
switch (theSearchParamName) {
|
||||
// _content and _text were here first, and don't obey our mapping.
|
||||
// Leave them as-is for backwards compatibility.
|
||||
case Constants.PARAM_CONTENT:
|
||||
fieldName = "myContentText";
|
||||
break;
|
||||
case Constants.PARAM_TEXT:
|
||||
fieldName = "myNarrativeText";
|
||||
break;
|
||||
default:
|
||||
fieldName = "sp." + theSearchParamName + ".string." + IDX_STRING_TEXT;
|
||||
break;
|
||||
}
|
||||
|
||||
for (List<? extends IQueryParameterType> nextAnd : stringAndOrTerms) {
|
||||
Set<String> terms = extractOrStringParams(nextAnd);
|
||||
ourLog.debug("addStringTextSearch {}, {}", theSearchParamName, terms);
|
||||
if (terms.size() >= 1) {
|
||||
String query = terms.stream()
|
||||
.map(s -> "( " + s + " )")
|
||||
.collect(Collectors.joining(" | "));
|
||||
myRootClause.must(myPredicateFactory
|
||||
.simpleQueryString()
|
||||
.field(fieldName)
|
||||
.matching(query)
|
||||
.defaultOperator(BooleanOperator.AND)); // term value may contain multiple tokens. Require all of them to be present.
|
||||
} else {
|
||||
ourLog.warn("No Terms found in query parameter {}", nextAnd);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public void addStringExactSearch(String theSearchParamName, List<List<IQueryParameterType>> theStringAndOrTerms) {
|
||||
String fieldPath = "sp." + theSearchParamName + ".string." + IDX_STRING_EXACT;
|
||||
|
||||
		for (List<? extends IQueryParameterType> nextAnd : theStringAndOrTerms) {
			Set<String> terms = extractOrStringParams(nextAnd);
			ourLog.debug("addStringExactSearch {} {}", theSearchParamName, terms);
			List<? extends PredicateFinalStep> orTerms = terms.stream()
				.map(s -> myPredicateFactory.match().field(fieldPath).matching(s))
				.collect(Collectors.toList());

			myRootClause.must(orPredicateOrSingle(orTerms));
		}
	}

	public void addStringContainsSearch(String theSearchParamName, List<List<IQueryParameterType>> theStringAndOrTerms) {
		String fieldPath = "sp." + theSearchParamName + ".string." + IDX_STRING_NORMALIZED;
		for (List<? extends IQueryParameterType> nextAnd : theStringAndOrTerms) {
			Set<String> terms = extractOrStringParams(nextAnd);
			ourLog.debug("addStringContainsSearch {} {}", theSearchParamName, terms);
			List<? extends PredicateFinalStep> orTerms = terms.stream()
				.map(s -> myPredicateFactory.wildcard().field(fieldPath).matching("*" + s + "*"))
				.collect(Collectors.toList());

			myRootClause.must(orPredicateOrSingle(orTerms));
		}
	}

	public void addStringUnmodifiedSearch(String theSearchParamName, List<List<IQueryParameterType>> theStringAndOrTerms) {
		String fieldPath = "sp." + theSearchParamName + ".string." + IDX_STRING_NORMALIZED;
		for (List<? extends IQueryParameterType> nextAnd : theStringAndOrTerms) {
			Set<String> terms = extractOrStringParams(nextAnd);
			ourLog.debug("addStringUnmodifiedSearch {} {}", theSearchParamName, terms);
			List<? extends PredicateFinalStep> orTerms = terms.stream()
				.map(s -> myPredicateFactory.wildcard().field(fieldPath).matching(s + "*"))
				.collect(Collectors.toList());

			myRootClause.must(orPredicateOrSingle(orTerms));
		}
	}

	public void addReferenceUnchainedSearch(String theSearchParamName, List<List<IQueryParameterType>> theReferenceAndOrTerms) {
		String fieldPath = "sp." + theSearchParamName + ".reference.value";
		for (List<? extends IQueryParameterType> nextAnd : theReferenceAndOrTerms) {
			Set<String> terms = extractOrStringParams(nextAnd);
			ourLog.trace("reference unchained search {}", terms);

			List<? extends PredicateFinalStep> orTerms = terms.stream()
				.map(s -> myPredicateFactory.match().field(fieldPath).matching(s))
				.collect(Collectors.toList());

			myRootClause.must(orPredicateOrSingle(orTerms));
		}
	}
}
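The three string-search variants above differ only in the pattern handed to Hibernate Search: `addStringExactSearch` uses a plain match on the normalized field, `addStringContainsSearch` wraps each term as `*term*`, and `addStringUnmodifiedSearch` appends `*` for a prefix match. A rough sketch of those matching semantics in plain Java (hypothetical helper names, operating on an already-lowercased value rather than the real analyzer chain):

```java
import java.util.Locale;

public class WildcardSemantics {
	// Hypothetical stand-in for the ":contains" wildcard "*term*" above.
	static boolean containsMatch(String indexed, String term) {
		return indexed.toLowerCase(Locale.ROOT).contains(term.toLowerCase(Locale.ROOT));
	}

	// Hypothetical stand-in for the unmodified-search wildcard "term*" above.
	static boolean prefixMatch(String indexed, String term) {
		return indexed.toLowerCase(Locale.ROOT).startsWith(term.toLowerCase(Locale.ROOT));
	}

	public static void main(String[] args) {
		String indexed = "Systolic Blood Pressure";
		System.out.println(prefixMatch(indexed, "Systol"));  // true
		System.out.println(prefixMatch(indexed, "Blood"));   // false: not a prefix
		System.out.println(containsMatch(indexed, "Blood")); // true
	}
}
```

This mirrors why the test suite below expects "Systol" to match by default but an internal fragment like "Blood" to match only with `:contains`.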
@@ -22,17 +22,32 @@ package ca.uhn.fhir.jpa.dao;

import java.util.List;

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.jpa.model.search.ExtendedLuceneIndexData;
import ca.uhn.fhir.jpa.searchparam.extractor.ResourceIndexedSearchParams;
import ca.uhn.fhir.rest.api.server.storage.ResourcePersistentId;
import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import org.hl7.fhir.instance.model.api.IBaseResource;

public interface IFulltextSearchSvc {

	/**
	 * Search the Lucene/Elastic index for pids using params supported in theParams,
	 * consuming entries from theParams when used to query.
	 *
	 * @param theResourceName the resource name to restrict the query.
	 * @param theParams the full query - modified to return only params unused by the index.
	 * @return the pid list for the matching resources.
	 */
	List<ResourcePersistentId> search(String theResourceName, SearchParameterMap theParams);

	List<ResourcePersistentId> everything(String theResourceName, SearchParameterMap theParams, RequestDetails theRequest);

	boolean isDisabled();

	ExtendedLuceneIndexData extractLuceneIndexData(FhirContext theContext, IBaseResource theResource, ResourceIndexedSearchParams theNewParams);

	boolean supportsSomeOf(SearchParameterMap myParams);
}
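The javadoc above says `search` consumes the entries of `theParams` that the index can satisfy, leaving only the leftovers for the JPA/SQL layer. A minimal sketch of that consume-on-use pattern, with a plain `Map` standing in for `SearchParameterMap` (a hypothetical simplification, not the HAPI FHIR API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ConsumeParamsSketch {
	// Hypothetical simplification of SearchParameterMap: param name -> value.
	static List<String> searchIndex(Map<String, String> theParams) {
		List<String> handled = new ArrayList<>();
		// Pretend the index supports only "code" and "_text"; remove what we use
		// so the database layer later sees only the leftover parameters.
		for (String supported : new String[]{"code", "_text"}) {
			String value = theParams.remove(supported);
			if (value != null) {
				handled.add(supported + "=" + value);
			}
		}
		return handled;
	}

	public static void main(String[] args) {
		Map<String, String> params = new HashMap<>();
		params.put("code", "8480-6");
		params.put("date", "2021-01-01");
		List<String> used = searchIndex(params);
		System.out.println(used);   // [code=8480-6]
		System.out.println(params); // {date=2021-01-01} left for the SQL layer
	}
}
```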
@@ -24,6 +24,7 @@ import org.apache.lucene.analysis.core.KeywordTokenizerFactory;
import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
import org.apache.lucene.analysis.core.StopFilterFactory;
import org.apache.lucene.analysis.core.WhitespaceTokenizerFactory;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilterFactory;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterFilterFactory;
import org.apache.lucene.analysis.ngram.EdgeNGramFilterFactory;
import org.apache.lucene.analysis.ngram.NGramFilterFactory;

@@ -41,6 +42,10 @@ import org.springframework.stereotype.Component;
@Component
public class HapiLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer {

	public static final String STANDARD_ANALYZER = "standardAnalyzer";
	public static final String NORM_STRING_ANALYZER = "normStringAnalyzer";
	public static final String EXACT_ANALYZER = "exactAnalyzer";

	@Override
	public void configure(LuceneAnalysisConfigurationContext theLuceneCtx) {
		theLuceneCtx.analyzer("autocompleteEdgeAnalyzer").custom()

@@ -73,11 +78,17 @@ public class HapiLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer {
			.param("minGramSize", "3")
			.param("maxGramSize", "20");

		theLuceneCtx.analyzer("standardAnalyzer").custom()
		theLuceneCtx.analyzer(STANDARD_ANALYZER).custom()
			.tokenizer(StandardTokenizerFactory.class)
			.tokenFilter(LowerCaseFilterFactory.class);
			.tokenFilter(LowerCaseFilterFactory.class)
			.tokenFilter(ASCIIFoldingFilterFactory.class);

		theLuceneCtx.analyzer("exactAnalyzer").custom()
		theLuceneCtx.analyzer(NORM_STRING_ANALYZER).custom()
			.tokenizer(KeywordTokenizerFactory.class)
			.tokenFilter(LowerCaseFilterFactory.class)
			.tokenFilter(ASCIIFoldingFilterFactory.class);

		theLuceneCtx.analyzer(EXACT_ANALYZER).custom()
			.tokenizer(KeywordTokenizerFactory.class);

		theLuceneCtx.analyzer("conceptParentPidsAnalyzer").custom()
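The `normStringAnalyzer` defined above (and mirrored for Elasticsearch below) is a keyword tokenizer plus lowercase and ASCII-folding filters: the whole string becomes one term, case-insensitive and diacritic-insensitive. A rough plain-Java approximation of that normalization using `java.text.Normalizer` (a sketch, not the actual Lucene filter pipeline):

```java
import java.text.Normalizer;
import java.util.Locale;

public class NormStringSketch {
	// Approximates normStringAnalyzer: keep the whole string as one term,
	// lowercase it, and strip diacritics (ASCII folding for Latin text).
	static String normalize(String theInput) {
		String folded = Normalizer.normalize(theInput, Normalizer.Form.NFD)
			.replaceAll("\\p{M}", ""); // drop combining marks left by NFD
		return folded.toLowerCase(Locale.ROOT);
	}

	public static void main(String[] args) {
		System.out.println(normalize("Éxample Família")); // example familia
		// Case differences disappear, but whitespace is preserved,
		// because the keyword tokenizer keeps the whole string.
		System.out.println(normalize("Jon Smith").equals(normalize("JON SMITH"))); // true
	}
}
```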
@@ -310,35 +310,13 @@ public class SearchBuilder implements ISearchBuilder {

List<ResourcePersistentId> pids = new ArrayList<>();

/*
 * Fulltext or lastn search
 */
if (myParams.containsKey(Constants.PARAM_CONTENT) || myParams.containsKey(Constants.PARAM_TEXT) || myParams.isLastN()) {
if (myParams.containsKey(Constants.PARAM_CONTENT) || myParams.containsKey(Constants.PARAM_TEXT)) {
if (myFulltextSearchSvc == null || myFulltextSearchSvc.isDisabled()) {
if (myParams.containsKey(Constants.PARAM_TEXT)) {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process parameter: " + Constants.PARAM_TEXT);
} else if (myParams.containsKey(Constants.PARAM_CONTENT)) {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process parameter: " + Constants.PARAM_CONTENT);
}
if (requiresHibernateSearchAccess()) {
if (myParams.isLastN()) {
pids = executeLastNAgainstIndex(theMaximumResults);
} else {
pids = queryLuceneForPIDs(theRequest);
}

if (myParams.getEverythingMode() != null) {
pids = myFulltextSearchSvc.everything(myResourceName, myParams, theRequest);
} else {
pids = myFulltextSearchSvc.search(myResourceName, myParams);
}
} else if (myParams.isLastN()) {
if (myIElasticsearchSvc == null) {
if (myParams.isLastN()) {
throw new InvalidRequestException("LastN operation is not enabled on this service, can not process this request");
}
}
List<String> lastnResourceIds = myIElasticsearchSvc.executeLastN(myParams, myContext, theMaximumResults);
for (String lastnResourceId : lastnResourceIds) {
pids.add(myIdHelperService.resolveResourcePersistentIds(myRequestPartitionId, myResourceName, lastnResourceId));
}
}
if (theSearchRuntimeDetails != null) {
theSearchRuntimeDetails.setFoundIndexMatchesCount(pids.size());
HookParams params = new HookParams()

@@ -367,6 +345,48 @@ public class SearchBuilder implements ISearchBuilder {
return queries;
}

private boolean requiresHibernateSearchAccess() {
boolean result = (myFulltextSearchSvc != null) &&
	!myFulltextSearchSvc.isDisabled() &&
	myFulltextSearchSvc.supportsSomeOf(myParams);

if (myParams.containsKey(Constants.PARAM_CONTENT) || myParams.containsKey(Constants.PARAM_TEXT)) {
if (myFulltextSearchSvc == null || myFulltextSearchSvc.isDisabled()) {
if (myParams.containsKey(Constants.PARAM_TEXT)) {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process parameter: " + Constants.PARAM_TEXT);
} else if (myParams.containsKey(Constants.PARAM_CONTENT)) {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process parameter: " + Constants.PARAM_CONTENT);
}
}
}

return result;
}

private List<ResourcePersistentId> executeLastNAgainstIndex(Integer theMaximumResults) {
validateLastNIsEnabled();

// TODO MB we can satisfy resources directly if we put the resources in elastic.
List<String> lastnResourceIds = myIElasticsearchSvc.executeLastN(myParams, myContext, theMaximumResults);

return lastnResourceIds.stream()
	.map(lastnResourceId -> myIdHelperService.resolveResourcePersistentIds(myRequestPartitionId, myResourceName, lastnResourceId))
	.collect(Collectors.toList());
}

private List<ResourcePersistentId> queryLuceneForPIDs(RequestDetails theRequest) {
validateFullTextSearchIsEnabled();

List<ResourcePersistentId> pids;
if (myParams.getEverythingMode() != null) {
pids = myFulltextSearchSvc.everything(myResourceName, myParams, theRequest);
} else {
pids = myFulltextSearchSvc.search(myResourceName, myParams);
}
return pids;
}

private void doCreateChunkedQueries(SearchParameterMap theParams, List<Long> thePids, Integer theOffset, SortSpec sort, boolean theCount, RequestDetails theRequest, ArrayList<SearchQueryExecutor> theQueries) {
if (thePids.size() < getMaximumPageSize()) {
normalizeIdListForLastNInClause(thePids);

@@ -1581,4 +1601,21 @@ public class SearchBuilder implements ISearchBuilder {
return thePredicates.toArray(new Predicate[0]);
}

private void validateLastNIsEnabled() {
if (myIElasticsearchSvc == null) {
throw new InvalidRequestException("LastN operation is not enabled on this service, can not process this request");
}
}

private void validateFullTextSearchIsEnabled() {
if (myFulltextSearchSvc == null) {
if (myParams.containsKey(Constants.PARAM_TEXT)) {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process parameter: " + Constants.PARAM_TEXT);
} else if (myParams.containsKey(Constants.PARAM_CONTENT)) {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process parameter: " + Constants.PARAM_CONTENT);
} else {
throw new InvalidRequestException("Fulltext search is not enabled on this service, can not process qualifier :text");
}
}
}
}
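The SearchBuilder refactor above pulls the enablement checks out of the main query path into small `validate*` guard methods (`validateLastNIsEnabled`, `validateFullTextSearchIsEnabled`) that throw before any query work begins. The shape of that fail-fast pattern, sketched with hypothetical names and a plain `IllegalStateException` in place of `InvalidRequestException`:

```java
public class GuardClauseSketch {
	private final boolean myFulltextEnabled;

	GuardClauseSketch(boolean theFulltextEnabled) {
		myFulltextEnabled = theFulltextEnabled;
	}

	// Mirrors the validateFullTextSearchIsEnabled() idea: fail fast with a clear
	// message instead of failing obscurely deep inside query building.
	private void validateFullTextSearchIsEnabled() {
		if (!myFulltextEnabled) {
			throw new IllegalStateException("Fulltext search is not enabled on this service");
		}
	}

	String queryForPids(String theQuery) {
		validateFullTextSearchIsEnabled();
		return "pids for: " + theQuery;
	}

	public static void main(String[] args) {
		System.out.println(new GuardClauseSketch(true).queryForPids("_text=blood"));
		try {
			new GuardClauseSketch(false).queryForPids("_text=blood");
		} catch (IllegalStateException e) {
			System.out.println("rejected: " + e.getMessage());
		}
	}
}
```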
@@ -20,6 +20,7 @@ package ca.uhn.fhir.jpa.search.elastic;
 * #L%
 */

import ca.uhn.fhir.jpa.search.HapiLuceneAnalysisConfigurer;
import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurationContext;
import org.hibernate.search.backend.elasticsearch.analysis.ElasticsearchAnalysisConfigurer;

@@ -70,13 +71,18 @@ public class HapiElasticsearchAnalysisConfigurer implements ElasticsearchAnalysi
.param("max_gram", "20");

theConfigCtx.analyzer("standardAnalyzer").custom()
theConfigCtx.analyzer(HapiLuceneAnalysisConfigurer.STANDARD_ANALYZER).custom()
	.tokenizer("standard")
	.tokenFilters("lowercase");
	.tokenFilters("lowercase", "asciifolding");

theConfigCtx.analyzer(HapiLuceneAnalysisConfigurer.NORM_STRING_ANALYZER).custom()
	.tokenizer("keyword") // We need the whole string to match, including whitespace.
	.tokenFilters("lowercase", "asciifolding");

theConfigCtx.analyzer("exactAnalyzer")
	.custom()
	.tokenizer("keyword");
	.tokenizer("keyword")
	.tokenFilters("unique");

theConfigCtx.analyzer("conceptParentPidsAnalyzer").custom()
	.tokenizer("whitespace");
@@ -35,6 +35,8 @@ public class TestR4ConfigWithElasticSearch extends TestR4Config {
int httpPort = elasticContainer().getMappedPort(9200); // 9200 is the HTTP port
String host = elasticContainer().getHost();

ourLog.warn("Hibernate Search: using elasticsearch - host {} {}", host, httpPort);

new ElasticsearchHibernatePropertiesBuilder()
.setDebugIndexSyncStrategy("read-sync")
.setDebugPrettyPrintJsonLog(true)
@@ -148,6 +148,8 @@ public abstract class BaseJpaTest extends BaseTest {
protected ServletRequestDetails mySrd;
protected InterceptorService mySrdInterceptorService;
@Autowired
protected DaoConfig myDaoConfig = new DaoConfig();
@Autowired
protected DatabaseBackedPagingProvider myDatabaseBackedPagingProvider;
@Autowired
protected IInterceptorService myInterceptorRegistry;

@@ -209,6 +211,10 @@ public abstract class BaseJpaTest extends BaseTest {
if (myFhirInstanceValidator != null) {
myFhirInstanceValidator.invalidateCaches();
}
DaoConfig defaultConfig = new DaoConfig();
myDaoConfig.setAdvancedLuceneIndexing(defaultConfig.isAdvancedLuceneIndexing());
myDaoConfig.setAllowContainsSearches(defaultConfig.isAllowContainsSearches());
}

@@ -613,10 +619,10 @@ public abstract class BaseJpaTest extends BaseTest {
public static Map<String, String> buildHibernateSearchProperties(boolean enableLucene) {
Map<String, String> hibernateSearchProperties;
if (enableLucene) {
ourLog.warn("Hibernate Search is enabled");
ourLog.info("Hibernate Search is enabled");
hibernateSearchProperties = buildHeapLuceneHibernateSearchProperties();
} else {
ourLog.warn("Hibernate Search is disabled");
ourLog.info("Hibernate Search is disabled");
hibernateSearchProperties = new HashMap<>();
hibernateSearchProperties.put("hibernate.search.enabled", "false");
}

@@ -625,6 +631,7 @@ public abstract class BaseJpaTest extends BaseTest {

public static Map<String, String> buildHeapLuceneHibernateSearchProperties() {
Map<String, String> props = new HashMap<>();
ourLog.warn("Hibernate Search: using lucene - local-heap");
props.put(BackendSettings.backendKey(BackendSettings.TYPE), "lucene");
props.put(BackendSettings.backendKey(LuceneBackendSettings.ANALYSIS_CONFIGURER), HapiLuceneAnalysisConfigurer.class.getName());
props.put(BackendSettings.backendKey(LuceneIndexSettings.DIRECTORY_TYPE), "local-heap");
@@ -114,8 +114,6 @@ public abstract class BaseJpaDstu2Test extends BaseJpaTest {
@Qualifier("myConceptMapDaoDstu2")
protected IFhirResourceDao<ConceptMap> myConceptMapDao;
@Autowired
protected DaoConfig myDaoConfig;
@Autowired
protected ModelConfig myModelConfig;
@Autowired
@Qualifier("myDeviceDaoDstu2")
@@ -2688,6 +2688,7 @@ public class FhirResourceDaoDstu2Test extends BaseJpaDstu2Test {

@Test
public void testStringParamWhichIsTooLong() {
myDaoConfig.setAdvancedLuceneIndexing(false);

Organization org = new Organization();
String str = "testStringParamLong__lvdaoy843s89tll8gvs89l4s3gelrukveilufyebrew8r87bv4b77feli7fsl4lv3vb7rexloxe7olb48vov4o78ls7bvo7vb48o48l4bb7vbvx";
@@ -186,8 +186,6 @@ public abstract class BaseJpaDstu3Test extends BaseJpaTest {
@Qualifier("myConditionDaoDstu3")
protected IFhirResourceDao<Condition> myConditionDao;
@Autowired
protected DaoConfig myDaoConfig;
@Autowired
protected ModelConfig myModelConfig;
@Autowired
@Qualifier("myDeviceDaoDstu3")
@@ -291,8 +291,6 @@ public abstract class BaseJpaR4Test extends BaseJpaTest implements ITestDataBuil
@Qualifier("myEpisodeOfCareDaoR4")
protected IFhirResourceDao<EpisodeOfCare> myEpisodeOfCareDao;
@Autowired
protected DaoConfig myDaoConfig;
@Autowired
protected PartitionSettings myPartitionSettings;
@Autowired
protected ModelConfig myModelConfig;
@@ -53,7 +53,7 @@ import static org.mockito.Mockito.when;
@ExtendWith(SpringExtension.class)
@RequiresDocker
@ContextConfiguration(classes = {TestR4ConfigWithElasticsearchClient.class})
public class BaseR4SearchLastN extends BaseJpaTest {
abstract public class BaseR4SearchLastN extends BaseJpaTest {

private static final Map<String, String> observationPatientMap = new HashMap<>();
private static final Map<String, String> observationCategoryMap = new HashMap<>();

@@ -83,8 +83,6 @@ public class BaseR4SearchLastN extends BaseJpaTest {
@Qualifier("myObservationDaoR4")
protected IFhirResourceDaoObservation<Observation> myObservationDao;
@Autowired
protected DaoConfig myDaoConfig;
@Autowired
protected FhirContext myFhirCtx;
@Autowired
protected PlatformTransactionManager myPlatformTransactionManager;

@@ -113,7 +111,7 @@ public class BaseR4SearchLastN extends BaseJpaTest {
// Using a static flag to ensure that the test data and the elasticsearch index are only created once.
// Creating this data and the index is time consuming, so we want to avoid repeating it for each test.
// Normally we would use a static @BeforeClass method for this purpose, but Autowired objects cannot be accessed in static methods.
if (!dataLoaded) {
if (!dataLoaded || patient0Id == null) {
Patient pt = new Patient();
pt.addName().setFamily("Lastn").addGiven("Arthur");
patient0Id = myPatientDao.create(pt, mockSrd()).getId().toUnqualifiedVersionless();
@@ -362,6 +362,7 @@ public class FhirResourceDaoR4ComboUniqueParamTest extends BaseComboParamsR4Test

@Test
public void testDoubleMatchingOnAnd_Search() {
myDaoConfig.setAdvancedLuceneIndexing(false);
createUniqueIndexPatientIdentifier();

Patient pt = new Patient();

@@ -1005,6 +1006,7 @@ public class FhirResourceDaoR4ComboUniqueParamTest extends BaseComboParamsR4Test

@Test
public void testSearchSynchronousUsingUniqueComposite() {
myDaoConfig.setAdvancedLuceneIndexing(false);
createUniqueBirthdateAndGenderSps();

Patient pt1 = new Patient();

@@ -1147,6 +1149,7 @@ public class FhirResourceDaoR4ComboUniqueParamTest extends BaseComboParamsR4Test

@Test
public void testUniqueValuesAreIndexed_Reference_UsingModifierSyntax() {
myDaoConfig.setAdvancedLuceneIndexing(false);
createUniqueNameAndManagingOrganizationSps();

Organization org = new Organization();

@@ -1509,6 +1512,7 @@ public class FhirResourceDaoR4ComboUniqueParamTest extends BaseComboParamsR4Test

@Test
public void testReplaceOneWithAnother() {
myDaoConfig.setAdvancedLuceneIndexing(false);
createUniqueBirthdateAndGenderSps();

Patient pt1 = new Patient();
@@ -27,8 +27,14 @@ public class FhirResourceDaoR4SearchFtTest extends BaseJpaR4Test {
@BeforeEach
public void beforeDisableResultReuse() {
myDaoConfig.setReuseCachedSearchResultsForMillis(null);
myDaoConfig.setAllowContainsSearches(true);
myDaoConfig.setAdvancedLuceneIndexing(false);
}

/**
 * TODO mb Extract these tests and run on all: jpa, lucene, es, and mongo. {@link FhirResourceDaoR4SearchWithElasticSearchIT}
 * {@link FhirResourceDaoR4SearchWithElasticSearchIT#testStringSearch}
 */
@Test
public void testCodeTextSearch() {
Observation obs1 = new Observation();

@@ -97,7 +103,6 @@ public class FhirResourceDaoR4SearchFtTest extends BaseJpaR4Test {
}

@Test
@Disabled
public void testStringTextSearch() {
Observation obs1 = new Observation();
obs1.getCode().setText("AAAAA");

@@ -114,8 +119,17 @@ public class FhirResourceDaoR4SearchFtTest extends BaseJpaR4Test {
SearchParameterMap map;

map = new SearchParameterMap();
map.add(Observation.SP_VALUE_STRING, new StringParam("sure").setContains(true));
assertThat(toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), containsInAnyOrder(toValues(id1, id2)));
map.add(Observation.SP_VALUE_STRING, new StringParam("Systol"));
assertThat("Default search matches prefix", toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), containsInAnyOrder(toValues(id1)));

map = new SearchParameterMap();
map.add(Observation.SP_VALUE_STRING, new StringParam("Systolic Blood"));
assertThat("Default search matches prefix, even with space", toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), containsInAnyOrder(toValues(id1)));

// contains doesn't work
// map = new SearchParameterMap();
// map.add(Observation.SP_VALUE_STRING, new StringParam("sure").setContains(true));
// assertThat("contains matches internal fragment", toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), containsInAnyOrder(toValues(id1, id2)));

}
@@ -1549,6 +1549,8 @@ public class FhirResourceDaoR4SearchNoFtTest extends BaseJpaR4Test {
id2 = myOrganizationDao.create(patient, mySrd).getId().toUnqualifiedVersionless().getValue();
}

// TODO: restore

int size;
SearchParameterMap params = new SearchParameterMap();
params.setLoadSynchronous(true);

@@ -3716,7 +3718,6 @@ public class FhirResourceDaoR4SearchNoFtTest extends BaseJpaR4Test {
// assertEquals(searchQuery, 2, StringUtils.countMatches(searchQuery.toUpperCase(), "AND RESOURCETA0_.RES_UPDATED"));
}

@Test
public void testSearchTokenParam() {
Patient patient = new Patient();
@@ -1,6 +1,7 @@
package ca.uhn.fhir.jpa.dao.r4;

import ca.uhn.fhir.jpa.api.config.DaoConfig;
import ca.uhn.fhir.jpa.dao.BaseJpaTest;
import ca.uhn.fhir.jpa.dao.data.ISearchDao;
import ca.uhn.fhir.jpa.entity.TermValueSet;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;

@@ -101,6 +102,7 @@ import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.TestPropertySource;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;

@@ -151,6 +153,7 @@ public class FhirResourceDaoR4SearchNoHashesTest extends BaseJpaR4Test {
public void beforeInitialize() {
myDaoConfig.setReuseCachedSearchResultsForMillis(null);
myDaoConfig.setDisableHashBasedSearches(true);
myDaoConfig.setAdvancedLuceneIndexing(false);
}

@Test
@@ -93,6 +93,7 @@ public class FhirResourceDaoR4SearchOptimizedTest extends BaseJpaR4Test {
mySearchCoordinatorSvcImpl.setLoadingThrottleForUnitTests(null);
mySearchCoordinatorSvcImpl.setSyncSizeForUnitTests(SearchCoordinatorSvcImpl.DEFAULT_SYNC_SIZE);
myCaptureQueriesListener.setCaptureQueryStackTrace(true);
myDaoConfig.setAdvancedLuceneIndexing(false);
}

@AfterEach
@@ -1,6 +1,7 @@
package ca.uhn.fhir.jpa.dao.r4;

import ca.uhn.fhir.jpa.api.config.DaoConfig;
import ca.uhn.fhir.jpa.dao.BaseJpaTest;
import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.api.server.IBundleProvider;

@@ -12,9 +13,11 @@ import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.Patient;
import org.hl7.fhir.r4.model.SearchParameter;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.test.context.TestPropertySource;

import java.util.UUID;

@@ -26,6 +29,11 @@ public class FhirResourceDaoR4SearchSqlTest extends BaseJpaR4Test {

private static final Logger ourLog = LoggerFactory.getLogger(FhirResourceDaoR4SearchSqlTest.class);

@BeforeEach
public void before() {
myDaoConfig.setAdvancedLuceneIndexing(false);
}

@AfterEach
public void after() {
myDaoConfig.setTagStorageMode(DaoConfig.DEFAULT_TAG_STORAGE_MODE);
@@ -1,49 +1,12 @@
package ca.uhn.fhir.jpa.dao.r4;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.stringContainsInOrder;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

import ca.uhn.fhir.context.support.ValueSetExpansionOptions;
import ca.uhn.fhir.jpa.search.elastic.ElasticsearchHibernatePropertiesBuilder;
import ca.uhn.fhir.test.utilities.docker.RequiresDocker;
import org.hamcrest.Matchers;
import org.hibernate.search.backend.elasticsearch.index.IndexStatus;
import org.hibernate.search.mapper.orm.schema.management.SchemaManagementStrategyName;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.CodeSystem;
import org.hl7.fhir.r4.model.CodeableConcept;
import org.hl7.fhir.r4.model.Coding;
import org.hl7.fhir.r4.model.Meta;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Quantity;
import org.hl7.fhir.r4.model.ValueSet;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import org.springframework.transaction.PlatformTransactionManager;

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.context.support.ValueSetExpansionOptions;
import ca.uhn.fhir.jpa.api.config.DaoConfig;
import ca.uhn.fhir.jpa.api.dao.IFhirResourceDao;
import ca.uhn.fhir.jpa.api.dao.IFhirResourceDaoValueSet;
import ca.uhn.fhir.jpa.api.dao.IFhirSystemDao;
import ca.uhn.fhir.jpa.api.model.DaoMethodOutcome;
import ca.uhn.fhir.jpa.api.svc.ISearchCoordinatorSvc;
import ca.uhn.fhir.jpa.bulk.export.api.IBulkDataExportSvc;
import ca.uhn.fhir.jpa.config.TestR4ConfigWithElasticSearch;

@@ -55,16 +18,58 @@ import ca.uhn.fhir.jpa.entity.TermConceptParentChildLink;
import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.search.reindex.IResourceReindexingSvc;
import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
import ca.uhn.fhir.rest.server.util.ISearchParamRegistry;
import ca.uhn.fhir.jpa.sp.ISearchParamPresenceSvc;
import ca.uhn.fhir.jpa.term.api.ITermCodeSystemStorageSvc;
import ca.uhn.fhir.jpa.term.api.ITermReadSvcR4;
import ca.uhn.fhir.parser.IParser;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.api.server.storage.ResourcePersistentId;
import ca.uhn.fhir.rest.param.ReferenceParam;
import ca.uhn.fhir.rest.param.StringOrListParam;
import ca.uhn.fhir.rest.param.StringParam;
import ca.uhn.fhir.rest.param.TokenParam;
import ca.uhn.fhir.rest.param.TokenParamModifier;
import ca.uhn.fhir.rest.server.util.ISearchParamRegistry;
import ca.uhn.fhir.test.utilities.docker.RequiresDocker;
import ca.uhn.fhir.validation.FhirValidator;
import ca.uhn.fhir.validation.ValidationResult;
import org.hamcrest.Matchers;
import org.hl7.fhir.instance.model.api.IIdType;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.CodeSystem;
import org.hl7.fhir.r4.model.CodeableConcept;
import org.hl7.fhir.r4.model.Coding;
import org.hl7.fhir.r4.model.Encounter;
import org.hl7.fhir.r4.model.Identifier;
import org.hl7.fhir.r4.model.Meta;
import org.hl7.fhir.r4.model.Narrative;
import org.hl7.fhir.r4.model.Observation;
import org.hl7.fhir.r4.model.Patient;
import org.hl7.fhir.r4.model.Quantity;
import org.hl7.fhir.r4.model.Reference;
import org.hl7.fhir.r4.model.StringType;
import org.hl7.fhir.r4.model.ValueSet;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import org.springframework.transaction.PlatformTransactionManager;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.stringContainsInOrder;
import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(SpringExtension.class)
@RequiresDocker
@@ -100,6 +105,12 @@ public class FhirResourceDaoR4SearchWithElasticSearchIT extends BaseJpaTest {
@Qualifier("myObservationDaoR4")
private IFhirResourceDao<Observation> myObservationDao;
@Autowired
@Qualifier("myPatientDaoR4")
private IFhirResourceDao<Patient> myPatientDao;
@Autowired
@Qualifier("myEncounterDaoR4")
private IFhirResourceDao<Encounter> myEncounterDao;
@Autowired
@Qualifier("mySystemDaoR4")
private IFhirSystemDao<Bundle, Meta> mySystemDao;
@Autowired

@@ -108,10 +119,12 @@ public class FhirResourceDaoR4SearchWithElasticSearchIT extends BaseJpaTest {
private IBulkDataExportSvc myBulkDataExportSvc;
@Autowired
private ITermCodeSystemStorageSvc myTermCodeSystemStorageSvc;
private boolean myContainsSettings;

@BeforeEach
public void beforePurgeDatabase() {
purgeDatabase(myDaoConfig, mySystemDao, myResourceReindexingSvc, mySearchCoordinatorSvc, mySearchParamRegistry, myBulkDataExportSvc);
myDaoConfig.setAdvancedLuceneIndexing(true);
}

@Override

@@ -124,6 +137,17 @@ public class FhirResourceDaoR4SearchWithElasticSearchIT extends BaseJpaTest {
return myTxManager;
}

@BeforeEach
public void enableContains() {
myContainsSettings = myDaoConfig.isAllowContainsSearches();
myDaoConfig.setAllowContainsSearches(true);
}

@AfterEach
public void restoreContains() {
myDaoConfig.setAllowContainsSearches(myContainsSettings);
}

@Test
public void testResourceTextSearch() {
Observation obs1 = new Observation();
@@ -148,7 +172,339 @@
		map = new SearchParameterMap();
		map.add(Constants.PARAM_CONTENT, new StringParam("blood"));
		assertThat(toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), containsInAnyOrder(toValues(id1, id2)));
	}

	@Test
	public void testResourceReferenceSearch() {
		IIdType patId, encId, obsId;

		{
			Patient patient = new Patient();
			DaoMethodOutcome outcome = myPatientDao.create(patient, mySrd);
			patId = outcome.getId();
		}
		{
			Encounter encounter = new Encounter();
			encounter.addIdentifier().setSystem("foo").setValue("bar");
			DaoMethodOutcome outcome = myEncounterDao.create(encounter);
			encId = outcome.getId();
		}
		{
			Observation obs2 = new Observation();
			obs2.getCode().setText("Body Weight");
			obs2.getCode().addCoding().setCode("obs2").setSystem("Some System").setDisplay("Body weight as measured by me");
			obs2.setStatus(Observation.ObservationStatus.FINAL);
			obs2.setValue(new Quantity(81));
			obs2.setSubject(new Reference(patId.toString()));
			obs2.setEncounter(new Reference(encId.toString()));
			obsId = myObservationDao.create(obs2, mySrd).getId().toUnqualifiedVersionless();
			//ourLog.info("Observation {}", myFhirCtx.newJsonParser().encodeResourceToString(obs2));
		}
		{
			// Search by chain
			SearchParameterMap map = new SearchParameterMap();
			map.add("encounter", new ReferenceParam("foo|bar").setChain("identifier"));
			assertObservationSearchMatches("Search by encounter reference", map, obsId);
		}

		{
			// search by encounter
			SearchParameterMap map = new SearchParameterMap();
			map.add("encounter", new ReferenceParam(encId));
			assertObservationSearchMatches("Search by encounter reference", map, obsId);
		}
		{
			// search by subject
			SearchParameterMap map = new SearchParameterMap();
			map.add("subject", new ReferenceParam(patId));
			assertObservationSearchMatches("Search by subject reference", map, obsId);
		}
		{
			// search by patient
			SearchParameterMap map = new SearchParameterMap();
			map.add("patient", new ReferenceParam(patId));
			assertObservationSearchMatches("Search by patient reference", map, obsId);
		}
		{
			// search by patient and encounter
			SearchParameterMap map = new SearchParameterMap();
			map.add("subject", new ReferenceParam(patId));
			map.add("encounter", new ReferenceParam(encId));
			assertObservationSearchMatches("Search by encounter&&subject reference", map, obsId);
		}
	}

	@Test
	public void testResourceCodeTokenSearch() {
		IIdType id1, id2, id2b, id3;

		String system = "http://loinc.org";
		{
			Observation obs1 = new Observation();
			obs1.getCode().setText("Systolic Blood Pressure");
			obs1.getCode().addCoding().setCode("obs1").setSystem(system).setDisplay("Systolic Blood Pressure");
			obs1.setStatus(Observation.ObservationStatus.FINAL);
			obs1.setValue(new Quantity(123));
			obs1.getNoteFirstRep().setText("obs1");
			id1 = myObservationDao.create(obs1, mySrd).getId().toUnqualifiedVersionless();
		}
		{
			Observation obs2 = new Observation();
			obs2.getCode().setText("Body Weight");
			obs2.getCode().addCoding().setCode("obs2").setSystem(system).setDisplay("Body weight as measured by me");
			obs2.setStatus(Observation.ObservationStatus.FINAL);
			obs2.setValue(new Quantity(81));
			id2 = myObservationDao.create(obs2, mySrd).getId().toUnqualifiedVersionless();
			//ourLog.info("Observation {}", myFhirCtx.newJsonParser().encodeResourceToString(obs2));
		}
		{
			Observation obs2b = new Observation();
			obs2b.getCode().addCoding().setCode("obs2").setSystem("http://example.com").setDisplay("A trick system");
			obs2b.setStatus(Observation.ObservationStatus.FINAL);
			obs2b.setValue(new Quantity(81));
			id2b = myObservationDao.create(obs2b, mySrd).getId().toUnqualifiedVersionless();
			//ourLog.info("Observation {}", myFhirCtx.newJsonParser().encodeResourceToString(obs2));
		}
		{
			Observation obs3 = new Observation();
			obs3.getCode().addCoding().setCode("obs3").setSystem("http://example.com").setDisplay("A trick system");
			obs3.getCode().addCoding().setCode("obs3-multiple-code").setSystem("http://example.com").setDisplay("A trick system");
			obs3.setStatus(Observation.ObservationStatus.FINAL);
			obs3.setValue(new Quantity(81));
			id3 = myObservationDao.create(obs3, mySrd).getId().toUnqualifiedVersionless();
			//ourLog.info("Observation {}", myFhirCtx.newJsonParser().encodeResourceToString(obs2));
		}
		{
			// search just code
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam(null, "obs2"));
			assertObservationSearchMatches("Search by code", map, id2, id2b);
		}
		{
			// search just system
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam(system, null));
			assertObservationSearchMatches("Search by system", map, id1, id2);
		}
		{
			// search code and system
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam(system, "obs2"));
			assertObservationSearchMatches("Search by system and code", map, id2);
		}
		{
			// Multiple codes indexed
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("http://example.com", "obs3-multiple-code"));
			assertObservationSearchMatches("Search for one code", map, id3);
		}
	}

	@Test
	public void testResourceCodeTextSearch() {
		IIdType id1, id2, id3, id4;

		{
			Observation obs1 = new Observation();
			obs1.getCode().setText("Weight unique");
			obs1.setStatus(Observation.ObservationStatus.FINAL);
			obs1.setValue(new Quantity(123));
			obs1.getNoteFirstRep().setText("obs1");
			id1 = myObservationDao.create(obs1, mySrd).getId().toUnqualifiedVersionless();
		}

		{
			Observation obs2 = new Observation();
			obs2.getCode().setText("Body Weight");
			obs2.getCode().addCoding().setCode("29463-7").setSystem("http://loinc.org").setDisplay("Body weight as measured by me");
			obs2.setStatus(Observation.ObservationStatus.FINAL);
			obs2.setValue(new Quantity(81));
			id2 = myObservationDao.create(obs2, mySrd).getId().toUnqualifiedVersionless();
			//ourLog.info("Observation {}", myFhirCtx.newJsonParser().encodeResourceToString(obs2));
		}
		{
			// don't look in the narrative when only searching code.
			Observation obs3 = new Observation();
			Narrative narrative = new Narrative();
			narrative.setDivAsString("<div>Body Weight</div>");
			obs3.setText(narrative);
			obs3.setStatus(Observation.ObservationStatus.FINAL);
			obs3.setValue(new Quantity(81));
			id3 = myObservationDao.create(obs3, mySrd).getId().toUnqualifiedVersionless();
			ourLog.trace("id3 is never found {}", id3);
		}

		// :text should work for identifier types
		{
			Observation obs4 = new Observation();
			Identifier identifier = obs4.addIdentifier();
			CodeableConcept codeableConcept = new CodeableConcept();
			codeableConcept.setText("Random Identifier Typetest");
			identifier.setType(codeableConcept);
			id4 = myObservationDao.create(obs4, mySrd).getId().toUnqualifiedVersionless();
		}

		{
			// first word
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("Body").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches("Search by first word", map, id2);
		}

		{
			// any word
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("weight").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches("Search by any word", map, id1, id2);
		}

		{
			// doesn't find internal fragment
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("ght").setModifier(TokenParamModifier.TEXT));
			assertThat("Search doesn't match middle of words", toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), Matchers.empty());
		}

		{
			// prefix
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("Bod*").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches("Search matches start of word", map, id2);
		}

		{
			// bare prefix, no wildcard
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("Bod").setModifier(TokenParamModifier.TEXT));
			assertThat("Bare prefix does not match", toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), Matchers.empty());
		}

		{
			// codeable.display
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("measured").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches(":text matches code.display", map, id2);
		}

		{
			// Identifier Type
			SearchParameterMap map = new SearchParameterMap();
			map.add("identifier", new TokenParam("Random").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches(":text matches identifier text", map, id4);
		}

		{
			// multiple values means or
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("unique").setModifier(TokenParamModifier.TEXT));
			map.get("code").get(0).add(new TokenParam("measured").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches("Multiple query values means or in :text", map, id1, id2);
		}

		{
			// space means AND
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("Body Weight").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatches("Multiple terms in value means and for :text", map, id2);
		}

		{
			// don't apply the n-gram analyzer to the query, just the text.
			SearchParameterMap map = new SearchParameterMap();
			map.add("code", new TokenParam("Bodum").setModifier(TokenParamModifier.TEXT));
			assertObservationSearchMatchesNothing("search with shared prefix does not match", map);
		}
	}

	@Test
	public void testStringSearch() {
		IIdType id1, id2, id3, id4, id5, id6;

		{
			Observation obs1 = new Observation();
			obs1.setStatus(Observation.ObservationStatus.FINAL);
			obs1.setValue(new StringType("blue"));
			id1 = myObservationDao.create(obs1, mySrd).getId().toUnqualifiedVersionless();
		}
		{
			Observation obs2 = new Observation();
			obs2.setStatus(Observation.ObservationStatus.FINAL);
			obs2.setValue(new StringType("green"));
			id2 = myObservationDao.create(obs2, mySrd).getId().toUnqualifiedVersionless();
		}
		{
			Observation obs3 = new Observation();
			obs3.setStatus(Observation.ObservationStatus.FINAL);
			obs3.setValue(new StringType("bluegreenish"));
			id3 = myObservationDao.create(obs3, mySrd).getId().toUnqualifiedVersionless();
		}
		{
			Observation obs4 = new Observation();
			obs4.setStatus(Observation.ObservationStatus.FINAL);
			obs4.setValue(new StringType("blüe"));
			id4 = myObservationDao.create(obs4, mySrd).getId().toUnqualifiedVersionless();
		}
		{
			// upper case
			Observation obs5 = new Observation();
			obs5.setStatus(Observation.ObservationStatus.FINAL);
			obs5.setValue(new StringType("Blue"));
			id5 = myObservationDao.create(obs5, mySrd).getId().toUnqualifiedVersionless();
		}
		{
			Observation obs6 = new Observation();
			obs6.setStatus(Observation.ObservationStatus.FINAL);
			obs6.setValue(new StringType("blue green"));
			id6 = myObservationDao.create(obs6, mySrd).getId().toUnqualifiedVersionless();
		}

		// run searches

		{
			// default search matches prefix, ascii-normalized, case-insensitive
			SearchParameterMap map = new SearchParameterMap();
			map.add("value-string", new StringParam("blu"));
			assertObservationSearchMatches("default search matches normalized prefix", map, id1, id3, id4, id5, id6);
		}
		{
			// normal search matches string with space
			SearchParameterMap map = new SearchParameterMap();
			map.add("value-string", new StringParam("blue gre"));
			assertObservationSearchMatches("normal search matches string with space", map, id6);
		}
		{
			// exact search
			SearchParameterMap map = new SearchParameterMap();
			map.add("value-string", new StringParam("blue").setExact(true));
			assertObservationSearchMatches("exact search only matches exact string", map, id1);
		}
		{
			// or matches both
			SearchParameterMap map = new SearchParameterMap();
			map.add("value-string",
				new StringOrListParam()
					.addOr(new StringParam("blue").setExact(true))
					.addOr(new StringParam("green").setExact(true)));

			assertObservationSearchMatches("or list matches both exact values", map, id1, id2);
		}
		{
			// contains matches anywhere
			SearchParameterMap map = new SearchParameterMap();
			map.add("value-string", new StringParam("reen").setContains(true));
			assertObservationSearchMatches("contains search matches anywhere", map, id2, id3, id6);
		}
	}

	private void assertObservationSearchMatchesNothing(String message, SearchParameterMap map) {
		assertObservationSearchMatches(message, map);
	}

	private void assertObservationSearchMatches(String message, SearchParameterMap map, IIdType... iIdTypes) {
		assertThat(message, toUnqualifiedVersionlessIdValues(myObservationDao.search(map)), containsInAnyOrder(toValues(iIdTypes)));
	}

	@Test

@@ -3643,6 +3643,7 @@ public class FhirResourceDaoR4Test extends BaseJpaR4Test {
	@Test
	public void testSortByString01() {
		myDaoConfig.setIndexMissingFields(DaoConfig.IndexEnabledEnum.ENABLED);
		myDaoConfig.setAdvancedLuceneIndexing(false);

		Patient p = new Patient();
		String string = "testSortByString01";

@@ -675,6 +675,7 @@ public class FhirSystemDaoR4Test extends BaseJpaR4SystemTest {

	@Test
	public void testReindexingSingleStringHashValueIsDeleted() {
		myDaoConfig.setAdvancedLuceneIndexing(false);
		Patient p = new Patient();
		p.addName().setFamily("family1");
		final IIdType id = myPatientDao.create(p, mySrd).getId().toUnqualifiedVersionless();

@@ -7,6 +7,7 @@ import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.config.DaoConfig;
import ca.uhn.fhir.jpa.dao.BaseHapiFhirDao;
import ca.uhn.fhir.jpa.dao.BaseJpaTest;
import ca.uhn.fhir.jpa.model.config.PartitionSettings;
import ca.uhn.fhir.jpa.model.entity.ForcedId;
import ca.uhn.fhir.jpa.model.entity.PartitionablePartitionId;
@@ -59,11 +60,13 @@ import org.hl7.fhir.r4.model.PractitionerRole;
import org.hl7.fhir.r4.model.Quantity;
import org.hl7.fhir.r4.model.SearchParameter;
import org.hl7.fhir.r4.model.ValueSet;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.test.context.TestPropertySource;

import java.util.Date;
import java.util.List;
@@ -92,9 +95,15 @@ import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

@SuppressWarnings({"unchecked", "ConstantConditions"})
public class PartitioningSqlR4Test extends BasePartitioningR4Test {
	private static final Logger ourLog = LoggerFactory.getLogger(PartitioningSqlR4Test.class);

	@BeforeEach
	public void disableAdvanceIndexing() {
		myDaoConfig.setAdvancedLuceneIndexing(false);
	}

	@Test
	public void testCreateSearchParameter_DefaultPartition() {
		SearchParameter sp = new SearchParameter();

@@ -4,13 +4,11 @@ import ca.uhn.fhir.interceptor.api.IAnonymousInterceptor;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
import ca.uhn.fhir.jpa.util.SqlQueryList;
import ca.uhn.fhir.jpa.util.TestUtil;
import ca.uhn.fhir.rest.api.server.IBundleProvider;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import ca.uhn.fhir.rest.param.TokenParam;
import org.hl7.fhir.r4.model.Condition;
import org.hl7.fhir.r4.model.Patient;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.Test;

import java.util.List;
@@ -27,6 +25,7 @@ public class SearchWithInterceptorR4Test extends BaseJpaR4Test {

	@Test
	public void testRawSql_Search() {
		myDaoConfig.setAdvancedLuceneIndexing(false);

		IAnonymousInterceptor interceptor = (pointcut, params) -> {
			RequestDetails requestDetails = params.get(RequestDetails.class);

@@ -235,8 +235,6 @@ public abstract class BaseJpaR5Test extends BaseJpaTest implements ITestDataBuil
	@Qualifier("myConditionDaoR5")
	protected IFhirResourceDao<Condition> myConditionDao;
	@Autowired
	protected DaoConfig myDaoConfig;
	@Autowired
	protected ModelConfig myModelConfig;
	@Autowired
	@Qualifier("myDeviceDaoR5")

@@ -7,7 +7,7 @@ import ca.uhn.fhir.jpa.model.entity.ResourceTable;
import ca.uhn.fhir.jpa.partition.SystemRequestDetails;
import ca.uhn.fhir.jpa.searchparam.SearchParameterMap;
import ca.uhn.fhir.jpa.searchparam.extractor.ISearchParamExtractor;
import ca.uhn.fhir.jpa.util.MultimapCollector;
import ca.uhn.fhir.util.MultimapCollector;
import ca.uhn.fhir.jpa.util.SqlQuery;
import ca.uhn.fhir.model.api.Include;
import ca.uhn.fhir.rest.api.server.IBundleProvider;

@@ -70,8 +70,6 @@ public class ResourceProviderSearchModifierDstu3Test extends BaseResourceProvide
			SearchParameterMap searchParameterMap = resourceSearch.getSearchParameterMap();
			IBundleProvider search = myObservationDao.search(searchParameterMap);
			assertThat(search.size(), is(equalTo(0)));

		}

	}
}

@@ -7,6 +7,8 @@ import ca.uhn.fhir.interceptor.api.IPointcut;
import ca.uhn.fhir.interceptor.api.Pointcut;
import ca.uhn.fhir.interceptor.model.RequestPartitionId;
import ca.uhn.fhir.jpa.api.config.DaoConfig;
import ca.uhn.fhir.jpa.batch.BatchJobsConfig;
import ca.uhn.fhir.jpa.dao.BaseJpaTest;
import ca.uhn.fhir.jpa.batch.config.BatchConstants;
import ca.uhn.fhir.jpa.delete.job.ReindexTestHelper;
import ca.uhn.fhir.rest.api.CacheControlDirective;
@@ -27,6 +29,7 @@ import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.TestPropertySource;

import java.util.ArrayList;
import java.util.List;
@@ -52,6 +55,13 @@ public class MultitenantBatchOperationR4Test extends BaseMultitenantResourceProv
		myDaoConfig.setDeleteExpungeEnabled(true);
	}

	@BeforeEach
	public void disableAdvanceIndexing() {
		// advanced indexing doesn't support partitions
		myDaoConfig.setAdvancedLuceneIndexing(false);
	}

	@AfterEach
	@Override
	public void after() throws Exception {

@@ -116,6 +116,7 @@ public class ResourceProviderR4BundleTest extends BaseResourceProviderR4Test {

	}

	@Test
	public void testHighConcurrencyWorks() throws IOException, InterruptedException {
		List<Bundle> bundles = new ArrayList<>();
@@ -220,8 +221,10 @@
		Bundle input = new Bundle();
		input.setType(BundleType.BATCH);

		//1
		input.addEntry().getRequest().setMethod(HTTPVerb.GET).setUrl(ids.get(0));

		//2
		Patient p = new Patient();
		p.setId("100");
		p.setGender(AdministrativeGender.MALE);
@@ -229,14 +232,19 @@
		p.addName().setFamily("Smith");
		input.addEntry().setResource(p).getRequest().setMethod(HTTPVerb.POST);

		//3
		input.addEntry().getRequest().setMethod(HTTPVerb.GET).setUrl(ids.get(1));
		//4
		input.addEntry().getRequest().setMethod(HTTPVerb.GET).setUrl(ids.get(2));

		//5
		Condition c = new Condition();
		c.getSubject().setReference(ids.get(0));
		input.addEntry().setResource(c).getRequest().setMethod(HTTPVerb.POST);

		//6
		input.addEntry().getRequest().setMethod(HTTPVerb.GET).setUrl(ids.get(3));
		//7
		input.addEntry().getRequest().setMethod(HTTPVerb.GET).setUrl(ids.get(4));

		//ourLog.info("Bundle: \n" + myFhirCtx.newJsonParser().setPrettyPrint(true).encodeResourceToString(input));

@@ -60,8 +60,6 @@ public class ResourceReindexingSvcImplTest extends BaseJpaTest {
	@Mock
	private PlatformTransactionManager myTxManager;

	private final DaoConfig myDaoConfig = new DaoConfig();

	@Mock
	private DaoRegistry myDaoRegistry;
	@Mock

@@ -63,7 +63,11 @@
	<logger name="org.hibernate.search.elasticsearch.request" additivity="false" level="trace">
		<appender-ref ref="STDOUT" />
	</logger>

	<!--
	<logger name="ca.uhn.fhir.jpa.model.search" additivity="false" level="debug">
		<appender-ref ref="STDOUT" />
	</logger>
	-->
	<logger name="org.springframework.test.context.cache" additivity="false" level="debug">
		<appender-ref ref="STDOUT" />
	</logger>

@@ -34,6 +34,7 @@ import javax.persistence.TemporalType;
import javax.persistence.Transient;
import java.util.Collection;
import java.util.Date;
import java.util.Map;

import static org.apache.commons.lang3.StringUtils.defaultString;

@@ -87,10 +87,6 @@ public class ResourceIndexedSearchParamString extends BaseResourceIndexedSearchP
	private ResourceTable myResourceTable;

	@Column(name = "SP_VALUE_EXACT", length = MAX_LENGTH, nullable = true)
	// @FullTextField(name = "myValueText", searchable=Searchable.YES, projectable = Projectable.YES, analyzer = "standardAnalyzer")
	// @FullTextField(name = "myValueTextEdgeNGram", searchable=Searchable.YES, projectable = Projectable.NO, analyzer = "autocompleteEdgeAnalyzer")
	// @FullTextField(name = "myValueTextNGram", searchable=Searchable.YES, projectable = Projectable.NO, analyzer = "autocompleteNGramAnalyzer")
	// @FullTextField(name = "myValueTextPhonetic", searchable=Searchable.YES, projectable = Projectable.NO, analyzer = "autocompletePhoneticAnalyzer")
	private String myValueExact;

	@Column(name = "SP_VALUE_NORMALIZED", length = MAX_LENGTH, nullable = true)

@@ -24,21 +24,24 @@ import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.jpa.model.cross.IBasePersistedResource;
import ca.uhn.fhir.jpa.model.cross.IResourceLookup;
import ca.uhn.fhir.jpa.model.search.ResourceTableRoutingBinder;
import ca.uhn.fhir.jpa.model.search.SearchParamTextPropertyBinder;
import ca.uhn.fhir.jpa.model.search.ExtendedLuceneIndexData;
import ca.uhn.fhir.model.primitive.IdDt;
import ca.uhn.fhir.rest.api.Constants;
import ca.uhn.fhir.rest.api.server.storage.ResourcePersistentId;
import ca.uhn.fhir.rest.server.exceptions.UnprocessableEntityException;
import org.apache.commons.lang3.builder.ToStringBuilder;
import org.apache.commons.lang3.builder.ToStringStyle;
import org.hibernate.annotations.OptimisticLock;
import org.hibernate.search.engine.backend.types.Projectable;
import org.hibernate.search.engine.backend.types.Searchable;
import org.hibernate.search.mapper.pojo.bridge.mapping.annotation.PropertyBinderRef;
import org.hibernate.search.mapper.pojo.bridge.mapping.annotation.RoutingBinderRef;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.GenericField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.IndexingDependency;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.ObjectPath;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.PropertyBinding;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.PropertyValue;
import org.hl7.fhir.instance.model.api.IIdType;

@@ -118,6 +121,11 @@ public class ResourceTable extends BaseHasResource implements Serializable, IBas
	@IndexingDependency(derivedFrom = @ObjectPath(@PropertyValue(propertyName = "myVersion")))
	private String myNarrativeText;

	@Transient
	@IndexingDependency(derivedFrom = @ObjectPath(@PropertyValue(propertyName = "myVersion")))
	@PropertyBinding(binder = @PropertyBinderRef(type = SearchParamTextPropertyBinder.class))
	private ExtendedLuceneIndexData myLuceneIndexData;

	@OneToMany(mappedBy = "myResource", cascade = {}, fetch = FetchType.LAZY, orphanRemoval = false)
	@OptimisticLock(excluded = true)
	private Collection<ResourceIndexedSearchParamCoords> myParamsCoords;
@@ -747,4 +755,8 @@
	public String getCreatedByMatchUrl() {
		return myCreatedByMatchUrl;
	}

	public void setLuceneIndexData(ExtendedLuceneIndexData theLuceneIndexData) {
		myLuceneIndexData = theLuceneIndexData;
	}
}

@@ -0,0 +1,49 @@
package ca.uhn.fhir.jpa.model.search;

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.api.RestSearchParameterTypeEnum;
import ca.uhn.fhir.rest.param.TokenParam;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.SetMultimap;
import org.apache.commons.lang3.StringUtils;
import org.hibernate.search.engine.backend.document.DocumentElement;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Collects our lucene extended indexing data.
 */
public class ExtendedLuceneIndexData {
	private static final Logger ourLog = LoggerFactory.getLogger(ExtendedLuceneIndexData.class);

	final FhirContext myFhirContext;
	final SetMultimap<String, String> mySearchParamStrings = HashMultimap.create();
	final SetMultimap<String, TokenParam> mySearchParamTokens = HashMultimap.create();
	final SetMultimap<String, String> mySearchParamLinks = HashMultimap.create();

	public ExtendedLuceneIndexData(FhirContext theFhirContext) {
		this.myFhirContext = theFhirContext;
	}

	public void writeIndexElements(DocumentElement theDocument) {
		HibernateSearchIndexWriter indexWriter = HibernateSearchIndexWriter.forRoot(myFhirContext, theDocument);

		// TODO MB Use RestSearchParameterTypeEnum to define templates.
		mySearchParamStrings.forEach(indexWriter::writeStringIndex);
		mySearchParamTokens.forEach(indexWriter::writeTokenIndex);
		mySearchParamLinks.forEach(indexWriter::writeReferenceIndex);
	}

	public void addStringIndexData(String theSpName, String theText) {
		mySearchParamStrings.put(theSpName, theText);
	}

	public void addTokenIndexData(String theSpName, String theSystem, String theValue) {
		mySearchParamTokens.put(theSpName, new TokenParam(theSystem, theValue));
	}

	public void addResourceLinkIndexData(String theSpName, String theTargetResourceId) {
		mySearchParamLinks.put(theSpName, theTargetResourceId);
	}
}

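The multimap-backed collector above groups extracted values by search-parameter name, de-duplicating repeats, and only replays them to the index writer at the end. A minimal self-contained sketch of that same accumulate-then-write pattern using only JDK collections (the `SimpleIndexData` class and `BiConsumer` writer are hypothetical stand-ins for `ExtendedLuceneIndexData`, Guava's `SetMultimap`, and `HibernateSearchIndexWriter`):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;

// Hypothetical stand-in for ExtendedLuceneIndexData: groups extracted values
// by search-parameter name, de-duplicating repeats, then replays them to a writer.
class SimpleIndexData {
	private final Map<String, Set<String>> myStrings = new LinkedHashMap<>();

	void addStringIndexData(String theSpName, String theText) {
		// computeIfAbsent is safe here: the mapping function never re-enters the map
		myStrings.computeIfAbsent(theSpName, k -> new LinkedHashSet<>()).add(theText);
	}

	void writeIndexElements(BiConsumer<String, String> theWriter) {
		myStrings.forEach((spName, values) -> values.forEach(v -> theWriter.accept(spName, v)));
	}

	public static void main(String[] args) {
		SimpleIndexData data = new SimpleIndexData();
		data.addStringIndexData("value-string", "blue");
		data.addStringIndexData("value-string", "blue"); // duplicate, kept once
		data.addStringIndexData("code", "29463-7");
		data.writeIndexElements((sp, v) -> System.out.println(sp + " -> " + v));
	}
}
```

Deferring the writes this way keeps the extraction phase independent of the Hibernate Search document lifecycle: values can be collected at any point during resource processing and flushed in one pass when the document is built.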
@@ -0,0 +1,65 @@
package ca.uhn.fhir.jpa.model.search;

import org.hibernate.search.engine.backend.document.DocumentElement;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import javax.annotation.Nonnull;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Provide a lookup of created Hibernate Search DocumentElement entries.
 *
 * The Hibernate Search DocumentElement api only supports create - it does not support fetching an existing element.
 * This class demand-creates object elements for a given path.
 */
public class HibernateSearchElementCache {
	private static final Logger ourLog = LoggerFactory.getLogger(HibernateSearchElementCache.class);
	private final DocumentElement myRoot;
	private final Map<String, DocumentElement> myCache = new HashMap<>();

	/**
	 * Create the helper rooted on the given DocumentElement
	 *
	 * @param theRoot the document root
	 */
	public HibernateSearchElementCache(DocumentElement theRoot) {
		this.myRoot = theRoot;
	}

	/**
	 * Fetch or create an Object DocumentElement with thePath from the root element.
	 *
	 * @param thePath the property names of the object path, e.g. "sp", "code", "token"
	 * @return the existing or created element
	 */
	public DocumentElement getObjectElement(@Nonnull String... thePath) {
		return getObjectElement(Arrays.asList(thePath));
	}

	/**
	 * Fetch or create an Object DocumentElement with thePath from the root element.
	 *
	 * @param thePath the property names of the object path, e.g. "sp", "code", "token"
	 * @return the existing or created element
	 */
	public DocumentElement getObjectElement(@Nonnull List<String> thePath) {
		if (thePath.isEmpty()) {
			return myRoot;
		}
		String key = String.join(".", thePath);
		// re-implement computeIfAbsent since we're recursive, and it isn't reentrant.
		DocumentElement result = myCache.get(key);
		if (result == null) {
			DocumentElement parent = getObjectElement(thePath.subList(0, thePath.size() - 1));
			String lastSegment = thePath.get(thePath.size() - 1);
			assert (lastSegment.indexOf('.') == -1);
			result = parent.addObject(lastSegment);
			myCache.put(key, result);
		}
		ourLog.trace("getNode {}: {}", key, result);
		return result;
	}
}

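Because `DocumentElement` is create-only, the cache above walks the dotted path recursively, creating each missing object node exactly once and memoizing it by its joined key. A self-contained sketch of the same pattern (the `Node` type here is a hypothetical stand-in for Hibernate Search's `DocumentElement`); note the explicit get/put instead of `Map.computeIfAbsent`, whose mapping function must not modify the map again, which the recursive parent lookup would do:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical demand-created element tree, mirroring HibernateSearchElementCache.
class PathElementCache {
	static class Node {
		final String name;
		final List<Node> children = new ArrayList<>();
		Node(String theName) { name = theName; }
		Node addChild(String theChildName) { // create-only, like DocumentElement.addObject
			Node child = new Node(theChildName);
			children.add(child);
			return child;
		}
	}

	private final Node myRoot;
	private final Map<String, Node> myCache = new HashMap<>();

	PathElementCache(Node theRoot) { myRoot = theRoot; }

	Node getObjectElement(List<String> thePath) {
		if (thePath.isEmpty()) {
			return myRoot;
		}
		String key = String.join(".", thePath);
		// explicit get/put: the recursive parent lookup below would not be
		// legal inside a computeIfAbsent mapping function
		Node result = myCache.get(key);
		if (result == null) {
			Node parent = getObjectElement(thePath.subList(0, thePath.size() - 1));
			result = parent.addChild(thePath.get(thePath.size() - 1));
			myCache.put(key, result);
		}
		return result;
	}

	public static void main(String[] args) {
		Node root = new Node("root");
		PathElementCache cache = new PathElementCache(root);
		Node token = cache.getObjectElement(Arrays.asList("sp", "code", "token"));
		Node again = cache.getObjectElement(Arrays.asList("sp", "code", "token"));
		System.out.println(token == again);       // true: repeat lookups hit the cache
		System.out.println(root.children.size()); // 1: the "sp" node was created once
	}
}
```

Without the memoization, each `writeStringIndex`/`writeTokenIndex` call for the same search parameter would add a fresh object node, producing duplicate siblings in the index document instead of one shared `sp.<name>.<type>` subtree.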
@ -0,0 +1,55 @@
|
|||
package ca.uhn.fhir.jpa.model.search;

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.param.TokenParam;
import org.hibernate.search.engine.backend.document.DocumentElement;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HibernateSearchIndexWriter {
	private static final Logger ourLog = LoggerFactory.getLogger(HibernateSearchIndexWriter.class);
	public static final String IDX_STRING_NORMALIZED = "norm";
	public static final String IDX_STRING_EXACT = "exact";
	public static final String IDX_STRING_TEXT = "text";
	final HibernateSearchElementCache myNodeCache;
	final FhirContext myFhirContext;

	HibernateSearchIndexWriter(FhirContext theFhirContext, DocumentElement theRoot) {
		myFhirContext = theFhirContext;
		myNodeCache = new HibernateSearchElementCache(theRoot);
	}

	public DocumentElement getSearchParamIndexNode(String theSearchParamName, String theIndexType) {
		return myNodeCache.getObjectElement("sp", theSearchParamName, theIndexType);
	}

	public static HibernateSearchIndexWriter forRoot(FhirContext theFhirContext, DocumentElement theDocument) {
		return new HibernateSearchIndexWriter(theFhirContext, theDocument);
	}

	public void writeStringIndex(String theSearchParam, String theValue) {
		DocumentElement stringIndexNode = getSearchParamIndexNode(theSearchParam, "string");

		stringIndexNode.addValue(IDX_STRING_NORMALIZED, theValue); // for default search
		stringIndexNode.addValue(IDX_STRING_EXACT, theValue);
		stringIndexNode.addValue(IDX_STRING_TEXT, theValue);
		ourLog.debug("Adding Search Param Text: {} -- {}", theSearchParam, theValue);
	}

	public void writeTokenIndex(String theSearchParam, TokenParam theValue) {
		DocumentElement tokenIndexNode = getSearchParamIndexNode(theSearchParam, "token");
		// TODO mb we can use a token_filter with pattern_capture to generate all three off a single value. Do this next, after merge.
		tokenIndexNode.addValue("code", theValue.getValue());
		tokenIndexNode.addValue("system", theValue.getSystem());
		// This next one serializes as system|value
		tokenIndexNode.addValue("code-system", theValue.getValueAsQueryToken(myFhirContext));
		ourLog.debug("Adding Search Param Token: {} -- {}", theSearchParam, theValue);
	}

	public void writeReferenceIndex(String theSearchParam, String theValue) {
		DocumentElement referenceIndexNode = getSearchParamIndexNode(theSearchParam, "reference");
		referenceIndexNode.addValue("value", theValue);
		ourLog.trace("Adding Search Param Reference: {} -- {}", theSearchParam, theValue);
	}
}
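The fan-out performed by writeTokenIndex() can be sketched without any Hibernate Search dependencies. This is a hypothetical model: the field names ("code", "system", "code-system") match the diff above, everything else is illustrative. Writing three fields lets `code`, `system|code`, and system-only queries each hit an exact match:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the expansion done by writeTokenIndex():
// one system+code token becomes three separate index fields.
public class TokenExpansionDemo {
	static Map<String, String> expandToken(String theSystem, String theCode) {
		Map<String, String> fields = new LinkedHashMap<>();
		fields.put("code", theCode);                              // bare code search
		fields.put("system", theSystem);                          // system-only search
		fields.put("code-system", theSystem + "|" + theCode);     // combined system|code search
		return fields;
	}

	public static void main(String[] args) {
		System.out.println(expandToken("http://loinc.org", "788-0"));
		// {code=788-0, system=http://loinc.org, code-system=http://loinc.org|788-0}
	}
}
```

As the TODO in the diff notes, a pattern_capture token filter could derive all three tokens from the single combined value, trading this triplicated storage for analyzer complexity.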
@ -0,0 +1,141 @@
package ca.uhn.fhir.jpa.model.search;

/*-
 * #%L
 * HAPI FHIR JPA Server
 * %%
 * Copyright (C) 2014 - 2021 Smile CDR, Inc.
 * %%
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * #L%
 */

import org.hibernate.search.engine.backend.document.DocumentElement;
import org.hibernate.search.engine.backend.document.model.dsl.IndexSchemaElement;
import org.hibernate.search.engine.backend.document.model.dsl.IndexSchemaObjectField;
import org.hibernate.search.engine.backend.types.ObjectStructure;
import org.hibernate.search.engine.backend.types.Projectable;
import org.hibernate.search.engine.backend.types.dsl.IndexFieldTypeFactory;
import org.hibernate.search.engine.backend.types.dsl.StringIndexFieldTypeOptionsStep;
import org.hibernate.search.mapper.pojo.bridge.PropertyBridge;
import org.hibernate.search.mapper.pojo.bridge.binding.PropertyBindingContext;
import org.hibernate.search.mapper.pojo.bridge.mapping.programmatic.PropertyBinder;
import org.hibernate.search.mapper.pojo.bridge.runtime.PropertyBridgeWriteContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static ca.uhn.fhir.jpa.model.search.HibernateSearchIndexWriter.IDX_STRING_EXACT;
import static ca.uhn.fhir.jpa.model.search.HibernateSearchIndexWriter.IDX_STRING_NORMALIZED;
import static ca.uhn.fhir.jpa.model.search.HibernateSearchIndexWriter.IDX_STRING_TEXT;

/**
 * Allows Hibernate Search to index:
 *
 * CodeableConcept.text
 * Coding.display
 * Identifier.type.text
 */
public class SearchParamTextPropertyBinder implements PropertyBinder, PropertyBridge<ExtendedLuceneIndexData> {

	public static final String SEARCH_PARAM_TEXT_PREFIX = "text-";
	private static final Logger ourLog = LoggerFactory.getLogger(SearchParamTextPropertyBinder.class);

	@Override
	public void bind(PropertyBindingContext thePropertyBindingContext) {
		// TODO Is it safe to use object identity of the Map to track dirty?
		// N.B. GGG I would hazard that it is not; we could potentially use the Version of the resource.
		thePropertyBindingContext.dependencies().use("mySearchParamStrings");

		defineIndexingTemplate(thePropertyBindingContext);

		thePropertyBindingContext.bridge(ExtendedLuceneIndexData.class, this);
	}

	private void defineIndexingTemplate(PropertyBindingContext thePropertyBindingContext) {
		IndexSchemaElement indexSchemaElement = thePropertyBindingContext.indexSchemaElement();

		// In order to support dynamic fields, we have to use field templates. We _must_ define the templates at bootstrap time and cannot
		// create them ad hoc. https://docs.jboss.org/hibernate/search/6.0/reference/en-US/html_single/#mapper-orm-bridge-index-field-dsl-dynamic
		// I _think_ I'm doing the right thing here by indicating that everything matching this template uses this analyzer.
		IndexFieldTypeFactory indexFieldTypeFactory = thePropertyBindingContext.typeFactory();
		StringIndexFieldTypeOptionsStep<?> standardAnalyzer =
			indexFieldTypeFactory.asString()
				// TODO mb Once Ken finishes extracting a common base, we can share these constants with HapiElasticsearchAnalysisConfigurer and HapiLuceneAnalysisConfigurer
				.analyzer("standardAnalyzer")
				.projectable(Projectable.NO);

		StringIndexFieldTypeOptionsStep<?> exactAnalyzer =
			indexFieldTypeFactory.asString()
				.analyzer("exactAnalyzer") // default max-length is 256. Is that enough for code system uris?
				.projectable(Projectable.NO);

		StringIndexFieldTypeOptionsStep<?> normStringAnalyzer =
			indexFieldTypeFactory.asString()
				.analyzer("normStringAnalyzer")
				.projectable(Projectable.NO);

		// the old style for _text and _contains
		indexSchemaElement
			.fieldTemplate("SearchParamText", standardAnalyzer)
			.matchingPathGlob(SEARCH_PARAM_TEXT_PREFIX + "*");

		// The following section is a bit ugly. We need to enforce order and dependency, or the object matches will be too big.
		{
			IndexSchemaObjectField spfield = indexSchemaElement.objectField("sp", ObjectStructure.FLATTENED);
			spfield.toReference();

			// TODO MB: the lucene/elastic independent api is hurting a bit here.
			// For lucene, we need a separate field for each analyzer. So we'll add string (for :exact), and text (for :text).
			// They aren't marked stored, so there's no space cost beyond the index for each.
			// But for elastic, I'd rather have a single field defined, with multi-field sub-fields. The index cost is the same,
			// but elastic will actually store all fields in the source document.
			// Something like this. But we'll need two index writers (lucene vs elastic).
			// ElasticsearchNativeIndexFieldTypeMappingStep nativeStep = indexFieldTypeFactory.extension(ElasticsearchExtension.get()).asNative();
			// nativeStep.mapping()

			// So triplicate the storage for now. :-(
			String stringPathGlob = "*.string";
			spfield.objectFieldTemplate("stringIndex", ObjectStructure.FLATTENED).matchingPathGlob(stringPathGlob);
			spfield.fieldTemplate("string-norm", normStringAnalyzer).matchingPathGlob(stringPathGlob + "." + IDX_STRING_NORMALIZED).multiValued();
			spfield.fieldTemplate("string-exact", exactAnalyzer).matchingPathGlob(stringPathGlob + "." + IDX_STRING_EXACT).multiValued();
			spfield.fieldTemplate("string-text", standardAnalyzer).matchingPathGlob(stringPathGlob + "." + IDX_STRING_TEXT).multiValued();

			// token
			// Ideally, we'd store a single code-system string and use a custom tokenizer to
			// generate "system|", "|code", and "system|code" tokens to support all three.
			// But the standard tokenizers aren't that flexible. As second best, it would be nice to use elastic multi-fields
			// to apply three different tokenizers to a single value.
			// Instead, just be simple and expand into three full fields for now.
			spfield.objectFieldTemplate("tokenIndex", ObjectStructure.FLATTENED).matchingPathGlob("*.token");
			spfield.fieldTemplate("token-code", exactAnalyzer).matchingPathGlob("*.token.code").multiValued();
			spfield.fieldTemplate("token-code-system", exactAnalyzer).matchingPathGlob("*.token.code-system").multiValued();
			spfield.fieldTemplate("token-system", exactAnalyzer).matchingPathGlob("*.token.system").multiValued();
			spfield.fieldTemplate("reference-value", exactAnalyzer).matchingPathGlob("*.reference.value").multiValued();

			// last, since the globs are matched in declaration order, and * matches even nested nodes.
			spfield.objectFieldTemplate("spObject", ObjectStructure.FLATTENED).matchingPathGlob("*");
		}
	}

	@Override
	public void write(DocumentElement theDocument, ExtendedLuceneIndexData theIndexData, PropertyBridgeWriteContext thePropertyBridgeWriteContext) {
		if (theIndexData != null) {
			ourLog.trace("Writing index data for {}", theIndexData);
			theIndexData.writeIndexElements(theDocument);
		}
	}
}
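The comment above notes that the catch-all "spObject" template must be declared last because field templates are matched in declaration order and `*` matches nested paths too. A standalone sketch makes the first-wins behavior concrete (a hypothetical glob matcher, not the Hibernate Search implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Hypothetical first-match glob registry, modelling field template ordering.
public class GlobOrderDemo {
	private final Map<String, Pattern> myTemplates = new LinkedHashMap<>();

	void declare(String theName, String theGlob) {
		// translate a simple glob to a regex: "*" matches any characters, "." is literal
		myTemplates.put(theName, Pattern.compile(theGlob.replace(".", "\\.").replace("*", ".*")));
	}

	String firstMatch(String thePath) {
		// templates are tried in declaration order; the first match wins
		return myTemplates.entrySet().stream()
			.filter(e -> e.getValue().matcher(thePath).matches())
			.map(Map.Entry::getKey)
			.findFirst()
			.orElse(null);
	}

	public static void main(String[] args) {
		GlobOrderDemo demo = new GlobOrderDemo();
		demo.declare("token-code", "*.token.code");
		demo.declare("spObject", "*");
		System.out.println(demo.firstMatch("code.token.code")); // token-code
	}
}
```

Declaring "spObject" first instead would shadow every specific template, since `*` matches "code.token.code" as well.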
@ -0,0 +1,80 @@
package ca.uhn.fhir.jpa.model.search;

import org.hamcrest.Matchers;
import org.hibernate.search.engine.backend.document.DocumentElement;
import org.hibernate.search.engine.backend.document.IndexFieldReference;
import org.hibernate.search.engine.backend.document.IndexObjectFieldReference;
import org.junit.jupiter.api.Test;

import static org.hamcrest.MatcherAssert.assertThat;

class HibernateSearchElementCacheTest {
	static class TestDocumentElement implements DocumentElement {
		final TestDocumentElement myParent;

		TestDocumentElement(TestDocumentElement theParent) {
			this.myParent = theParent;
		}

		@Override
		public <F> void addValue(IndexFieldReference<F> fieldReference, F value) {
			// not used
		}

		@Override
		public DocumentElement addObject(IndexObjectFieldReference fieldReference) {
			// not used
			return null;
		}

		@Override
		public void addNullObject(IndexObjectFieldReference fieldReference) {
			// not used
		}

		@Override
		public void addValue(String relativeFieldName, Object value) {
			// not used
		}

		@Override
		public DocumentElement addObject(String relativeFieldName) {
			return new TestDocumentElement(this);
		}

		@Override
		public void addNullObject(String relativeFieldName) {
			// not used
		}
	}

	TestDocumentElement myRoot = new TestDocumentElement(null);
	HibernateSearchElementCache mySvc = new HibernateSearchElementCache(myRoot);

	@Test
	public void emptyPathReturnsRoot() {
		assertThat(mySvc.getObjectElement(), Matchers.sameInstance(myRoot));
	}

	@Test
	public void simpleChildIsRemembered() {
		DocumentElement child = mySvc.getObjectElement("child");

		assertThat(mySvc.getObjectElement("child"), Matchers.sameInstance(child));
	}

	@Test
	public void deeperPathRemembered() {
		DocumentElement child = mySvc.getObjectElement("child", "grandchild");

		assertThat(mySvc.getObjectElement("child", "grandchild"), Matchers.sameInstance(child));
	}

	@Test
	public void grandchildParentIsChild() {
		DocumentElement child = mySvc.getObjectElement("child");
		TestDocumentElement grandChild = (TestDocumentElement) mySvc.getObjectElement("child", "grandchild");

		assertThat(grandChild.myParent, Matchers.sameInstance(child));
	}
}
@ -15,6 +15,7 @@ import ca.uhn.fhir.rest.param.DateParam;
import ca.uhn.fhir.rest.param.DateRangeParam;
import ca.uhn.fhir.rest.param.ParamPrefixEnum;
import ca.uhn.fhir.rest.param.QuantityParam;
import ca.uhn.fhir.rest.param.TokenParamModifier;
import ca.uhn.fhir.util.ObjectUtil;
import ca.uhn.fhir.util.UrlUtil;
import org.apache.commons.lang3.StringUtils;

@ -22,6 +23,7 @@ import org.apache.commons.lang3.Validate;
import org.apache.commons.lang3.builder.ToStringBuilder;
import org.apache.commons.lang3.builder.ToStringStyle;

import javax.annotation.Nonnull;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collection;

@ -623,6 +625,90 @@ public class SearchParameterMap implements Serializable {
		return mySearchParameterMap.remove(theName);
	}

	/**
	 * Variant of removeByNameAndModifier for unmodified params.
	 *
	 * @param theName the query parameter key
	 * @return an And/Or List of Query Parameters matching the name with no modifier.
	 */
	public List<List<IQueryParameterType>> removeByNameUnmodified(String theName) {
		return this.removeByNameAndModifier(theName, "");
	}

	/**
	 * Given a search parameter name and modifier (e.g. :text),
	 * get and remove all Search Parameters matching this name and modifier
	 *
	 * @param theName     the query parameter key
	 * @param theModifier the qualifier you want to remove - nullable for unmodified params.
	 * @return an And/Or List of Query Parameters matching the qualifier.
	 */
	public List<List<IQueryParameterType>> removeByNameAndModifier(String theName, String theModifier) {
		theModifier = StringUtils.defaultString(theModifier, "");

		List<List<IQueryParameterType>> remainderParameters = new ArrayList<>();
		List<List<IQueryParameterType>> matchingParameters = new ArrayList<>();

		// pull all of them out, and partition by match against the qualifier
		List<List<IQueryParameterType>> andList = mySearchParameterMap.remove(theName);
		if (andList != null) {
			for (List<IQueryParameterType> orList : andList) {
				if (!orList.isEmpty() &&
					StringUtils.defaultString(orList.get(0).getQueryParameterQualifier(), "")
						.equals(theModifier)) {
					matchingParameters.add(orList);
				} else {
					remainderParameters.add(orList);
				}
			}
		}

		// put the unmatched back in.
		if (!remainderParameters.isEmpty()) {
			mySearchParameterMap.put(theName, remainderParameters);
		}
		return matchingParameters;
	}

	public List<List<IQueryParameterType>> removeByNameAndModifier(String theName, @Nonnull TokenParamModifier theModifier) {
		return removeByNameAndModifier(theName, theModifier.getValue());
	}

	/**
	 * For each search parameter in the map, extract any which have the given qualifier.
	 * e.g. Take the url: Observation?code:text=abc&code=123&code:text=def&reason:text=somereason
	 *
	 * If we call this function with `:text`, it will return a map that looks like:
	 *
	 * code -> [[code:text=abc], [code:text=def]]
	 * reason -> [[reason:text=somereason]]
	 *
	 * and the remaining search parameters in the map will be:
	 *
	 * code -> [[code=123]]
	 *
	 * @param theQualifier the modifier to extract, e.g. ":text"
	 * @return a map from parameter name to the extracted And/Or lists
	 */
	public Map<String, List<List<IQueryParameterType>>> removeByQualifier(String theQualifier) {
		Map<String, List<List<IQueryParameterType>>> retVal = new HashMap<>();
		// Copy the key set: removeByNameAndModifier() mutates the map, and removing
		// entries while iterating the live keySet would throw ConcurrentModificationException.
		List<String> parameterNames = new ArrayList<>(mySearchParameterMap.keySet());
		for (String parameterName : parameterNames) {
			List<List<IQueryParameterType>> paramsWithQualifier = removeByNameAndModifier(parameterName, theQualifier);
			retVal.put(parameterName, paramsWithQualifier);
		}

		return retVal;
	}

	public Map<String, List<List<IQueryParameterType>>> removeByQualifier(@Nonnull TokenParamModifier theModifier) {
		return removeByQualifier(theModifier.getValue());
	}

	public int size() {
		return mySearchParameterMap.size();
	}
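The partition logic in removeByNameAndModifier() can be modelled standalone (a hypothetical `Param` record stands in for IQueryParameterType; everything here is illustrative, not the HAPI types):

```java
import java.util.ArrayList;
import java.util.List;

public class QualifierPartitionDemo {
	// Hypothetical stand-in for IQueryParameterType: a value plus its qualifier.
	record Param(String value, String qualifier) {}

	// Same shape as removeByNameAndModifier(): pull every Or-list out,
	// keep the ones whose qualifier matches, and put the remainder back.
	static List<List<Param>> partitionOut(List<List<Param>> theAndList, String theQualifier) {
		List<List<Param>> matching = new ArrayList<>();
		List<List<Param>> remainder = new ArrayList<>();
		for (List<Param> orList : theAndList) {
			if (!orList.isEmpty() && orList.get(0).qualifier().equals(theQualifier)) {
				matching.add(orList);
			} else {
				remainder.add(orList);
			}
		}
		theAndList.clear();
		theAndList.addAll(remainder);
		return matching;
	}

	public static void main(String[] args) {
		// models code:text=abc,def & code=123
		List<List<Param>> andList = new ArrayList<>();
		andList.add(List.of(new Param("abc", ":text"), new Param("def", ":text")));
		andList.add(List.of(new Param("123", "")));

		List<List<Param>> matches = partitionOut(andList, ":text");
		System.out.println(matches.size() + " matching, " + andList.size() + " remaining"); // 1 matching, 1 remaining
	}
}
```

Note that matching is decided by the first entry of each Or-list, mirroring the real method: parameters in one Or-list share a qualifier because they came from a single `name:modifier=v1,v2` query parameter.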
@ -1,9 +1,19 @@
package ca.uhn.fhir.jpa.searchparam;

import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.model.api.IQueryParameterType;
import ca.uhn.fhir.rest.param.DateRangeParam;
import ca.uhn.fhir.rest.param.TokenOrListParam;
import ca.uhn.fhir.rest.param.TokenParam;
import org.junit.jupiter.api.Test;

import java.util.List;

import static ca.uhn.fhir.rest.param.TokenParamModifier.TEXT;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.nullValue;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.hasSize;
import static org.junit.jupiter.api.Assertions.assertEquals;

class SearchParameterMapTest {

@ -26,4 +36,68 @@ class SearchParameterMapTest {
		map.setLastUpdated(dateRangeParam);
		assertEquals("?_lastUpdated=le2021-05-31", map.toNormalizedQueryString(ourFhirContext));
	}

	@Test
	public void testRemoveByModifier() {
		SearchParameterMap map = new SearchParameterMap();

		TokenOrListParam qualifiedTokenParam = new TokenOrListParam()
			.addOr(new TokenParam("weight-text-1").setModifier(TEXT))
			.addOr(new TokenParam("weight-text-2").setModifier(TEXT));
		TokenParam unqualifiedTokenParam = new TokenParam("weight-no-text");

		map.add("code", qualifiedTokenParam);
		map.add("code", unqualifiedTokenParam);

		List<List<IQueryParameterType>> andList = map.removeByNameAndModifier("code", TEXT);
		assertThat(andList, hasSize(1));
		List<IQueryParameterType> orList = andList.get(0);
		assertThat(orList, hasSize(2));

		List<List<IQueryParameterType>> unqualifiedAnds = map.get("code");
		assertThat(unqualifiedAnds, hasSize(1));
	}

	@Test
	public void testRemoveByNullModifier() {
		SearchParameterMap map = new SearchParameterMap();

		TokenOrListParam unqualifiedTokenParam = new TokenOrListParam()
			.addOr(new TokenParam("http://example.com", "123"))
			.addOr(new TokenParam("http://example.com", "345"));
		TokenParam qualifiedTokenParam = new TokenParam("weight-text").setModifier(TEXT);

		map.add("code", unqualifiedTokenParam);
		map.add("code", qualifiedTokenParam);

		List<List<IQueryParameterType>> andList = map.removeByNameAndModifier("code", (String) null);
		assertThat(andList, hasSize(1));
		List<IQueryParameterType> orList = andList.get(0);
		assertThat(orList, hasSize(2));

		List<List<IQueryParameterType>> qualifiedAnds = map.get("code");
		assertThat(qualifiedAnds, hasSize(1));
	}

	@Test
	public void testRemoveByQualifierRemovesAll() {
		SearchParameterMap map = new SearchParameterMap();

		TokenOrListParam qualifiedTokenParam = new TokenOrListParam()
			.addOr(new TokenParam("weight-text-1").setModifier(TEXT))
			.addOr(new TokenParam("weight-text-2").setModifier(TEXT));

		map.add("code", qualifiedTokenParam);

		List<List<IQueryParameterType>> andList = map.removeByNameAndModifier("code", TEXT);
		assertThat(andList, hasSize(1));
		List<IQueryParameterType> orList = andList.get(0);
		assertThat(orList, hasSize(2));

		List<List<IQueryParameterType>> unqualifiedAnds = map.remove("code");
		assertThat(unqualifiedAnds, is(nullValue()));
	}
}
@ -268,6 +268,15 @@
	private Integer myBundleBatchPoolSize = DEFAULT_BUNDLE_BATCH_POOL_SIZE;
	private Integer myBundleBatchMaxPoolSize = DEFAULT_BUNDLE_BATCH_MAX_POOL_SIZE;

	/**
	 * Activates the new Lucene/Elasticsearch indexing of search parameters.
	 * When active, string, token, and reference parameters will be indexed and
	 * queried within Hibernate Search.
	 *
	 * @since 5.6.0
	 * TODO mb test more with this true
	 */
	private boolean myAdvancedLuceneIndexing = false;

	/**
	 * Constructor

@ -2686,6 +2695,29 @@
		}
	}

	/**
	 * Is Lucene/Hibernate Search indexing enabled beyond _contains and _text?
	 *
	 * @since 5.6.0
	 */
	public boolean isAdvancedLuceneIndexing() {
		return myAdvancedLuceneIndexing;
	}

	/**
	 * Enable or disable Lucene/Hibernate Search indexing beyond _contains and _text.
	 *
	 * String, token, and reference parameters can be indexed in Lucene.
	 * This extends token search to support :text searches, as well as supporting
	 * :contains and :text on string parameters.
	 *
	 * @since 5.6.0
	 */
	public void setAdvancedLuceneIndexing(boolean theAdvancedLuceneIndexing) {
		this.myAdvancedLuceneIndexing = theAdvancedLuceneIndexing;
	}

	public enum IndexEnabledEnum {
		ENABLED,
		DISABLED
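As a usage sketch: the new indexing is opt-in via the setter added above. DaoConfig and setAdvancedLuceneIndexing() come from this diff; any surrounding application wiring is an assumption about the host server, not shown here.

```java
// Sketch: opting in to the new search-parameter indexing (assumes a HAPI FHIR
// JPA server that exposes its DaoConfig bean; the setter is from the diff above).
DaoConfig daoConfig = new DaoConfig();
daoConfig.setAdvancedLuceneIndexing(true); // index string/token/reference SPs in Hibernate Search
```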
@ -69,7 +69,7 @@ class Chef
    recursive true
  end

- # FIXME: support user supplied template
+ # TODO: support user supplied template
  template "#{prefix_dir}/etc/my.cnf" do
    if new_resource.parsed_template_source.nil?
      source "#{new_resource.parsed_version}/my.cnf.erb"