parent/child: Removed the `top_children` query.

parent 9e01dedef5
commit acdd9a5dd9
@@ -397,28 +397,6 @@ QueryBuilder qb = termsQuery("tags", <1>
 <2> values
 <3> how many terms must match at least
 
-[[top-children]]
-=== Top Children Query
-
-See {ref}/query-dsl-top-children-query.html[Top Children Query]
-
-[source,java]
---------------------------------------------------
-QueryBuilder qb = topChildrenQuery(
-        "blog_tag",                     <1>
-        termQuery("tag", "something")   <2>
-    )
-    .score("max")                       <3>
-    .factor(5)                          <4>
-    .incrementalFactor(2);              <5>
---------------------------------------------------
-<1> field
-<2> query
-<3> `max`, `sum` or `avg`
-<4> how many hits are asked for in the first child query run (defaults to 5)
-<5> if not enough parents are found, and there are still more child docs to query, then the child search hits are
-    expanded by multiplying by the incremental_factor (defaults to 2).
-
 [[wildcard]]
 === Wildcard Query
 
@@ -144,4 +144,4 @@ is the same).
 [[limitations]]
 === Limitations
 
-The delete by query does not support the following queries and filters: `has_child`, `has_parent` and `top_children`.
+The delete by query does not support the following queries and filters: `has_child` and `has_parent`.
@@ -513,3 +513,10 @@ Client client = TransportClient.builder().settings(settings).build();
 Log messages are now truncated at 10,000 characters. This can be changed in the
 `logging.yml` configuration file.
 
+[float]
+=== Removed `top_children` query
+
+The `top_children` query has been removed in favour of the `has_child` query. It was not always faster than the
+`has_child` query and it was often inaccurate: when `top_children` was used, the total hit count and any aggregations
+in the same search request were likely to be off.
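For Java API users, a minimal migration sketch is shown below. It reuses the `blog_tag` child type and `tag` field from
the removed Java API example above and maps the `score` parameter of `top_children` onto `scoreType` of `has_child`;
the `factor` and `incremental_factor` parameters have no `has_child` equivalent and are simply dropped. Because
`has_child` resolves every matching parent, no factor tuning is needed and the total hit count is exact.

[source,java]
--------------------------------------------------
// Before (removed): parent hits approximated from a bounded child search.
// QueryBuilder qb = topChildrenQuery("blog_tag", termQuery("tag", "something"))
//         .score("max").factor(5).incrementalFactor(2);

// After: same score semantics, exact hit counts.
QueryBuilder qb = hasChildQuery("blog_tag", termQuery("tag", "something"))
        .scoreType("max");
--------------------------------------------------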
@@ -19,11 +19,6 @@ an example:
 }
 --------------------------------------------------
 
-An important difference with the `top_children` query is that this query
-is always executed in two iterations whereas the `top_children` query
-can be executed in one or more iterations. When using the `has_child`
-query the `total_hits` is always correct.
-
 [float]
 === Scoring capabilities
 
@@ -82,8 +82,6 @@ include::term-query.asciidoc[]
 include::terms-query.asciidoc[]
 
-include::top-children-query.asciidoc[]
-
 include::wildcard-query.asciidoc[]
 
 include::minimum-should-match.asciidoc[]
 
@@ -1,86 +0,0 @@
-[[query-dsl-top-children-query]]
-== Top Children Query
-
-deprecated[1.6.0, Use the `has_child` query instead]
-
-The `top_children` query runs the child query with an estimated hits
-size, and out of the hit docs, aggregates it into parent docs. If there
-aren't enough parent docs matching the requested from/size search
-request, then it is run again with a wider (more hits) search.
-
-The `top_children` query also provides scoring capabilities, with the ability
-to specify `max`, `sum` or `avg` as the score type.
-
-One downside of using `top_children` is that if there are more child
-docs matching the required hits when executing the child query, then the
-`total_hits` result of the search response will be incorrect.
-
-How many hits are asked for in the first child query run is controlled
-using the `factor` parameter (defaults to `5`). For example, when asking
-for 10 parent docs (with `from` set to 0), then the child query will
-execute with 50 hits expected. If not enough parents are found (in our
-example 10), and there are still more child docs to query, then the
-child search hits are expanded by multiplying by the
-`incremental_factor` (defaults to `2`).
-
-The required parameters are the `query` and `type` (the child type to
-execute the query on). Here is an example with all the different parameters,
-including the default values:
-
-[source,js]
---------------------------------------------------
-{
-    "top_children" : {
-        "type": "blog_tag",
-        "query" : {
-            "term" : {
-                "tag" : "something"
-            }
-        },
-        "score" : "max",
-        "factor" : 5,
-        "incremental_factor" : 2
-    }
-}
---------------------------------------------------
-
-[float]
-=== Scope
-
-A `_scope` can be defined on the query, allowing you to run aggregations on the
-same scope name that will work against the child documents. For example:
-
-[source,js]
---------------------------------------------------
-{
-    "top_children" : {
-        "_scope" : "my_scope",
-        "type": "blog_tag",
-        "query" : {
-            "term" : {
-                "tag" : "something"
-            }
-        }
-    }
-}
---------------------------------------------------
-
-[float]
-=== Memory Considerations
-
-In order to support parent-child joins, all of the (string) parent IDs
-must be resident in memory (in the <<index-modules-fielddata,field data cache>>).
-Additionally, every child document is mapped to its parent using (approximately)
-a long value. It is advisable to keep the string parent ID short
-in order to reduce memory usage.
-
-You can check how much memory is being used by the ID cache using the
-<<indices-stats,indices stats>> or <<cluster-nodes-stats,nodes stats>>
-APIs, e.g.:
-
-[source,js]
---------------------------------------------------
-curl -XGET "http://localhost:9200/_stats/id_cache?pretty&human"
--------------------------------------------------- 
-
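To make the factor/incremental-factor mechanics from the removed page concrete, here is a small, self-contained sketch
(illustrative only, not the removed implementation, which appears as `TopChildrenQuery` later in this commit). It
prints the child-hit budget the query would request on each successive round for a search wanting 10 parent documents
with the default `factor` of 5 and `incremental_factor` of 2.

[source,java]
--------------------------------------------------
// Illustrative sketch only: the per-round child-hit budget described above.
public class TopChildrenExpansionSketch {
    public static void main(String[] args) {
        int requestedParents = 10;        // from + size of the parent search
        int factor = 5;                   // default factor
        int incrementalFactor = 2;        // default incremental_factor
        int numChildDocs = requestedParents * factor;   // first round: 10 * 5 = 50
        for (int round = 1; round <= 3; round++) {
            System.out.println("round " + round + ": request " + numChildDocs + " child hits");
            numChildDocs *= incrementalFactor;          // widen the child search if too few parents resolved
        }
        // Prints 50, 100, 200. The real query stopped as soon as enough parent docs
        // were resolved or the child query ran out of hits.
    }
}
--------------------------------------------------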
@@ -491,7 +491,7 @@ the time the percolate API needs to run can be decreased.
 === Important Notes
 
 Because the percolator API is processing one document at a time, it doesn't support queries and filters that run
-against child documents such as `has_child`, `has_parent` and `top_children`.
+against child documents such as `has_child` and `has_parent`.
 
 The `wildcard` and `regexp` queries natively use a lot of memory and because the percolator keeps the queries in memory
 this can easily take up the available memory in the heap space. If possible try to use a `prefix` query or ngramming to
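As a concrete illustration of that advice, a `prefix` query can often replace a leading-anchored `wildcard` when
registering percolator queries. The field name and values below are hypothetical, using the static helpers from
`QueryBuilders` as elsewhere in these docs.

[source,java]
--------------------------------------------------
// Hypothetical field/values, shown only to illustrate the advice above.
QueryBuilder expensive = wildcardQuery("body", "error*"); // wildcard/regexp are memory hungry in the percolator
QueryBuilder cheaper   = prefixQuery("body", "error");    // same leading-anchored match, far cheaper
--------------------------------------------------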
@@ -423,18 +423,6 @@ public abstract class QueryBuilders {
         return new MoreLikeThisQueryBuilder();
     }
 
-    /**
-     * Constructs a new scoring child query, with the child type and the query to run on the child documents. The
-     * results of this query are the parent docs that those child docs matched.
-     *
-     * @param type  The child type.
-     * @param query The query.
-     */
-    @Deprecated
-    public static TopChildrenQueryBuilder topChildrenQuery(String type, QueryBuilder query) {
-        return new TopChildrenQueryBuilder(type, query);
-    }
-
     /**
      * Constructs a new NON scoring child query, with the child type and the query to run on the child documents. The
      * results of this query are the parent docs that those child docs matched.
@ -1,117 +0,0 @@
|
|||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
package org.elasticsearch.index.query;
|
||||
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
/**
|
||||
*
|
||||
*/
|
||||
@Deprecated
|
||||
public class TopChildrenQueryBuilder extends BaseQueryBuilder implements BoostableQueryBuilder<TopChildrenQueryBuilder> {
|
||||
|
||||
private final QueryBuilder queryBuilder;
|
||||
|
||||
private String childType;
|
||||
|
||||
private String score;
|
||||
|
||||
private float boost = 1.0f;
|
||||
|
||||
private int factor = -1;
|
||||
|
||||
private int incrementalFactor = -1;
|
||||
|
||||
private String queryName;
|
||||
|
||||
public TopChildrenQueryBuilder(String type, QueryBuilder queryBuilder) {
|
||||
this.childType = type;
|
||||
this.queryBuilder = queryBuilder;
|
||||
}
|
||||
|
||||
/**
|
||||
* How to compute the score. Possible values are: <tt>max</tt>, <tt>sum</tt>, or <tt>avg</tt>. Defaults
|
||||
* to <tt>max</tt>.
|
||||
*/
|
||||
public TopChildrenQueryBuilder score(String score) {
|
||||
this.score = score;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* Controls the multiplication factor of the initial hits required from the child query over the main query request.
|
||||
* Defaults to 5.
|
||||
*/
|
||||
public TopChildrenQueryBuilder factor(int factor) {
|
||||
this.factor = factor;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the incremental factor when the query needs to be re-run in order to fetch more results. Defaults to 2.
|
||||
*/
|
||||
public TopChildrenQueryBuilder incrementalFactor(int incrementalFactor) {
|
||||
this.incrementalFactor = incrementalFactor;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the boost for this query. Documents matching this query will (in addition to the normal
|
||||
* weightings) have their score multiplied by the boost provided.
|
||||
*/
|
||||
@Override
|
||||
public TopChildrenQueryBuilder boost(float boost) {
|
||||
this.boost = boost;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the query name for the filter that can be used when searching for matched_filters per hit.
|
||||
*/
|
||||
public TopChildrenQueryBuilder queryName(String queryName) {
|
||||
this.queryName = queryName;
|
||||
return this;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void doXContent(XContentBuilder builder, Params params) throws IOException {
|
||||
builder.startObject(TopChildrenQueryParser.NAME);
|
||||
builder.field("query");
|
||||
queryBuilder.toXContent(builder, params);
|
||||
builder.field("type", childType);
|
||||
if (score != null) {
|
||||
builder.field("score", score);
|
||||
}
|
||||
if (boost != -1) {
|
||||
builder.field("boost", boost);
|
||||
}
|
||||
if (factor != -1) {
|
||||
builder.field("factor", factor);
|
||||
}
|
||||
if (incrementalFactor != -1) {
|
||||
builder.field("incremental_factor", incrementalFactor);
|
||||
}
|
||||
if (queryName != null) {
|
||||
builder.field("_name", queryName);
|
||||
}
|
||||
builder.endObject();
|
||||
}
|
||||
}
|
|
@ -1,140 +0,0 @@
|
|||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
package org.elasticsearch.index.query;
|
||||
|
||||
import org.apache.lucene.search.Query;
|
||||
import org.apache.lucene.search.join.BitDocIdSetFilter;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.lucene.search.Queries;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;
|
||||
import org.elasticsearch.index.mapper.DocumentMapper;
|
||||
import org.elasticsearch.index.mapper.internal.ParentFieldMapper;
|
||||
import org.elasticsearch.index.query.support.XContentStructure;
|
||||
import org.elasticsearch.index.search.child.ScoreType;
|
||||
import org.elasticsearch.index.search.child.TopChildrenQuery;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
/**
|
||||
*
|
||||
*/
|
||||
@Deprecated
|
||||
public class TopChildrenQueryParser implements QueryParser {
|
||||
|
||||
public static final String NAME = "top_children";
|
||||
|
||||
@Inject
|
||||
public TopChildrenQueryParser() {
|
||||
}
|
||||
|
||||
@Override
|
||||
public String[] names() {
|
||||
return new String[]{NAME, Strings.toCamelCase(NAME)};
|
||||
}
|
||||
|
||||
@Override
|
||||
public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
|
||||
XContentParser parser = parseContext.parser();
|
||||
|
||||
boolean queryFound = false;
|
||||
float boost = 1.0f;
|
||||
String childType = null;
|
||||
ScoreType scoreType = ScoreType.MAX;
|
||||
int factor = 5;
|
||||
int incrementalFactor = 2;
|
||||
String queryName = null;
|
||||
|
||||
String currentFieldName = null;
|
||||
XContentParser.Token token;
|
||||
XContentStructure.InnerQuery iq = null;
|
||||
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
|
||||
if (token == XContentParser.Token.FIELD_NAME) {
|
||||
currentFieldName = parser.currentName();
|
||||
} else if (token == XContentParser.Token.START_OBJECT) {
|
||||
// Usually, the query would be parsed here, but the child
|
||||
// type may not have been extracted yet, so use the
|
||||
// XContentStructure.<type> facade to parse if available,
|
||||
// or delay parsing if not.
|
||||
if ("query".equals(currentFieldName)) {
|
||||
iq = new XContentStructure.InnerQuery(parseContext, childType == null ? null : new String[] {childType});
|
||||
queryFound = true;
|
||||
} else {
|
||||
throw new QueryParsingException(parseContext, "[top_children] query does not support [" + currentFieldName + "]");
|
||||
}
|
||||
} else if (token.isValue()) {
|
||||
if ("type".equals(currentFieldName)) {
|
||||
childType = parser.text();
|
||||
} else if ("score".equals(currentFieldName)) {
|
||||
scoreType = ScoreType.fromString(parser.text());
|
||||
} else if ("score_mode".equals(currentFieldName) || "scoreMode".equals(currentFieldName)) {
|
||||
scoreType = ScoreType.fromString(parser.text());
|
||||
} else if ("boost".equals(currentFieldName)) {
|
||||
boost = parser.floatValue();
|
||||
} else if ("factor".equals(currentFieldName)) {
|
||||
factor = parser.intValue();
|
||||
} else if ("incremental_factor".equals(currentFieldName) || "incrementalFactor".equals(currentFieldName)) {
|
||||
incrementalFactor = parser.intValue();
|
||||
} else if ("_name".equals(currentFieldName)) {
|
||||
queryName = parser.text();
|
||||
} else {
|
||||
throw new QueryParsingException(parseContext, "[top_children] query does not support [" + currentFieldName + "]");
|
||||
}
|
||||
}
|
||||
}
|
||||
if (!queryFound) {
|
||||
throw new QueryParsingException(parseContext, "[top_children] requires 'query' field");
|
||||
}
|
||||
if (childType == null) {
|
||||
throw new QueryParsingException(parseContext, "[top_children] requires 'type' field");
|
||||
}
|
||||
|
||||
Query innerQuery = iq.asQuery(childType);
|
||||
|
||||
if (innerQuery == null) {
|
||||
return null;
|
||||
}
|
||||
|
||||
DocumentMapper childDocMapper = parseContext.mapperService().documentMapper(childType);
|
||||
if (childDocMapper == null) {
|
||||
throw new QueryParsingException(parseContext, "No mapping for for type [" + childType + "]");
|
||||
}
|
||||
ParentFieldMapper parentFieldMapper = childDocMapper.parentFieldMapper();
|
||||
if (!parentFieldMapper.active()) {
|
||||
throw new QueryParsingException(parseContext, "Type [" + childType + "] does not have parent mapping");
|
||||
}
|
||||
String parentType = childDocMapper.parentFieldMapper().type();
|
||||
|
||||
BitDocIdSetFilter nonNestedDocsFilter = null;
|
||||
if (childDocMapper.hasNestedObjects()) {
|
||||
nonNestedDocsFilter = parseContext.bitsetFilter(Queries.newNonNestedFilter());
|
||||
}
|
||||
|
||||
innerQuery.setBoost(boost);
|
||||
// wrap the query with type query
|
||||
innerQuery = Queries.filtered(innerQuery, childDocMapper.typeFilter());
|
||||
ParentChildIndexFieldData parentChildIndexFieldData = parseContext.getForField(parentFieldMapper);
|
||||
TopChildrenQuery query = new TopChildrenQuery(parentChildIndexFieldData, innerQuery, childType, parentType, scoreType, factor, incrementalFactor, nonNestedDocsFilter);
|
||||
if (queryName != null) {
|
||||
parseContext.addNamedQuery(queryName, query);
|
||||
}
|
||||
return query;
|
||||
}
|
||||
}
|
|
@ -1,406 +0,0 @@
|
|||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
package org.elasticsearch.index.search.child;
|
||||
|
||||
import com.carrotsearch.hppc.IntObjectOpenHashMap;
|
||||
import com.carrotsearch.hppc.ObjectObjectOpenHashMap;
|
||||
|
||||
import org.apache.lucene.index.*;
|
||||
import org.apache.lucene.search.*;
|
||||
import org.apache.lucene.util.*;
|
||||
import org.elasticsearch.ElasticsearchException;
|
||||
import org.elasticsearch.common.lease.Releasable;
|
||||
import org.elasticsearch.common.lucene.IndexCacheableQuery;
|
||||
import org.elasticsearch.common.lucene.search.EmptyScorer;
|
||||
import org.apache.lucene.search.join.BitDocIdSetFilter;
|
||||
import org.elasticsearch.index.fielddata.IndexParentChildFieldData;
|
||||
import org.elasticsearch.index.mapper.Uid;
|
||||
import org.elasticsearch.index.mapper.internal.UidFieldMapper;
|
||||
import org.elasticsearch.search.internal.SearchContext;
|
||||
import org.elasticsearch.search.internal.SearchContext.Lifetime;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Arrays;
|
||||
import java.util.Comparator;
|
||||
import java.util.Set;
|
||||
|
||||
/**
|
||||
* A query that evaluates the top matching child documents (based on the score) in order to determine what
|
||||
* parent documents to return. This query tries to find just enough child documents to return the requested
|
||||
* number of parent documents (or less if no other child document can be found).
|
||||
* <p/>
|
||||
* This query executes several internal searches. In the first round it tries to find ((request offset + requested size) * factor)
|
||||
* child documents. The resulting child documents are mapped into their parent documents including the aggregated child scores.
|
||||
* If not enough parent documents could be resolved then a subsequent round is executed, requesting previous requested
|
||||
* documents times incremental_factor. This logic repeats until enough parent documents are resolved or until no more
|
||||
* child documents are available.
|
||||
* <p/>
|
||||
* This query is most of the times faster than the {@link ChildrenQuery}. Usually enough parent documents can be returned
|
||||
* in the first child document query round.
|
||||
*/
|
||||
@Deprecated
|
||||
public class TopChildrenQuery extends IndexCacheableQuery {
|
||||
|
||||
private static final ParentDocComparator PARENT_DOC_COMP = new ParentDocComparator();
|
||||
|
||||
private final IndexParentChildFieldData parentChildIndexFieldData;
|
||||
private final String parentType;
|
||||
private final String childType;
|
||||
private final ScoreType scoreType;
|
||||
private final int factor;
|
||||
private final int incrementalFactor;
|
||||
private Query childQuery;
|
||||
private final BitDocIdSetFilter nonNestedDocsFilter;
|
||||
|
||||
// Note, the query is expected to already be filtered to only child type docs
|
||||
public TopChildrenQuery(IndexParentChildFieldData parentChildIndexFieldData, Query childQuery, String childType, String parentType, ScoreType scoreType, int factor, int incrementalFactor, BitDocIdSetFilter nonNestedDocsFilter) {
|
||||
this.parentChildIndexFieldData = parentChildIndexFieldData;
|
||||
this.childQuery = childQuery;
|
||||
this.childType = childType;
|
||||
this.parentType = parentType;
|
||||
this.scoreType = scoreType;
|
||||
this.factor = factor;
|
||||
this.incrementalFactor = incrementalFactor;
|
||||
this.nonNestedDocsFilter = nonNestedDocsFilter;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Query rewrite(IndexReader reader) throws IOException {
|
||||
Query childRewritten = childQuery.rewrite(reader);
|
||||
if (childRewritten != childQuery) {
|
||||
Query rewritten = new TopChildrenQuery(parentChildIndexFieldData, childRewritten, childType, parentType, scoreType, factor, incrementalFactor, nonNestedDocsFilter);
|
||||
rewritten.setBoost(getBoost());
|
||||
return rewritten;
|
||||
}
|
||||
return super.rewrite(reader);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Weight doCreateWeight(IndexSearcher searcher, boolean needsScores) throws IOException {
|
||||
ObjectObjectOpenHashMap<Object, ParentDoc[]> parentDocs = new ObjectObjectOpenHashMap<>();
|
||||
SearchContext searchContext = SearchContext.current();
|
||||
|
||||
int parentHitsResolved;
|
||||
int requestedDocs = (searchContext.from() + searchContext.size());
|
||||
if (requestedDocs <= 0) {
|
||||
requestedDocs = 1;
|
||||
}
|
||||
int numChildDocs = requestedDocs * factor;
|
||||
|
||||
IndexSearcher indexSearcher = new IndexSearcher(searcher.getIndexReader());
|
||||
indexSearcher.setSimilarity(searcher.getSimilarity());
|
||||
indexSearcher.setQueryCache(null);
|
||||
while (true) {
|
||||
parentDocs.clear();
|
||||
TopDocs topChildDocs = indexSearcher.search(childQuery, numChildDocs);
|
||||
try {
|
||||
parentHitsResolved = resolveParentDocuments(topChildDocs, searchContext, parentDocs);
|
||||
} catch (Exception e) {
|
||||
throw new IOException(e);
|
||||
}
|
||||
|
||||
// check if we found enough docs, if so, break
|
||||
if (parentHitsResolved >= requestedDocs) {
|
||||
break;
|
||||
}
|
||||
// if we did not find enough docs, check if it make sense to search further
|
||||
if (topChildDocs.totalHits <= numChildDocs) {
|
||||
break;
|
||||
}
|
||||
// if not, update numDocs, and search again
|
||||
numChildDocs *= incrementalFactor;
|
||||
if (numChildDocs > topChildDocs.totalHits) {
|
||||
numChildDocs = topChildDocs.totalHits;
|
||||
}
|
||||
}
|
||||
|
||||
ParentWeight parentWeight = new ParentWeight(this, childQuery.createWeight(searcher, needsScores), parentDocs);
|
||||
searchContext.addReleasable(parentWeight, Lifetime.COLLECTION);
|
||||
return parentWeight;
|
||||
}
|
||||
|
||||
int resolveParentDocuments(TopDocs topDocs, SearchContext context, ObjectObjectOpenHashMap<Object, ParentDoc[]> parentDocs) throws Exception {
|
||||
int parentHitsResolved = 0;
|
||||
ObjectObjectOpenHashMap<Object, IntObjectOpenHashMap<ParentDoc>> parentDocsPerReader = new ObjectObjectOpenHashMap<>(context.searcher().getIndexReader().leaves().size());
|
||||
child_hits: for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
|
||||
int readerIndex = ReaderUtil.subIndex(scoreDoc.doc, context.searcher().getIndexReader().leaves());
|
||||
LeafReaderContext subContext = context.searcher().getIndexReader().leaves().get(readerIndex);
|
||||
SortedDocValues parentValues = parentChildIndexFieldData.load(subContext).getOrdinalsValues(parentType);
|
||||
int subDoc = scoreDoc.doc - subContext.docBase;
|
||||
|
||||
// find the parent id
|
||||
BytesRef parentId = parentValues.get(subDoc);
|
||||
if (parentId == null) {
|
||||
// no parent found
|
||||
continue;
|
||||
}
|
||||
// now go over and find the parent doc Id and reader tuple
|
||||
for (LeafReaderContext atomicReaderContext : context.searcher().getIndexReader().leaves()) {
|
||||
LeafReader indexReader = atomicReaderContext.reader();
|
||||
BitSet nonNestedDocs = null;
|
||||
if (nonNestedDocsFilter != null) {
|
||||
BitDocIdSet nonNestedDocIdSet = nonNestedDocsFilter.getDocIdSet(atomicReaderContext);
|
||||
if (nonNestedDocIdSet != null) {
|
||||
nonNestedDocs = nonNestedDocIdSet.bits();
|
||||
}
|
||||
}
|
||||
|
||||
Terms terms = indexReader.terms(UidFieldMapper.NAME);
|
||||
if (terms == null) {
|
||||
continue;
|
||||
}
|
||||
TermsEnum termsEnum = terms.iterator();
|
||||
if (!termsEnum.seekExact(Uid.createUidAsBytes(parentType, parentId))) {
|
||||
continue;
|
||||
}
|
||||
PostingsEnum docsEnum = termsEnum.postings(indexReader.getLiveDocs(), null, PostingsEnum.NONE);
|
||||
int parentDocId = docsEnum.nextDoc();
|
||||
if (nonNestedDocs != null && !nonNestedDocs.get(parentDocId)) {
|
||||
parentDocId = nonNestedDocs.nextSetBit(parentDocId);
|
||||
}
|
||||
if (parentDocId != DocIdSetIterator.NO_MORE_DOCS) {
|
||||
// we found a match, add it and break
|
||||
IntObjectOpenHashMap<ParentDoc> readerParentDocs = parentDocsPerReader.get(indexReader.getCoreCacheKey());
|
||||
if (readerParentDocs == null) {
|
||||
//The number of docs in the reader and in the query both upper bound the size of parentDocsPerReader
|
||||
int mapSize = Math.min(indexReader.maxDoc(), context.from() + context.size());
|
||||
readerParentDocs = new IntObjectOpenHashMap<>(mapSize);
|
||||
parentDocsPerReader.put(indexReader.getCoreCacheKey(), readerParentDocs);
|
||||
}
|
||||
ParentDoc parentDoc = readerParentDocs.get(parentDocId);
|
||||
if (parentDoc == null) {
|
||||
parentHitsResolved++; // we have a hit on a parent
|
||||
parentDoc = new ParentDoc();
|
||||
parentDoc.docId = parentDocId;
|
||||
parentDoc.count = 1;
|
||||
parentDoc.maxScore = scoreDoc.score;
|
||||
parentDoc.minScore = scoreDoc.score;
|
||||
parentDoc.sumScores = scoreDoc.score;
|
||||
readerParentDocs.put(parentDocId, parentDoc);
|
||||
} else {
|
||||
parentDoc.count++;
|
||||
parentDoc.sumScores += scoreDoc.score;
|
||||
if (scoreDoc.score < parentDoc.minScore) {
|
||||
parentDoc.minScore = scoreDoc.score;
|
||||
}
|
||||
if (scoreDoc.score > parentDoc.maxScore) {
|
||||
parentDoc.maxScore = scoreDoc.score;
|
||||
}
|
||||
}
|
||||
continue child_hits;
|
||||
}
|
||||
}
|
||||
}
|
||||
boolean[] states = parentDocsPerReader.allocated;
|
||||
Object[] keys = parentDocsPerReader.keys;
|
||||
Object[] values = parentDocsPerReader.values;
|
||||
for (int i = 0; i < states.length; i++) {
|
||||
if (states[i]) {
|
||||
IntObjectOpenHashMap<ParentDoc> value = (IntObjectOpenHashMap<ParentDoc>) values[i];
|
||||
ParentDoc[] _parentDocs = value.values().toArray(ParentDoc.class);
|
||||
Arrays.sort(_parentDocs, PARENT_DOC_COMP);
|
||||
parentDocs.put(keys[i], _parentDocs);
|
||||
}
|
||||
}
|
||||
return parentHitsResolved;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(Object obj) {
|
||||
if (this == obj) {
|
||||
return true;
|
||||
}
|
||||
if (super.equals(obj) == false) {
|
||||
return false;
|
||||
}
|
||||
|
||||
TopChildrenQuery that = (TopChildrenQuery) obj;
|
||||
if (!childQuery.equals(that.childQuery)) {
|
||||
return false;
|
||||
}
|
||||
if (!childType.equals(that.childType)) {
|
||||
return false;
|
||||
}
|
||||
if (incrementalFactor != that.incrementalFactor) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
int result = super.hashCode();
|
||||
result = 31 * result + childQuery.hashCode();
|
||||
result = 31 * result + parentType.hashCode();
|
||||
result = 31 * result + incrementalFactor;
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString(String field) {
|
||||
StringBuilder sb = new StringBuilder();
|
||||
sb.append("score_child[").append(childType).append("/").append(parentType).append("](").append(childQuery.toString(field)).append(')');
|
||||
sb.append(ToStringUtils.boost(getBoost()));
|
||||
return sb.toString();
|
||||
}
|
||||
|
||||
private class ParentWeight extends Weight implements Releasable {
|
||||
|
||||
private final Weight queryWeight;
|
||||
private final ObjectObjectOpenHashMap<Object, ParentDoc[]> parentDocs;
|
||||
|
||||
public ParentWeight(Query query, Weight queryWeight, ObjectObjectOpenHashMap<Object, ParentDoc[]> parentDocs) throws IOException {
|
||||
super(query);
|
||||
this.queryWeight = queryWeight;
|
||||
this.parentDocs = parentDocs;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void extractTerms(Set<Term> terms) {
|
||||
}
|
||||
|
||||
@Override
|
||||
public float getValueForNormalization() throws IOException {
|
||||
float sum = queryWeight.getValueForNormalization();
|
||||
sum *= getBoost() * getBoost();
|
||||
return sum;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void normalize(float norm, float topLevelBoost) {
|
||||
// Nothing to normalize
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() {
|
||||
}
|
||||
|
||||
@Override
|
||||
public Scorer scorer(LeafReaderContext context, Bits acceptDocs) throws IOException {
|
||||
ParentDoc[] readerParentDocs = parentDocs.get(context.reader().getCoreCacheKey());
|
||||
// We ignore the needsScores parameter here because there isn't really anything that we
|
||||
// can improve by ignoring scores. Actually this query does not really make sense
|
||||
// with needsScores=false...
|
||||
if (readerParentDocs != null) {
|
||||
if (scoreType == ScoreType.MIN) {
|
||||
return new ParentScorer(this, readerParentDocs) {
|
||||
@Override
|
||||
public float score() throws IOException {
|
||||
assert doc.docId >= 0 && doc.docId != NO_MORE_DOCS;
|
||||
return doc.minScore;
|
||||
}
|
||||
};
|
||||
} else if (scoreType == ScoreType.MAX) {
|
||||
return new ParentScorer(this, readerParentDocs) {
|
||||
@Override
|
||||
public float score() throws IOException {
|
||||
assert doc.docId >= 0 && doc.docId != NO_MORE_DOCS;
|
||||
return doc.maxScore;
|
||||
}
|
||||
};
|
||||
} else if (scoreType == ScoreType.AVG) {
|
||||
return new ParentScorer(this, readerParentDocs) {
|
||||
@Override
|
||||
public float score() throws IOException {
|
||||
assert doc.docId >= 0 && doc.docId != NO_MORE_DOCS;
|
||||
return doc.sumScores / doc.count;
|
||||
}
|
||||
};
|
||||
} else if (scoreType == ScoreType.SUM) {
|
||||
return new ParentScorer(this, readerParentDocs) {
|
||||
@Override
|
||||
public float score() throws IOException {
|
||||
assert doc.docId >= 0 && doc.docId != NO_MORE_DOCS;
|
||||
return doc.sumScores;
|
||||
}
|
||||
|
||||
};
|
||||
}
|
||||
throw new IllegalStateException("No support for score type [" + scoreType + "]");
|
||||
}
|
||||
return new EmptyScorer(this);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Explanation explain(LeafReaderContext context, int doc) throws IOException {
|
||||
return Explanation.match(getBoost(), "not implemented yet...");
|
||||
}
|
||||
}
|
||||
|
||||
private static abstract class ParentScorer extends Scorer {
|
||||
|
||||
private final ParentDoc spare = new ParentDoc();
|
||||
protected final ParentDoc[] docs;
|
||||
protected ParentDoc doc = spare;
|
||||
private int index = -1;
|
||||
|
||||
ParentScorer(ParentWeight weight, ParentDoc[] docs) throws IOException {
|
||||
super(weight);
|
||||
this.docs = docs;
|
||||
spare.docId = -1;
|
||||
spare.count = -1;
|
||||
}
|
||||
|
||||
@Override
|
||||
public final int docID() {
|
||||
return doc.docId;
|
||||
}
|
||||
|
||||
@Override
|
||||
public final int advance(int target) throws IOException {
|
||||
return slowAdvance(target);
|
||||
}
|
||||
|
||||
@Override
|
||||
public final int nextDoc() throws IOException {
|
||||
if (++index >= docs.length) {
|
||||
doc = spare;
|
||||
doc.count = 0;
|
||||
return (doc.docId = NO_MORE_DOCS);
|
||||
}
|
||||
return (doc = docs[index]).docId;
|
||||
}
|
||||
|
||||
@Override
|
||||
public final int freq() throws IOException {
|
||||
return doc.count; // The number of matches in the child doc, which is propagated to parent
|
||||
}
|
||||
|
||||
@Override
|
||||
public final long cost() {
|
||||
return docs.length;
|
||||
}
|
||||
}
|
||||
|
||||
private static class ParentDocComparator implements Comparator<ParentDoc> {
|
||||
@Override
|
||||
public int compare(ParentDoc o1, ParentDoc o2) {
|
||||
return o1.docId - o2.docId;
|
||||
}
|
||||
}
|
||||
|
||||
private static class ParentDoc {
|
||||
public int docId;
|
||||
public int count;
|
||||
public float minScore = Float.NaN;
|
||||
public float maxScore = Float.NaN;
|
||||
public float sumScores = 0;
|
||||
}
|
||||
|
||||
}
|
|
@@ -59,7 +59,6 @@ public class IndicesQueriesModule extends AbstractModule {
         qpBinders.addBinding().to(NestedQueryParser.class).asEagerSingleton();
         qpBinders.addBinding().to(HasChildQueryParser.class).asEagerSingleton();
         qpBinders.addBinding().to(HasParentQueryParser.class).asEagerSingleton();
-        qpBinders.addBinding().to(TopChildrenQueryParser.class).asEagerSingleton();
         qpBinders.addBinding().to(DisMaxQueryParser.class).asEagerSingleton();
         qpBinders.addBinding().to(IdsQueryParser.class).asEagerSingleton();
         qpBinders.addBinding().to(MatchAllQueryParser.class).asEagerSingleton();
@@ -42,7 +42,6 @@ import static org.elasticsearch.index.query.QueryBuilders.hasParentQuery;
 import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;
 import static org.elasticsearch.index.query.QueryBuilders.matchQuery;
 import static org.elasticsearch.index.query.QueryBuilders.termQuery;
-import static org.elasticsearch.index.query.QueryBuilders.topChildrenQuery;
 import static org.elasticsearch.node.NodeBuilder.nodeBuilder;
 
 /**
@ -257,37 +256,6 @@ public class ChildSearchBenchmark {
|
|||
totalQueryTime += searchResponse.getTookInMillis();
|
||||
}
|
||||
System.out.println("--> has_parent filter with match_all parent query, Query Avg: " + (totalQueryTime / QUERY_COUNT) + "ms");
|
||||
System.out.println("--> Running top_children query");
|
||||
// run parent child score query
|
||||
for (int j = 0; j < QUERY_WARMUP; j++) {
|
||||
client.prepareSearch(indexName).setQuery(topChildrenQuery("child", termQuery("field2", parentChildIndexGenerator.getQueryValue()))).execute().actionGet();
|
||||
}
|
||||
|
||||
totalQueryTime = 0;
|
||||
for (int j = 0; j < QUERY_COUNT; j++) {
|
||||
SearchResponse searchResponse = client.prepareSearch(indexName).setQuery(topChildrenQuery("child", termQuery("field2", parentChildIndexGenerator.getQueryValue()))).execute().actionGet();
|
||||
if (j % 10 == 0) {
|
||||
System.out.println("--> hits [" + j + "], got [" + searchResponse.getHits().totalHits() + "]");
|
||||
}
|
||||
totalQueryTime += searchResponse.getTookInMillis();
|
||||
}
|
||||
System.out.println("--> top_children Query Avg: " + (totalQueryTime / QUERY_COUNT) + "ms");
|
||||
|
||||
System.out.println("--> Running top_children query, with match_all as child query");
|
||||
// run parent child score query
|
||||
for (int j = 0; j < QUERY_WARMUP; j++) {
|
||||
client.prepareSearch(indexName).setQuery(topChildrenQuery("child", matchAllQuery())).execute().actionGet();
|
||||
}
|
||||
|
||||
totalQueryTime = 0;
|
||||
for (int j = 0; j < QUERY_COUNT; j++) {
|
||||
SearchResponse searchResponse = client.prepareSearch(indexName).setQuery(topChildrenQuery("child", matchAllQuery())).execute().actionGet();
|
||||
if (j % 10 == 0) {
|
||||
System.out.println("--> hits [" + j + "], got [" + searchResponse.getHits().totalHits() + "]");
|
||||
}
|
||||
totalQueryTime += searchResponse.getTookInMillis();
|
||||
}
|
||||
System.out.println("--> top_children, with match_all Query Avg: " + (totalQueryTime / QUERY_COUNT) + "ms");
|
||||
|
||||
statsResponse = client.admin().cluster().prepareNodesStats()
|
||||
.setJvm(true).setIndices(true).execute().actionGet();
|
||||
|
|
|
@ -1,63 +0,0 @@
|
|||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.index.search.child;
|
||||
|
||||
import org.apache.lucene.index.Term;
|
||||
import org.apache.lucene.search.Query;
|
||||
import org.apache.lucene.search.QueryUtils;
|
||||
import org.apache.lucene.search.TermQuery;
|
||||
import org.elasticsearch.common.lease.Releasables;
|
||||
import org.elasticsearch.common.lucene.search.Queries;
|
||||
import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;
|
||||
import org.elasticsearch.index.mapper.internal.ParentFieldMapper;
|
||||
import org.elasticsearch.search.internal.SearchContext;
|
||||
import org.junit.AfterClass;
|
||||
import org.junit.BeforeClass;
|
||||
import org.junit.Test;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
/**
|
||||
*/
|
||||
public class TopChildrenQueryTests extends AbstractChildTests {
|
||||
|
||||
@BeforeClass
|
||||
public static void before() throws IOException {
|
||||
SearchContext.setCurrent(ChildrenConstantScoreQueryTests.createSearchContext("test", "parent", "child"));
|
||||
}
|
||||
|
||||
@AfterClass
|
||||
public static void after() throws IOException {
|
||||
SearchContext current = SearchContext.current();
|
||||
SearchContext.removeCurrent();
|
||||
Releasables.close(current);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testBasicQuerySanities() {
|
||||
Query childQuery = new TermQuery(new Term("field", "value"));
|
||||
ScoreType scoreType = ScoreType.values()[random().nextInt(ScoreType.values().length)];
|
||||
ParentFieldMapper parentFieldMapper = SearchContext.current().mapperService().documentMapper("child").parentFieldMapper();
|
||||
ParentChildIndexFieldData parentChildIndexFieldData = SearchContext.current().fieldData().getForField(parentFieldMapper);
|
||||
Query query = new TopChildrenQuery(parentChildIndexFieldData, childQuery, "child", "parent", scoreType, 1, 1, wrapWithBitSetFilter(Queries.newNonNestedFilter()));
|
||||
QueryUtils.check(query);
|
||||
}
|
||||
|
||||
}
|
|
@@ -85,7 +85,6 @@ import static org.elasticsearch.index.query.QueryBuilders.prefixQuery;
 import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;
 import static org.elasticsearch.index.query.QueryBuilders.termQuery;
 import static org.elasticsearch.index.query.QueryBuilders.termsQuery;
-import static org.elasticsearch.index.query.QueryBuilders.topChildrenQuery;
 import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.factorFunction;
 import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.scriptFunction;
 import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
@ -253,23 +252,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
assertThat(searchResponse.getHits().getAt(1).id(), anyOf(equalTo("c1"), equalTo("c2")));
|
||||
assertThat(searchResponse.getHits().getAt(1).field("_parent").value().toString(), equalTo("p1"));
|
||||
|
||||
// TOP CHILDREN QUERY
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "yellow"))).execute()
|
||||
.actionGet();
|
||||
assertHitCount(searchResponse, 1l);
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p1"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "blue")))
|
||||
.get();
|
||||
assertHitCount(searchResponse, 1l);
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p2"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "red"))).execute()
|
||||
.actionGet();
|
||||
assertHitCount(searchResponse, 2l);
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), anyOf(equalTo("p2"), equalTo("p1")));
|
||||
assertThat(searchResponse.getHits().getAt(1).id(), anyOf(equalTo("p2"), equalTo("p1")));
|
||||
|
||||
// HAS CHILD
|
||||
searchResponse = client().prepareSearch("test").setQuery(randomHasChild("child", "c_field", "yellow"))
|
||||
.get();
|
||||
|
@ -414,10 +396,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
for (int i = 1; i <= 10; i++) {
|
||||
logger.info("Round {}", i);
|
||||
SearchResponse searchResponse = client().prepareSearch("test")
|
||||
.setQuery(constantScoreQuery(topChildrenQuery("child", matchAllQuery()))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(constantScoreQuery(hasChildQuery("child", matchAllQuery()).scoreType("max")))
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
|
@ -500,31 +478,9 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
client().admin().indices().prepareFlush().get();
|
||||
refresh();
|
||||
|
||||
// TOP CHILDREN QUERY
|
||||
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "yellow")))
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p1"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "blue"))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p2"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "red"))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), anyOf(equalTo("p2"), equalTo("p1")));
|
||||
assertThat(searchResponse.getHits().getAt(1).id(), anyOf(equalTo("p2"), equalTo("p1")));
|
||||
|
||||
// HAS CHILD QUERY
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(hasChildQuery("child", termQuery("c_field", "yellow"))).execute()
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setQuery(hasChildQuery("child", termQuery("c_field", "yellow"))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
|
@ -583,7 +539,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
|
||||
SearchResponse searchResponse = client()
|
||||
.prepareSearch("test")
|
||||
.setQuery(topChildrenQuery("child", boolQuery().should(termQuery("c_field", "red")).should(termQuery("c_field", "yellow"))))
|
||||
.setQuery(hasChildQuery("child", boolQuery().should(termQuery("c_field", "red")).should(termQuery("c_field", "yellow"))))
|
||||
.addAggregation(AggregationBuilders.global("global").subAggregation(
|
||||
AggregationBuilders.filter("filter").filter(boolQuery().should(termQuery("c_field", "red")).should(termQuery("c_field", "yellow"))).subAggregation(
|
||||
AggregationBuilders.terms("facet1").field("c_field")))).get();
|
||||
|
@ -618,16 +574,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
|
||||
refresh();
|
||||
|
||||
// TOP CHILDREN QUERY
|
||||
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "yellow")))
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p1"));
|
||||
assertThat(searchResponse.getHits().getAt(0).sourceAsString(), containsString("\"p_value1\""));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
SearchResponse searchResponse = client().prepareSearch("test")
|
||||
.setQuery(constantScoreQuery(hasChildQuery("child", termQuery("c_field", "yellow")))).get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
|
@ -639,13 +586,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
client().prepareIndex("test", "parent", "p1").setSource("p_field", "p_value1_updated").get();
|
||||
client().admin().indices().prepareRefresh().get();
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "yellow"))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p1"));
|
||||
assertThat(searchResponse.getHits().getAt(0).sourceAsString(), containsString("\"p_value1_updated\""));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(constantScoreQuery(hasChildQuery("child", termQuery("c_field", "yellow")))).get();
|
||||
assertNoFailures(searchResponse);
|
||||
|
@ -679,69 +619,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
.setQuery(boolQuery().mustNot(hasParentQuery("parent", boolQuery().should(queryStringQuery("p_field:*"))))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
|
||||
searchResponse = client().prepareSearch("test").setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
|
||||
.setQuery(boolQuery().mustNot(topChildrenQuery("child", boolQuery().should(queryStringQuery("c_field:*"))))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testFixAOBEIfTopChildrenIsWrappedInMusNotClause() throws Exception {
|
||||
assertAcked(prepareCreate("test")
|
||||
.addMapping("parent")
|
||||
.addMapping("child", "_parent", "type=parent"));
|
||||
ensureGreen();
|
||||
|
||||
// index simple data
|
||||
client().prepareIndex("test", "parent", "p1").setSource("p_field", "p_value1").get();
|
||||
client().prepareIndex("test", "child", "c1").setSource("c_field", "red").setParent("p1").get();
|
||||
client().prepareIndex("test", "child", "c2").setSource("c_field", "yellow").setParent("p1").get();
|
||||
client().prepareIndex("test", "parent", "p2").setSource("p_field", "p_value2").get();
|
||||
client().prepareIndex("test", "child", "c3").setSource("c_field", "blue").setParent("p2").get();
|
||||
client().prepareIndex("test", "child", "c4").setSource("c_field", "red").setParent("p2").get();
|
||||
|
||||
refresh();
|
||||
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setSearchType(SearchType.QUERY_THEN_FETCH)
|
||||
.setQuery(boolQuery().mustNot(topChildrenQuery("child", boolQuery().should(queryStringQuery("c_field:*"))))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testTopChildrenReSearchBug() throws Exception {
|
||||
assertAcked(prepareCreate("test")
|
||||
.addMapping("parent")
|
||||
.addMapping("child", "_parent", "type=parent"));
|
||||
ensureGreen();
|
||||
int numberOfParents = 4;
|
||||
int numberOfChildrenPerParent = 123;
|
||||
for (int i = 1; i <= numberOfParents; i++) {
|
||||
String parentId = String.format(Locale.ROOT, "p%d", i);
|
||||
client().prepareIndex("test", "parent", parentId).setSource("p_field", String.format(Locale.ROOT, "p_value%d", i)).execute()
|
||||
.actionGet();
|
||||
for (int j = 1; j <= numberOfChildrenPerParent; j++) {
|
||||
client().prepareIndex("test", "child", String.format(Locale.ROOT, "%s_c%d", parentId, j))
|
||||
.setSource("c_field1", parentId, "c_field2", i % 2 == 0 ? "even" : "not_even").setParent(parentId).execute()
|
||||
.actionGet();
|
||||
}
|
||||
}
|
||||
|
||||
refresh();
|
||||
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field1", "p3")))
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), equalTo("p3"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field2", "even"))).execute()
|
||||
.actionGet();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
|
||||
assertThat(searchResponse.getHits().getAt(0).id(), anyOf(equalTo("p2"), equalTo("p4")));
|
||||
assertThat(searchResponse.getHits().getAt(1).id(), anyOf(equalTo("p2"), equalTo("p4")));
|
||||
}
|
||||
|
||||
@Test
|
||||
|
@ -781,11 +658,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
client().prepareIndex("test", "child", "c1").setSource("c_field", "1").setParent(parentId).get();
|
||||
refresh();
|
||||
|
||||
CountResponse countResponse = client().prepareCount("test").setQuery(topChildrenQuery("child", termQuery("c_field", "1")))
|
||||
.get();
|
||||
assertHitCount(countResponse, 1l);
|
||||
|
||||
countResponse = client().prepareCount("test").setQuery(hasChildQuery("child", termQuery("c_field", "1")).scoreType("max"))
|
||||
CountResponse countResponse = client().prepareCount("test").setQuery(hasChildQuery("child", termQuery("c_field", "1")).scoreType("max"))
|
||||
.get();
|
||||
assertHitCount(countResponse, 1l);
|
||||
|
||||
|
@ -815,13 +688,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
refresh();
|
||||
|
||||
SearchResponse searchResponse = client().prepareSearch("test")
|
||||
.setExplain(true)
|
||||
.setQuery(topChildrenQuery("child", termQuery("c_field", "1")))
|
||||
.get();
|
||||
assertHitCount(searchResponse, 1l);
|
||||
assertThat(searchResponse.getHits().getAt(0).explanation().getDescription(), equalTo("not implemented yet..."));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setExplain(true)
|
||||
.setQuery(hasChildQuery("child", termQuery("c_field", "1")).scoreType("max"))
|
||||
.get();
|
||||
|
@ -1061,10 +927,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
.setQuery(filteredQuery(matchAllQuery(), hasChildQuery("child", matchQuery("c_field", 1)))).get();
|
||||
assertSearchHit(searchResponse, 1, hasId("1"));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(filteredQuery(matchAllQuery(), topChildrenQuery("child", matchQuery("c_field", 1)))).get();
|
||||
assertSearchHit(searchResponse, 1, hasId("1"));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(filteredQuery(matchAllQuery(), hasParentQuery("parent", matchQuery("p_field", 1)))).get();
|
||||
assertSearchHit(searchResponse, 1, hasId("2"));
|
||||
|
@ -1073,10 +935,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
.setQuery(filteredQuery(matchAllQuery(), boolQuery().must(hasChildQuery("child", matchQuery("c_field", 1))))).get();
|
||||
assertSearchHit(searchResponse, 1, hasId("1"));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(filteredQuery(matchAllQuery(), boolQuery().must(topChildrenQuery("child", matchQuery("c_field", 1))))).get();
|
||||
assertSearchHit(searchResponse, 1, hasId("1"));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(filteredQuery(matchAllQuery(), boolQuery().must(hasParentQuery("parent", matchQuery("p_field", 1))))).get();
|
||||
assertSearchHit(searchResponse, 1, hasId("2"));
|
||||
|
@ -1085,10 +943,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
@Test
|
||||
public void testSimpleQueryRewrite() throws Exception {
|
||||
assertAcked(prepareCreate("test")
|
||||
//top_children query needs at least 2 shards for the totalHits to be accurate
|
||||
.setSettings(settingsBuilder()
|
||||
.put(indexSettings())
|
||||
.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, between(2, DEFAULT_MAX_NUM_SHARDS)))
|
||||
.addMapping("parent", "p_field", "type=string")
|
||||
.addMapping("child", "_parent", "type=parent", "c_field", "type=string"));
|
||||
ensureGreen();
|
||||
|
@ -1130,17 +984,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
assertThat(searchResponse.getHits().hits()[2].id(), equalTo("c002"));
|
||||
assertThat(searchResponse.getHits().hits()[3].id(), equalTo("c003"));
|
||||
assertThat(searchResponse.getHits().hits()[4].id(), equalTo("c004"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setSearchType(searchType)
|
||||
.setQuery(topChildrenQuery("child", prefixQuery("c_field", "c")).factor(10)).addSort("p_field", SortOrder.ASC).setSize(5)
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(10L));
|
||||
assertThat(searchResponse.getHits().hits()[0].id(), equalTo("p000"));
|
||||
assertThat(searchResponse.getHits().hits()[1].id(), equalTo("p001"));
|
||||
assertThat(searchResponse.getHits().hits()[2].id(), equalTo("p002"));
|
||||
assertThat(searchResponse.getHits().hits()[3].id(), equalTo("p003"));
|
||||
assertThat(searchResponse.getHits().hits()[4].id(), equalTo("p004"));
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1466,59 +1309,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
}
|
||||
}
|
||||
|
||||
@Test @Slow
|
||||
// The SimpleIdReaderTypeCache#docById method used lget, which can't be used if a map is shared.
|
||||
public void testTopChildrenBug_concurrencyIssue() throws Exception {
|
||||
assertAcked(prepareCreate("test")
|
||||
.addMapping("parent")
|
||||
.addMapping("child", "_parent", "type=parent"));
|
||||
ensureGreen();
|
||||
|
||||
// index simple data
|
||||
client().prepareIndex("test", "parent", "p1").setSource("p_field", "p_value1").get();
|
||||
client().prepareIndex("test", "parent", "p2").setSource("p_field", "p_value2").get();
|
||||
client().prepareIndex("test", "child", "c1").setParent("p1").setSource("c_field", "blue").get();
|
||||
client().prepareIndex("test", "child", "c2").setParent("p1").setSource("c_field", "red").get();
|
||||
client().prepareIndex("test", "child", "c3").setParent("p2").setSource("c_field", "red").get();
|
||||
client().admin().indices().prepareRefresh("test").get();
|
||||
|
||||
int numThreads = 10;
|
||||
final CountDownLatch latch = new CountDownLatch(numThreads);
|
||||
final AtomicReference<AssertionError> holder = new AtomicReference<>();
|
||||
Runnable r = new Runnable() {
|
||||
@Override
|
||||
public void run() {
|
||||
try {
|
||||
for (int i = 0; i < 100; i++) {
|
||||
SearchResponse searchResponse = client().prepareSearch("test")
|
||||
.setQuery(topChildrenQuery("child", termQuery("c_field", "blue")))
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(topChildrenQuery("child", termQuery("c_field", "red")))
|
||||
.get();
|
||||
assertNoFailures(searchResponse);
|
||||
assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
|
||||
}
|
||||
} catch (AssertionError error) {
|
||||
holder.set(error);
|
||||
} finally {
|
||||
latch.countDown();
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
for (int i = 0; i < 10; i++) {
|
||||
new Thread(r).start();
|
||||
}
|
||||
latch.await();
|
||||
if (holder.get() != null) {
|
||||
throw holder.get();
|
||||
}
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testHasChildQueryWithNestedInnerObjects() throws Exception {
|
||||
assertAcked(prepareCreate("test")
|
||||
|
@ -1573,14 +1363,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
client().prepareIndex("test", "child", "c1").setSource("c_field", "1").setParent(parentId).get();
|
||||
refresh();
|
||||
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setQuery(topChildrenQuery("child", termQuery("c_field", "1")).queryName("test"))
|
||||
.get();
|
||||
System.out.println(searchResponse);
|
||||
assertHitCount(searchResponse, 1l);
|
||||
assertThat(searchResponse.getHits().getAt(0).getMatchedQueries().length, equalTo(1));
|
||||
assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo("test"));
|
||||
|
||||
searchResponse = client().prepareSearch("test").setQuery(hasChildQuery("child", termQuery("c_field", "1")).scoreType("max").queryName("test"))
|
||||
SearchResponse searchResponse = client().prepareSearch("test").setQuery(hasChildQuery("child", termQuery("c_field", "1")).scoreType("max").queryName("test"))
|
||||
.get();
|
||||
assertHitCount(searchResponse, 1l);
|
||||
assertThat(searchResponse.getHits().getAt(0).getMatchedQueries().length, equalTo(1));
|
||||
|
@ -1644,15 +1427,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));
|
||||
}
|
||||
|
||||
try {
|
||||
client().prepareSearch("test")
|
||||
.setQuery(topChildrenQuery("child", termQuery("c_field", "1")).score("max"))
|
||||
.get();
|
||||
fail();
|
||||
} catch (SearchPhaseExecutionException e) {
|
||||
assertThat(e.status(), equalTo(RestStatus.BAD_REQUEST));
|
||||
}
|
||||
|
||||
try {
|
||||
client().prepareSearch("test")
|
||||
.setQuery(hasParentQuery("parent", termQuery("p_field", "1")).scoreType("score"))
|
||||
|
@ -1710,12 +1484,6 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
assertHitCount(searchResponse, 1l);
|
||||
assertSearchHits(searchResponse, parentId);
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setQuery(topChildrenQuery("child", termQuery("c_field", "1")).score("max"))
|
||||
.get();
|
||||
assertHitCount(searchResponse, 1l);
|
||||
assertSearchHits(searchResponse, parentId);
|
||||
|
||||
searchResponse = client().prepareSearch("test")
|
||||
.setPostFilter(hasParentQuery("parent", termQuery("p_field", "1")))
|
||||
.get();
|
||||
|
@ -1795,8 +1563,7 @@ public class SimpleChildQuerySearchTests extends ElasticsearchIntegrationTest {
|
|||
hasChildQuery("child", matchAllQuery()),
|
||||
filteredQuery(matchAllQuery(), hasChildQuery("child", matchAllQuery())),
|
||||
hasParentQuery("parent", matchAllQuery()),
|
||||
filteredQuery(matchAllQuery(), hasParentQuery("parent", matchAllQuery())),
|
||||
topChildrenQuery("child", matchAllQuery()).factor(10)
|
||||
filteredQuery(matchAllQuery(), hasParentQuery("parent", matchAllQuery()))
|
||||
};
|
||||
|
||||
for (QueryBuilder query : queries) {
|
||||
|
|