mirror of https://github.com/honeymoose/OpenSearch.git
synced 2025-03-26 18:08:36 +00:00
Search: Remove the `count` search type.

This commit brings the benefits of the `count` search type to search requests that have a `size` of 0:

- a single round-trip to the shards (no fetch phase)
- the ability to use the query cache

Since `count` now provides no benefits over `query_then_fetch`, it has been deprecated.

Closes #7630
This commit is contained in:
parent
171e415a47
commit
a608db122d
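The migration described in the commit message can be sketched as a before/after request pair. This is a hedged illustration, not taken from the diff below: the index name, field, and aggregation are made up for the example.

```shell
# Deprecated form (pre-2.0): counts and aggregations via search_type=count.
#   curl 'localhost:9200/my_index/_search?search_type=count' -d "$BODY"
# Replacement: the default query_then_fetch search type with "size": 0 in
# the body, which also skips the fetch phase and is query-cache eligible.
#   curl 'localhost:9200/my_index/_search' -d "$BODY"

# Illustrative request body for the replacement form:
BODY='{
  "size": 0,
  "aggs": {
    "popular_colors": { "terms": { "field": "color" } }
  }
}'
echo "$BODY"
```

Both forms return `hits.total` and the aggregation results; neither returns top hits.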
@@ -16,7 +16,7 @@ results from older indices will be served directly from the cache.
 ==================================

 For now, the query cache will only cache the results of search requests
-where <<count,`?search_type=count`>>, so it will not cache `hits`,
+where `size=0`, so it will not cache `hits`,
 but it will cache `hits.total`, <<search-aggregations,aggregations>>, and
 <<search-suggesters,suggestions>>.

@@ -80,8 +80,9 @@ caching on a *per-query* basis. If set, it overrides the index-level setting:

 [source,json]
 -----------------------------
-curl 'localhost:9200/my_index/_search?search_type=count&query_cache=true' -d'
+curl 'localhost:9200/my_index/_search?query_cache=true' -d'
 {
+  "size": 0,
   "aggs": {
     "popular_colors": {
       "terms": {
@@ -297,3 +297,8 @@ in their place.
 The thrift and memcached transport plugins are no longer supported. Instead, use
 either the HTTP transport (enabled by default) or the node or transport Java client.

+=== `search_type=count` deprecation
+
+The `count` search type has been deprecated. All benefits from this search type can
+now be achieved by using the `query_then_fetch` search type (which is the
+default) and setting `size` to `0`.
@@ -130,11 +130,12 @@ See <<index-modules-shard-query-cache>> for more details.
 === Returning only aggregation results

 There are many occasions when aggregations are required but search hits are not. For these cases the hits can be ignored by
-adding `search_type=count` to the request URL parameters. For example:
+setting `size=0`. For example:

 [source,js]
 --------------------------------------------------
-$ curl -XGET 'http://localhost:9200/twitter/tweet/_search?search_type=count' -d '{
+$ curl -XGET 'http://localhost:9200/twitter/tweet/_search' -d '{
+  "size": 0,
   "aggregations": {
     "my_agg": {
       "terms": {
@@ -146,8 +147,7 @@ $ curl -XGET 'http://localhost:9200/twitter/tweet/_search?search_type=count' -d
 '
 --------------------------------------------------

-Setting `search_type` to `count` avoids executing the fetch phase of the search making the request more efficient. See
-<<search-request-search-type>> for more information on the `search_type` parameter.
+Setting `size` to `0` avoids executing the fetch phase of the search making the request more efficient.

 [float]
 === Metadata
@@ -26,13 +26,13 @@ the `query`, `aggregations`, `from`, `size`, and so on). Here is an example:
 $ cat requests
 {"index" : "test"}
 {"query" : {"match_all" : {}}, "from" : 0, "size" : 10}
-{"index" : "test", "search_type" : "count"}
+{"index" : "test", "search_type" : "dfs_query_then_fetch"}
 {"query" : {"match_all" : {}}}
 {}
 {"query" : {"match_all" : {}}}

 {"query" : {"match_all" : {}}}
-{"search_type" : "count"}
+{"search_type" : "dfs_query_then_fetch"}
 {"query" : {"match_all" : {}}}

 $ curl -XGET localhost:9200/_msearch --data-binary @requests; echo
@@ -71,8 +71,9 @@ And here is a sample response:
 `query_cache`::

 Set to `true` or `false` to enable or disable the caching
-of search results for requests where `?search_type=count`, ie
-aggregations and suggestions. See <<index-modules-shard-query-cache>>.
+of search results for requests where `size` is 0, ie
+aggregations and suggestions (no top hits returned).
+See <<index-modules-shard-query-cache>>.

 `terminate_after`::

@@ -65,6 +65,8 @@ scoring.
 [[count]]
 ==== Count

+deprecated[2.0.0, `count` does not provide any benefits over `query_then_fetch` with a `size` of `0`]
+
 Parameter value: *count*.

 A special search type that returns the count that matched the search
@@ -141,14 +141,15 @@ level override the suggest text on the global level.
 In the below example we request suggestions for the following suggest
 text: `devloping distibutd saerch engies` on the `title` field with a
 maximum of 3 suggestions per term inside the suggest text. Note that in
-this example we use the `count` search type. This isn't required, but a
+this example we set `size` to `0`. This isn't required, but a
 nice optimization. The suggestions are gather in the `query` phase and
 in the case that we only care about suggestions (so no hits) we don't
 need to execute the `fetch` phase.

 [source,js]
 --------------------------------------------------
-curl -s -XPOST 'localhost:9200/_search?search_type=count' -d '{
+curl -s -XPOST 'localhost:9200/_search' -d '{
+  "size": 0,
   "suggest" : {
     "my-title-suggestions-1" : {
       "text" : "devloping distibutd saerch engies",
@@ -94,7 +94,7 @@ Defaults to no terminate_after.

 |`search_type` |The type of the search operation to perform. Can be
 `dfs_query_then_fetch`, `dfs_query_and_fetch`, `query_then_fetch`,
-`query_and_fetch`, `count`, `scan`. Defaults to `query_then_fetch`. See
+`query_and_fetch`, `scan` or `count` deprecated[2.0,Replaced by `size: 0`]. Defaults to `query_then_fetch`. See
 <<search-request-search-type,_Search Type_>> for
 more details on the different types of search that can be performed.

@@ -89,7 +89,7 @@
     },
     "search_type": {
       "type" : "string",
-      "description" : "Specific search type (eg. `dfs_then_fetch`, `count`, etc)"
+      "description" : "Specific search type (eg. `dfs_then_fetch`, `scan`, etc)"
     },
     "search_types": {
       "type" : "list",
@@ -33,7 +33,7 @@
 - index: test_2
 - query:
     match_all: {}
-- search_type: count
+- search_type: query_then_fetch
   index: test_1
 - query:
     match: {foo: bar}
rest-api-spec/test/search/50_search_count.yaml (new file, 33 lines)
@@ -0,0 +1,33 @@
+---
+"search_type=count (deprecated) support":
+  - do:
+      indices.create:
+        index: test
+  - do:
+      index:
+        index: test
+        type: test
+        id: 1
+        body: { foo: bar }
+
+  - do:
+      index:
+        index: test
+        type: test
+        id: 2
+        body: { foo: bar }
+
+  - do:
+      indices.refresh:
+        index: [test]
+
+  - do:
+      search:
+        index: test
+        search_type: count
+        body:
+          query:
+            match:
+              foo: bar
+
+  - match: {hits.total: 2}
@@ -20,6 +20,7 @@
 package org.elasticsearch.action.search;

 import org.elasticsearch.ElasticsearchIllegalArgumentException;
+import org.elasticsearch.common.ParseField;

 /**
  * Search type represent the manner at which the search operation is executed.
@@ -57,7 +58,9 @@ public enum SearchType {
     SCAN((byte) 4),
     /**
     * Only counts the results, will still execute aggregations and the like.
+     * @deprecated does not any improvements compared to {@link #QUERY_THEN_FETCH} with a `size` of {@code 0}
     */
+    @Deprecated
     COUNT((byte) 5);

     /**
@@ -65,6 +68,8 @@ public enum SearchType {
     */
    public static final SearchType DEFAULT = QUERY_THEN_FETCH;

+    private static final ParseField COUNT_VALUE = new ParseField("count").withAllDeprecated("query_then_fetch");
+
    private byte id;

    SearchType(byte id) {
@@ -118,7 +123,7 @@ public enum SearchType {
            return SearchType.QUERY_AND_FETCH;
        } else if ("scan".equals(searchType)) {
            return SearchType.SCAN;
-        } else if ("count".equals(searchType)) {
+        } else if (COUNT_VALUE.match(searchType)) {
            return SearchType.COUNT;
        } else {
            throw new ElasticsearchIllegalArgumentException("No search type for [" + searchType + "]");
@@ -19,6 +19,7 @@

 package org.elasticsearch.action.search;

+import org.elasticsearch.ElasticsearchIllegalStateException;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.search.type.*;
 import org.elasticsearch.action.support.ActionFilters;
@@ -59,7 +60,8 @@ public class TransportSearchAction extends HandledTransportAction<SearchRequest,
                                  TransportSearchDfsQueryAndFetchAction dfsQueryAndFetchAction,
                                  TransportSearchQueryAndFetchAction queryAndFetchAction,
                                  TransportSearchScanAction scanAction,
-                                 TransportSearchCountAction countAction, ActionFilters actionFilters) {
+                                 TransportSearchCountAction countAction,
+                                 ActionFilters actionFilters) {
        super(settings, SearchAction.NAME, threadPool, transportService, actionFilters);
        this.clusterService = clusterService;
        this.dfsQueryThenFetchAction = dfsQueryThenFetchAction;
@@ -68,10 +70,7 @@ public class TransportSearchAction extends HandledTransportAction<SearchRequest,
        this.queryAndFetchAction = queryAndFetchAction;
        this.scanAction = scanAction;
        this.countAction = countAction;
-
        this.optimizeSingleShard = this.settings.getAsBoolean("action.search.optimize_single_shard", true);
-
-
    }

    @Override
@@ -106,6 +105,8 @@ public class TransportSearchAction extends HandledTransportAction<SearchRequest,
            scanAction.execute(searchRequest, listener);
        } else if (searchRequest.searchType() == SearchType.COUNT) {
            countAction.execute(searchRequest, listener);
+        } else {
+            throw new ElasticsearchIllegalStateException("Unknown search type: [" + searchRequest.searchType() + "]");
        }
    }

@@ -20,7 +20,9 @@
 package org.elasticsearch.action.search.type;

 import com.carrotsearch.hppc.IntArrayList;
+
 import org.apache.lucene.search.ScoreDoc;
+import org.apache.lucene.search.TopDocs;
 import org.elasticsearch.ElasticsearchIllegalStateException;
 import org.elasticsearch.action.ActionListener;
 import org.elasticsearch.action.NoShardAvailableActionException;
@@ -325,7 +327,9 @@ public abstract class TransportSearchTypeAction extends TransportAction<SearchRe
            // we only release search context that we did not fetch from if we are not scrolling
            if (request.scroll() == null) {
                for (AtomicArray.Entry<? extends QuerySearchResultProvider> entry : queryResults.asList()) {
-                    if (docIdsToLoad.get(entry.index) == null) {
+                    final TopDocs topDocs = entry.value.queryResult().queryResult().topDocs();
+                    if (topDocs != null && topDocs.scoreDocs.length > 0 // the shard had matches
+                            && docIdsToLoad.get(entry.index) == null) { // but none of them made it to the global top docs
                        try {
                            DiscoveryNode node = nodes.get(entry.value.queryResult().shardTarget().nodeId());
                            if (node != null) { // should not happen (==null) but safeguard anyhow
@@ -423,8 +423,7 @@ public class Lucene {
        return new ScoreDoc(in.readVInt(), in.readFloat());
    }

-    public static void writeTopDocs(StreamOutput out, TopDocs topDocs, int from) throws IOException {
-        from = Math.min(from, topDocs.scoreDocs.length);
+    public static void writeTopDocs(StreamOutput out, TopDocs topDocs) throws IOException {
        if (topDocs instanceof TopFieldDocs) {
            out.writeBoolean(true);
            TopFieldDocs topFieldDocs = (TopFieldDocs) topDocs;
@@ -448,9 +447,8 @@ public class Lucene {
                out.writeBoolean(sortField.getReverse());
            }

-            out.writeVInt(topDocs.scoreDocs.length - from);
-            for (int i = from; i < topFieldDocs.scoreDocs.length; ++i) {
-                ScoreDoc doc = topFieldDocs.scoreDocs[i];
+            out.writeVInt(topDocs.scoreDocs.length);
+            for (ScoreDoc doc : topFieldDocs.scoreDocs) {
                writeFieldDoc(out, (FieldDoc) doc);
            }
        } else {
@@ -458,9 +456,8 @@ public class Lucene {
            out.writeVInt(topDocs.totalHits);
            out.writeFloat(topDocs.getMaxScore());

-            out.writeVInt(topDocs.scoreDocs.length - from);
-            for (int i = from; i < topDocs.scoreDocs.length; ++i) {
-                ScoreDoc doc = topDocs.scoreDocs[i];
+            out.writeVInt(topDocs.scoreDocs.length);
+            for (ScoreDoc doc : topDocs.scoreDocs) {
                writeScoreDoc(out, doc);
            }
        }
@@ -54,6 +54,7 @@ import org.elasticsearch.threadpool.ThreadPool;

 import java.util.Collection;
 import java.util.Collections;
+import java.util.EnumSet;
 import java.util.Iterator;
 import java.util.Set;
 import java.util.concurrent.Callable;
@@ -88,6 +89,8 @@ public class IndicesQueryCache extends AbstractComponent implements RemovalListe
    public static final String INDICES_CACHE_QUERY_EXPIRE = "indices.cache.query.expire";
    public static final String INDICES_CACHE_QUERY_CONCURRENCY_LEVEL = "indices.cache.query.concurrency_level";

+    private static final Set<SearchType> CACHEABLE_SEARCH_TYPES = EnumSet.of(SearchType.QUERY_THEN_FETCH, SearchType.QUERY_AND_FETCH);
+
    private final ThreadPool threadPool;
    private final ClusterService clusterService;

@@ -177,10 +180,20 @@ public class IndicesQueryCache extends AbstractComponent implements RemovalListe
        if (hasLength(request.templateSource())) {
            return false;
        }
-        // for now, only enable it for search type count
-        if (context.searchType() != SearchType.COUNT) {
+        // for now, only enable it for requests with no hits
+        if (context.size() != 0) {
            return false;
        }
+
+        // We cannot cache with DFS because results depend not only on the content of the index but also
+        // on the overridden statistics. So if you ran two queries on the same index with different stats
+        // (because an other shard was updated) you would get wrong results because of the scores
+        // (think about top_hits aggs or scripts using the score)
+        if (!CACHEABLE_SEARCH_TYPES.contains(context.searchType())) {
+            return false;
+        }
+
        IndexMetaData index = clusterService.state().getMetaData().index(request.index());
        if (index == null) { // in case we didn't yet have the cluster state, or it just got deleted
            return false;
@@ -226,17 +226,21 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {

    public QuerySearchResult executeScan(ShardSearchRequest request) throws ElasticsearchException {
        final SearchContext context = createAndPutContext(request);
+        final int originalSize = context.size();
        try {
            if (context.aggregations() != null) {
                throw new ElasticsearchIllegalArgumentException("aggregations are not supported with search_type=scan");
            }
-            assert context.searchType() == SearchType.SCAN;
-            context.searchType(SearchType.COUNT); // move to COUNT, and then, when scrolling, move to SCAN
-            assert context.searchType() == SearchType.COUNT;

            if (context.scroll() == null) {
                throw new ElasticsearchException("Scroll must be provided when scanning...");
            }

+            assert context.searchType() == SearchType.SCAN;
+            context.searchType(SearchType.QUERY_THEN_FETCH); // move to QUERY_THEN_FETCH, and then, when scrolling, move to SCAN
+            context.size(0); // set size to 0 so that we only count matches
+            assert context.searchType() == SearchType.QUERY_THEN_FETCH;
+
            contextProcessing(context);
            queryPhase.execute(context);
            contextProcessedSuccessfully(context);
@@ -246,6 +250,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {
            freeContext(context.id());
            throw ExceptionsHelper.convertToRuntime(e);
        } finally {
+            context.size(originalSize);
            cleanContext(context);
        }
    }
@@ -255,7 +260,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {
        contextProcessing(context);
        try {
            processScroll(request, context);
-            if (context.searchType() == SearchType.COUNT) {
+            if (context.searchType() == SearchType.QUERY_THEN_FETCH) {
                // first scanning, reset the from to 0
                context.searchType(SearchType.SCAN);
                context.from(0);
@@ -300,7 +305,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {

            loadOrExecuteQueryPhase(request, context, queryPhase);

-            if (context.searchType() == SearchType.COUNT) {
+            if (context.queryResult().topDocs().scoreDocs.length == 0) {
                freeContext(context.id());
            } else {
                contextProcessedSuccessfully(context);
@@ -357,7 +362,12 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {
            context.indexShard().searchService().onPreQueryPhase(context);
            long time = System.nanoTime();
            queryPhase.execute(context);
-            contextProcessedSuccessfully(context);
+            if (context.queryResult().topDocs().scoreDocs.length == 0) {
+                // no hits, we can release the context since there will be no fetch phase
+                freeContext(context.id());
+            } else {
+                contextProcessedSuccessfully(context);
+            }
            context.indexShard().searchService().onQueryPhase(context, System.nanoTime() - time);
            return context.queryResult();
        } catch (Throwable e) {
@@ -377,7 +387,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {
        context.indexShard().searchService().onPreQueryPhase(context);
        long time = System.nanoTime();
        try {
-            queryPhase.execute(context);
+            loadOrExecuteQueryPhase(request, context, queryPhase);
        } catch (Throwable e) {
            context.indexShard().searchService().onFailedQueryPhase(context);
            throw ExceptionsHelper.convertToRuntime(e);
@@ -564,7 +574,12 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {
        if (context.from() == -1) {
            context.from(0);
        }
-        if (context.size() == -1) {
+        if (context.searchType() == SearchType.COUNT) {
+            // so that the optimizations we apply to size=0 also apply to search_type=COUNT
+            // and that we close contexts when done with the query phase
+            context.searchType(SearchType.QUERY_THEN_FETCH);
+            context.size(0);
+        } else if (context.size() == -1) {
            context.size(10);
        }

@@ -992,9 +1007,9 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {
                    SearchType.QUERY_THEN_FETCH, entry.source(), entry.types(), entry.queryCache());
            context = createContext(request, warmerContext.searcher());
            // if we use sort, we need to do query to sort on it and load relevant field data
-            // if not, we might as well use COUNT (and cache if needed)
+            // if not, we might as well set size=0 (and cache if needed)
            if (context.sort() == null) {
-                context.searchType(SearchType.COUNT);
+                context.size(0);
            }
            boolean canCache = indicesQueryCache.canCache(request, context);
            // early terminate when we can cache, since we can only do proper caching on top level searcher
@@ -150,7 +150,7 @@ public class InternalTopHits extends InternalMetricsAggregation implements TopHi
    protected void doWriteTo(StreamOutput out) throws IOException {
        out.writeVInt(from);
        out.writeVInt(size);
-        Lucene.writeTopDocs(out, topDocs, 0);
+        Lucene.writeTopDocs(out, topDocs);
        searchHits.writeTo(out);
    }

@@ -105,10 +105,10 @@ public class QueryPhase implements SearchPhase {

        Query query = searchContext.query();

-        TopDocs topDocs;
+        final TopDocs topDocs;
        int numDocs = searchContext.from() + searchContext.size();

-        if (searchContext.searchType() == SearchType.COUNT || numDocs == 0) {
+        if (searchContext.size() == 0) { // no matter what the value of from is
            TotalHitCountCollector collector = new TotalHitCountCollector();
            searchContext.searcher().search(query, collector);
            topDocs = new TopDocs(collector.getTotalHits(), Lucene.EMPTY_SCORE_DOCS, 0);
@@ -180,7 +180,7 @@ public class QuerySearchResult extends QuerySearchResultProvider {
        // shardTarget.writeTo(out);
        out.writeVInt(from);
        out.writeVInt(size);
-        writeTopDocs(out, topDocs, 0);
+        writeTopDocs(out, topDocs);
        if (aggregations == null) {
            out.writeBoolean(false);
        } else {
@@ -169,13 +169,13 @@ public final class PhraseSuggester extends Suggester<PhraseSuggestionContext> {
                req = client.prepareSearch()
                        .setPreference(suggestions.getPreference())
                        .setQuery(QueryBuilders.constantScoreQuery(FilterBuilders.bytesFilter(querySource)))
-                        .setSearchType(SearchType.COUNT)
+                        .setSize(0)
                        .setTerminateAfter(1);
            } else {
                req = client.prepareSearch()
                        .setPreference(suggestions.getPreference())
                        .setQuery(querySource)
-                        .setSearchType(SearchType.COUNT)
+                        .setSize(0)
                        .setTerminateAfter(1);
            }
            multiSearchRequestBuilder.add(req);
@ -46,7 +46,7 @@ public class MultiSearchRequestTests extends ElasticsearchTestCase {
|
|||||||
assertThat(request.requests().get(2).types().length, equalTo(0));
|
assertThat(request.requests().get(2).types().length, equalTo(0));
|
||||||
assertThat(request.requests().get(3).indices(), nullValue());
|
assertThat(request.requests().get(3).indices(), nullValue());
|
||||||
assertThat(request.requests().get(3).types().length, equalTo(0));
|
assertThat(request.requests().get(3).types().length, equalTo(0));
|
||||||
assertThat(request.requests().get(3).searchType(), equalTo(SearchType.COUNT));
|
assertThat(request.requests().get(3).searchType(), equalTo(SearchType.DFS_QUERY_THEN_FETCH));
|
||||||
assertThat(request.requests().get(4).indices(), nullValue());
|
assertThat(request.requests().get(4).indices(), nullValue());
|
||||||
assertThat(request.requests().get(4).types().length, equalTo(0));
|
assertThat(request.requests().get(4).types().length, equalTo(0));
|
||||||
}
|
}
|
||||||
@ -64,7 +64,7 @@ public class MultiSearchRequestTests extends ElasticsearchTestCase {
|
|||||||
assertThat(request.requests().get(2).types().length, equalTo(0));
|
assertThat(request.requests().get(2).types().length, equalTo(0));
|
||||||
assertThat(request.requests().get(3).indices(), nullValue());
|
assertThat(request.requests().get(3).indices(), nullValue());
|
||||||
assertThat(request.requests().get(3).types().length, equalTo(0));
|
assertThat(request.requests().get(3).types().length, equalTo(0));
|
||||||
assertThat(request.requests().get(3).searchType(), equalTo(SearchType.COUNT));
|
assertThat(request.requests().get(3).searchType(), equalTo(SearchType.DFS_QUERY_THEN_FETCH));
|
||||||
assertThat(request.requests().get(4).indices(), nullValue());
|
assertThat(request.requests().get(4).indices(), nullValue());
|
||||||
assertThat(request.requests().get(4).types().length, equalTo(0));
|
assertThat(request.requests().get(4).types().length, equalTo(0));
|
||||||
}
|
}
|
||||||
@ -85,6 +85,6 @@ public class MultiSearchRequestTests extends ElasticsearchTestCase {
|
|||||||
assertThat(request.requests().get(2).types()[1], equalTo("type1"));
|
assertThat(request.requests().get(2).types()[1], equalTo("type1"));
|
||||||
assertThat(request.requests().get(3).indices(), nullValue());
|
assertThat(request.requests().get(3).indices(), nullValue());
|
||||||
assertThat(request.requests().get(3).types().length, equalTo(0));
|
assertThat(request.requests().get(3).types().length, equalTo(0));
|
||||||
assertThat(request.requests().get(3).searchType(), equalTo(SearchType.COUNT));
|
assertThat(request.requests().get(3).searchType(), equalTo(SearchType.DFS_QUERY_THEN_FETCH));
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@@ -4,7 +4,7 @@
 {"query" : {"match_all" {}}}
 {}
 {"query" : {"match_all" {}}}
-{"search_type" : "count"}
+{"search_type" : "dfs_query_then_fetch"}
 {"query" : {"match_all" {}}}
 
 {"query" : {"match_all" {}}}
@@ -4,7 +4,7 @@
 {"query" : {"match_all" {}}}
 {}
 {"query" : {"match_all" {}}}
-{"search_type" : "count"}
+{"search_type" : "dfs_query_then_fetch"}
 {"query" : {"match_all" {}}}
 
 {"query" : {"match_all" {}}}
@@ -4,5 +4,5 @@
 {"query" : {"match_all" {}}}
 {"index" : ["test4", "test1"], "type" : [ "type2", "type1" ]}
 {"query" : {"match_all" {}}}
-{"search_type" : "count"}
+{"search_type" : "dfs_query_then_fetch"}
 {"query" : {"match_all" {}}}
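The `_msearch` fixtures above swap `count` for another still-valid search type because each odd line of a multi-search body is a header that may carry `search_type`. Callers that used a `count` header can instead put `"size": 0` in the following body line. A sketch (the index name is made up):

```json
curl 'localhost:9200/_msearch' -d'
{"index" : "test"}
{"size" : 0, "query" : {"match_all" : {}}}
'
```

The header line stays otherwise unchanged; only the body gains `"size" : 0`.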
@@ -76,7 +76,7 @@ public class CircuitBreakerBenchmark {
                     terms("myterms")
                             .size(AGG_SIZE)
                             .field("num")
-            ).setSearchType(SearchType.COUNT).get();
+            ).setSize(0).get();
             Terms terms = resp.getAggregations().get("myterms");
             assertNotNull("term aggs were calculated", terms);
             totalTime += resp.getTookInMillis();
@@ -103,7 +103,7 @@ public class CircuitBreakerBenchmark {
                     terms("myterms")
                             .size(AGG_SIZE)
                             .field("num")
-            ).setSearchType(SearchType.COUNT).get();
+            ).setSize(0).get();
             Terms terms = resp.getAggregations().get("myterms");
             assertNotNull("term aggs were calculated", terms);
             totalThreadedTime.addAndGet(resp.getTookInMillis());
@@ -153,7 +153,7 @@ public class CircuitBreakerBenchmark {
         }
         bulkBuilder.get();
         client.admin().indices().prepareRefresh(INDEX).get();
-        SearchResponse countResp = client.prepareSearch(INDEX).setQuery(matchAllQuery()).setSearchType(SearchType.COUNT).get();
+        SearchResponse countResp = client.prepareSearch(INDEX).setQuery(matchAllQuery()).setSize(0).get();
         assert countResp.getHits().getTotalHits() == NUM_DOCS : "all docs should be indexed";
 
         final int warmupCount = 100;
@@ -166,7 +166,7 @@ public class CircuitBreakerBenchmark {
                     terms("myterms")
                             .size(AGG_SIZE)
                             .field("num")
-            ).setSearchType(SearchType.COUNT).get();
+            ).setSize(0).get();
             Terms terms = resp.getAggregations().get("myterms");
             assertNotNull("term aggs were calculated", terms);
         }
@@ -148,7 +148,7 @@ public class CardinalityAggregationSearchBenchmark {
         long start = System.nanoTime();
         SearchResponse resp = null;
         for (int j = 0; j < ITERS; ++j) {
-            resp = client.prepareSearch("index").setSearchType(SearchType.COUNT).addAggregation(cardinality("cardinality").field(field)).execute().actionGet();
+            resp = client.prepareSearch("index").setSize(0).addAggregation(cardinality("cardinality").field(field)).execute().actionGet();
         }
         long end = System.nanoTime();
         final long cardinality = ((Cardinality) resp.getAggregations().get("cardinality")).getValue();
@@ -211,7 +211,7 @@ public class GlobalOrdinalsBenchmark {
         // run just the child query, warm up first
         for (int j = 0; j < QUERY_WARMUP; j++) {
             SearchResponse searchResponse = client.prepareSearch(INDEX_NAME)
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery())
                     .addAggregation(AggregationBuilders.terms(name).field(field).executionHint(executionHint))
                     .get();
@@ -229,7 +229,7 @@ public class GlobalOrdinalsBenchmark {
         totalQueryTime = 0;
         for (int j = 0; j < QUERY_COUNT; j++) {
             SearchResponse searchResponse = client.prepareSearch(INDEX_NAME)
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery())
                     .addAggregation(AggregationBuilders.terms(name).field(field).executionHint(executionHint))
                     .get();
@@ -177,7 +177,7 @@ public class PercentilesAggregationSearchBenchmark {
         }
         System.out.println("Expected percentiles: " + percentiles);
         System.out.println();
-        SearchResponse resp = client.prepareSearch(d.indexName()).setSearchType(SearchType.COUNT).addAggregation(percentiles("pcts").field("v").percentiles(PERCENTILES)).execute().actionGet();
+        SearchResponse resp = client.prepareSearch(d.indexName()).setSize(0).addAggregation(percentiles("pcts").field("v").percentiles(PERCENTILES)).execute().actionGet();
         Percentiles pcts = resp.getAggregations().get("pcts");
         Map<Double, Double> asMap = Maps.newLinkedHashMap();
         double sumOfErrorSquares = 0;
@@ -196,11 +196,11 @@ public class PercentilesAggregationSearchBenchmark {
         for (Distribution d : Distribution.values()) {
             System.out.println("#### " + d);
             for (int j = 0; j < QUERY_WARMUP; ++j) {
-                client.prepareSearch(d.indexName()).setSearchType(SearchType.COUNT).addAggregation(percentiles("pcts").field("v").percentiles(PERCENTILES)).execute().actionGet();
+                client.prepareSearch(d.indexName()).setSize(0).addAggregation(percentiles("pcts").field("v").percentiles(PERCENTILES)).execute().actionGet();
             }
             long start = System.nanoTime();
             for (int j = 0; j < QUERY_COUNT; ++j) {
-                client.prepareSearch(d.indexName()).setSearchType(SearchType.COUNT).addAggregation(percentiles("pcts").field("v").percentiles(PERCENTILES)).execute().actionGet();
+                client.prepareSearch(d.indexName()).setSize(0).addAggregation(percentiles("pcts").field("v").percentiles(PERCENTILES)).execute().actionGet();
             }
             System.out.println(new TimeValue((System.nanoTime() - start) / QUERY_COUNT, TimeUnit.NANOSECONDS));
         }
@@ -126,7 +126,7 @@ public class QueryFilterAggregationSearchBenchmark {
         totalQueryTime = 0;
         for (int j = 0; j < QUERY_COUNT; j++) {
             SearchResponse searchResponse = client.prepareSearch()
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(termQuery("l_value", anyValue))
                     .execute().actionGet();
             totalQueryTime += searchResponse.getTookInMillis();
@@ -136,7 +136,7 @@ public class QueryFilterAggregationSearchBenchmark {
         totalQueryTime = 0;
         for (int j = 0; j < QUERY_COUNT; j++) {
             SearchResponse searchResponse = client.prepareSearch()
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(termQuery("l_value", anyValue))
                     .addAggregation(AggregationBuilders.filter("filter").filter(FilterBuilders.termFilter("l_value", anyValue)))
                     .execute().actionGet();
@@ -265,7 +265,7 @@ public class SubAggregationSearchCollectModeBenchmark {
         // run just the child query, warm up first
         for (int j = 0; j < QUERY_WARMUP; j++) {
             SearchResponse searchResponse = client.prepareSearch("test")
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery())
                     .addAggregation(AggregationBuilders.terms(name + "s_value").field("s_value").collectMode(collectionModes[0])
                             .subAggregation(AggregationBuilders.terms(name + "l_value").field("l_value").collectMode(collectionModes[1])
@@ -286,7 +286,7 @@ public class SubAggregationSearchCollectModeBenchmark {
         totalQueryTime = 0;
         for (int j = 0; j < QUERY_COUNT; j++) {
             SearchResponse searchResponse = client.prepareSearch("test")
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery())
                     .addAggregation(AggregationBuilders.terms(name + "s_value").field("s_value").collectMode(collectionModes[0])
                             .subAggregation(AggregationBuilders.terms(name + "l_value").field("l_value").collectMode(collectionModes[1])
@@ -306,7 +306,7 @@ public class TermsAggregationSearchAndIndexingBenchmark {
             while (run) {
                 try {
                     SearchResponse searchResponse = Method.AGGREGATION.addTermsAgg(client.prepareSearch()
-                            .setSearchType(SearchType.COUNT)
+                            .setSize(0)
                             .setQuery(matchAllQuery()), "test", field, executionHint)
                             .execute().actionGet();
                     if (searchResponse.getHits().totalHits() != COUNT) {
@@ -322,7 +322,7 @@ public class TermsAggregationSearchBenchmark {
         // run just the child query, warm up first
         for (int j = 0; j < QUERY_WARMUP; j++) {
             SearchResponse searchResponse = method.addTermsAgg(client.prepareSearch("test")
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery()), name, field, executionHint)
                     .execute().actionGet();
             if (j == 0) {
@@ -339,7 +339,7 @@ public class TermsAggregationSearchBenchmark {
         totalQueryTime = 0;
         for (int j = 0; j < QUERY_COUNT; j++) {
             SearchResponse searchResponse = method.addTermsAgg(client.prepareSearch()
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery()), name, field, executionHint)
                     .execute().actionGet();
             if (searchResponse.getHits().totalHits() != COUNT) {
@@ -372,7 +372,7 @@ public class TermsAggregationSearchBenchmark {
         // run just the child query, warm up first
         for (int j = 0; j < QUERY_WARMUP; j++) {
             SearchResponse searchResponse = method.addTermsStatsAgg(client.prepareSearch()
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery()), name, keyField, valueField)
                     .execute().actionGet();
             if (j == 0) {
@@ -389,7 +389,7 @@ public class TermsAggregationSearchBenchmark {
         totalQueryTime = 0;
         for (int j = 0; j < QUERY_COUNT; j++) {
             SearchResponse searchResponse = method.addTermsStatsAgg(client.prepareSearch()
-                    .setSearchType(SearchType.COUNT)
+                    .setSize(0)
                     .setQuery(matchAllQuery()), name, keyField, valueField)
                     .execute().actionGet();
             if (searchResponse.getHits().totalHits() != COUNT) {
@@ -210,7 +210,7 @@ public class TimeDataHistogramAggregationBenchmark {
 
     private static SearchResponse doTermsAggsSearch(String name, String field, float matchPercentage) {
         SearchResponse response = client.prepareSearch()
-                .setSearchType(SearchType.COUNT)
+                .setSize(0)
                 .setQuery(QueryBuilders.constantScoreQuery(FilterBuilders.scriptFilter("random()<matchP").addParam("matchP", matchPercentage).cache(true)))
                 .addAggregation(AggregationBuilders.histogram(name).field(field).interval(3600 * 1000)).get();
 
@@ -191,7 +191,7 @@ public class GeoDistanceSearchBenchmark {
 
     public static void run(Client client, GeoDistance geoDistance, String optimizeBbox) {
         client.prepareSearch() // from NY
-                .setSearchType(SearchType.COUNT)
+                .setSize(0)
                 .setQuery(filteredQuery(matchAllQuery(), geoDistanceFilter("location")
                         .distance("2km")
                         .optimizeBbox(optimizeBbox)
@@ -71,7 +71,7 @@ public class FieldDataFilterIntegrationTests extends ElasticsearchIntegrationTes
         }
         refresh();
         SearchResponse searchResponse = client().prepareSearch()
-                .setSearchType(SearchType.COUNT)
+                .setSize(0)
                 .setQuery(matchAllQuery())
                 .addAggregation(terms("name").field("name"))
                 .addAggregation(terms("not_filtered").field("not_filtered")).get();
@@ -20,7 +20,6 @@
 package org.elasticsearch.indices.cache.query;
 
 import org.elasticsearch.action.search.SearchResponse;
-import org.elasticsearch.action.search.SearchType;
 import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
 import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
 import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Bucket;
@@ -48,7 +47,7 @@ public class IndicesQueryCacheTests extends ElasticsearchIntegrationTest {
         // This is not a random example: serialization with time zones writes shared strings
         // which used to not work well with the query cache because of the handles stream output
         // see #9500
-        final SearchResponse r1 = client().prepareSearch("index").setSearchType(SearchType.COUNT)
+        final SearchResponse r1 = client().prepareSearch("index").setSize(0)
                 .addAggregation(dateHistogram("histo").field("f").timeZone("+01:00").minDocCount(0).interval(DateHistogramInterval.MONTH)).get();
         assertSearchResponse(r1);
 
@@ -56,7 +55,7 @@ public class IndicesQueryCacheTests extends ElasticsearchIntegrationTest {
         assertThat(client().admin().indices().prepareStats("index").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), greaterThan(0l));
 
         for (int i = 0; i < 10; ++i) {
-            final SearchResponse r2 = client().prepareSearch("index").setSearchType(SearchType.COUNT)
+            final SearchResponse r2 = client().prepareSearch("index").setSize(0)
                     .addAggregation(dateHistogram("histo").field("f").timeZone("+01:00").minDocCount(0).interval(DateHistogramInterval.MONTH)).get();
             assertSearchResponse(r2);
             Histogram h1 = r1.getAggregations().get("histo");
@@ -240,7 +240,7 @@ public class IndexStatsTests extends ElasticsearchIntegrationTest {
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getHitCount(), equalTo(0l));
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMissCount(), equalTo(0l));
         for (int i = 0; i < 10; i++) {
-            assertThat(client().prepareSearch("idx").setSearchType(SearchType.COUNT).get().getHits().getTotalHits(), equalTo((long) numDocs));
+            assertThat(client().prepareSearch("idx").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0).get().getHits().getTotalHits(), equalTo((long) numDocs));
             assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), greaterThan(0l));
         }
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getHitCount(), greaterThan(0l));
@@ -265,7 +265,7 @@ public class IndexStatsTests extends ElasticsearchIntegrationTest {
         });
 
         for (int i = 0; i < 10; i++) {
-            assertThat(client().prepareSearch("idx").setSearchType(SearchType.COUNT).get().getHits().getTotalHits(), equalTo((long) numDocs));
+            assertThat(client().prepareSearch("idx").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0).get().getHits().getTotalHits(), equalTo((long) numDocs));
             assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), greaterThan(0l));
         }
 
@@ -274,10 +274,10 @@ public class IndexStatsTests extends ElasticsearchIntegrationTest {
 
         // test explicit request parameter
 
-        assertThat(client().prepareSearch("idx").setSearchType(SearchType.COUNT).setQueryCache(false).get().getHits().getTotalHits(), equalTo((long) numDocs));
+        assertThat(client().prepareSearch("idx").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0).setQueryCache(false).get().getHits().getTotalHits(), equalTo((long) numDocs));
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), equalTo(0l));
 
-        assertThat(client().prepareSearch("idx").setSearchType(SearchType.COUNT).setQueryCache(true).get().getHits().getTotalHits(), equalTo((long) numDocs));
+        assertThat(client().prepareSearch("idx").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0).setQueryCache(true).get().getHits().getTotalHits(), equalTo((long) numDocs));
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), greaterThan(0l));
 
         // set the index level setting to false, and see that the reverse works
@@ -285,10 +285,10 @@ public class IndexStatsTests extends ElasticsearchIntegrationTest {
         client().admin().indices().prepareClearCache().setQueryCache(true).get(); // clean the cache
         assertAcked(client().admin().indices().prepareUpdateSettings("idx").setSettings(ImmutableSettings.builder().put(IndicesQueryCache.INDEX_CACHE_QUERY_ENABLED, false)));
 
-        assertThat(client().prepareSearch("idx").setSearchType(SearchType.COUNT).get().getHits().getTotalHits(), equalTo((long) numDocs));
+        assertThat(client().prepareSearch("idx").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0).get().getHits().getTotalHits(), equalTo((long) numDocs));
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), equalTo(0l));
 
-        assertThat(client().prepareSearch("idx").setSearchType(SearchType.COUNT).setQueryCache(true).get().getHits().getTotalHits(), equalTo((long) numDocs));
+        assertThat(client().prepareSearch("idx").setSearchType(SearchType.QUERY_THEN_FETCH).setSize(0).setQueryCache(true).get().getHits().getTotalHits(), equalTo((long) numDocs));
         assertThat(client().admin().indices().prepareStats("idx").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), greaterThan(0l));
     }
 
@@ -275,7 +275,7 @@ public class RecoveryWhileUnderLoadTests extends ElasticsearchIntegrationTest {
         SearchResponse[] iterationResults = new SearchResponse[iterations];
         boolean error = false;
         for (int i = 0; i < iterations; i++) {
-            SearchResponse searchResponse = client().prepareSearch().setSearchType(SearchType.COUNT).setQuery(matchAllQuery()).get();
+            SearchResponse searchResponse = client().prepareSearch().setSize(0).setQuery(matchAllQuery()).get();
             logSearchResponse(numberOfShards, numberOfDocs, i, searchResponse);
             iterationResults[i] = searchResponse;
             if (searchResponse.getHits().totalHits() != numberOfDocs) {
@@ -298,7 +298,7 @@ public class RecoveryWhileUnderLoadTests extends ElasticsearchIntegrationTest {
             public boolean apply(Object o) {
                 boolean error = false;
                 for (int i = 0; i < iterations; i++) {
-                    SearchResponse searchResponse = client().prepareSearch().setSearchType(SearchType.COUNT).setQuery(matchAllQuery()).get();
+                    SearchResponse searchResponse = client().prepareSearch().setSize(0).setQuery(matchAllQuery()).get();
                     if (searchResponse.getHits().totalHits() != numberOfDocs) {
                         error = true;
                     }
@@ -348,7 +348,7 @@ public class RelocationTests extends ElasticsearchIntegrationTest {
         logger.debug("--> verifying all searches return the same number of docs");
         long expectedCount = -1;
         for (Client client : clients()) {
-            SearchResponse response = client.prepareSearch("test").setPreference("_local").setSearchType(SearchType.COUNT).get();
+            SearchResponse response = client.prepareSearch("test").setPreference("_local").setSize(0).get();
             assertNoFailures(response);
             if (expectedCount < 0) {
                 expectedCount = response.getHits().totalHits();
65
src/test/java/org/elasticsearch/search/CountSearchTests.java
Normal file
65
src/test/java/org/elasticsearch/search/CountSearchTests.java
Normal file
@ -0,0 +1,65 @@
|
|||||||
|
/*
|
||||||
|
* Licensed to Elasticsearch under one or more contributor
|
||||||
|
* license agreements. See the NOTICE file distributed with
|
||||||
|
* this work for additional information regarding copyright
|
||||||
|
* ownership. Elasticsearch licenses this file to you under
|
||||||
|
* the Apache License, Version 2.0 (the "License"); you may
|
||||||
|
* not use this file except in compliance with the License.
|
||||||
|
* You may obtain a copy of the License at
|
||||||
|
*
|
||||||
|
* http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
*
|
||||||
|
* Unless required by applicable law or agreed to in writing,
|
||||||
|
* software distributed under the License is distributed on an
|
||||||
|
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||||
|
* KIND, either express or implied. See the License for the
|
||||||
|
* specific language governing permissions and limitations
|
||||||
|
* under the License.
|
||||||
|
*/
|
||||||
|
|
||||||
|
package org.elasticsearch.search;
|
||||||
|
|
||||||
|
import org.elasticsearch.action.search.SearchResponse;
|
||||||
|
import org.elasticsearch.action.search.SearchType;
|
||||||
|
import org.elasticsearch.search.aggregations.AggregationBuilders;
|
||||||
|
import org.elasticsearch.search.aggregations.metrics.sum.Sum;
|
||||||
|
import org.elasticsearch.test.ElasticsearchIntegrationTest;
|
||||||
|
|
||||||
|
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;
|
||||||
|
|
||||||
|
/**
|
||||||
|
* {@link SearchType#COUNT} is deprecated but let's make sure it still works as expected.
|
||||||
|
*/
|
||||||
|
public class CountSearchTests extends ElasticsearchIntegrationTest {
|
||||||
|
|
||||||
|
public void testDuelCountQueryThenFetch() throws Exception {
|
||||||
|
createIndex("idx");
|
||||||
|
ensureYellow();
|
||||||
|
indexRandom(true,
|
||||||
|
client().prepareIndex("idx", "type", "1").setSource("foo", "bar", "bar", 3),
|
||||||
|
client().prepareIndex("idx", "type", "2").setSource("foo", "baz", "bar", 10),
|
||||||
|
client().prepareIndex("idx", "type", "3").setSource("foo", "foo", "bar", 7));
|
||||||
|
|
||||||
|
final SearchResponse resp1 = client().prepareSearch("idx").setSize(0).addAggregation(AggregationBuilders.sum("bar").field("bar")).execute().get();
|
||||||
|
assertSearchResponse(resp1);
|
||||||
|
final SearchResponse resp2 = client().prepareSearch("idx").setSearchType(SearchType.COUNT).addAggregation(AggregationBuilders.sum("bar").field("bar")).execute().get();
|
||||||
|
assertSearchResponse(resp2);
|
||||||
|
|
||||||
|
assertEquals(resp1.getHits().getTotalHits(), resp2.getHits().getTotalHits());
|
||||||
|
Sum sum1 = resp1.getAggregations().get("bar");
|
||||||
|
Sum sum2 = resp2.getAggregations().get("bar");
|
||||||
|
assertEquals(sum1.getValue(), sum2.getValue(), 0d);
|
||||||
|
}
|
||||||
|
|
||||||
|
public void testCloseContextEvenWithExplicitSize() throws Exception {
|
||||||
|
createIndex("idx");
|
||||||
|
ensureYellow();
|
||||||
|
indexRandom(true,
|
||||||
|
client().prepareIndex("idx", "type", "1").setSource("foo", "bar", "bar", 3),
|
||||||
|
client().prepareIndex("idx", "type", "2").setSource("foo", "baz", "bar", 10),
|
||||||
|
client().prepareIndex("idx", "type", "3").setSource("foo", "foo", "bar", 7));
|
||||||
|
|
||||||
|
client().prepareSearch("idx").setSearchType(SearchType.COUNT).setSize(2).addAggregation(AggregationBuilders.sum("bar").field("bar")).execute().get();
|
||||||
|
}
|
||||||
|
|
||||||
|
}
|
@ -264,7 +264,7 @@ public class MinDocCountTests extends AbstractTermsTests {
|
|||||||
private void testMinDocCountOnTerms(String field, Script script, Terms.Order order, String include, boolean retryOnFailure) throws Exception {
|
private void testMinDocCountOnTerms(String field, Script script, Terms.Order order, String include, boolean retryOnFailure) throws Exception {
|
||||||
// all terms
|
// all terms
|
||||||
final SearchResponse allTermsResponse = client().prepareSearch("idx").setTypes("type")
|
final SearchResponse allTermsResponse = client().prepareSearch("idx").setTypes("type")
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setQuery(QUERY)
|
.setQuery(QUERY)
|
||||||
.addAggregation(script.apply(terms("terms"), field)
|
.addAggregation(script.apply(terms("terms"), field)
|
||||||
.collectMode(randomFrom(SubAggCollectionMode.values()))
|
.collectMode(randomFrom(SubAggCollectionMode.values()))
|
||||||
@ -281,7 +281,7 @@ public class MinDocCountTests extends AbstractTermsTests {
|
|||||||
for (long minDocCount = 0; minDocCount < 20; ++minDocCount) {
|
for (long minDocCount = 0; minDocCount < 20; ++minDocCount) {
|
||||||
final int size = randomIntBetween(1, cardinality + 2);
|
final int size = randomIntBetween(1, cardinality + 2);
|
||||||
final SearchRequest request = client().prepareSearch("idx").setTypes("type")
|
final SearchRequest request = client().prepareSearch("idx").setTypes("type")
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setQuery(QUERY)
|
.setQuery(QUERY)
|
||||||
.addAggregation(script.apply(terms("terms"), field)
|
.addAggregation(script.apply(terms("terms"), field)
|
||||||
.collectMode(randomFrom(SubAggCollectionMode.values()))
|
.collectMode(randomFrom(SubAggCollectionMode.values()))
|
||||||
@ -349,7 +349,7 @@ public class MinDocCountTests extends AbstractTermsTests {
|
|||||||
private void testMinDocCountOnHistogram(Histogram.Order order) throws Exception {
|
private void testMinDocCountOnHistogram(Histogram.Order order) throws Exception {
|
||||||
final int interval = randomIntBetween(1, 3);
|
final int interval = randomIntBetween(1, 3);
|
||||||
final SearchResponse allResponse = client().prepareSearch("idx").setTypes("type")
|
final SearchResponse allResponse = client().prepareSearch("idx").setTypes("type")
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setQuery(QUERY)
|
.setQuery(QUERY)
|
||||||
.addAggregation(histogram("histo").field("d").interval(interval).order(order).minDocCount(0))
|
.addAggregation(histogram("histo").field("d").interval(interval).order(order).minDocCount(0))
|
||||||
.execute().actionGet();
|
.execute().actionGet();
|
||||||
@ -358,7 +358,7 @@ public class MinDocCountTests extends AbstractTermsTests {
|
|||||||
|
|
||||||
for (long minDocCount = 0; minDocCount < 50; ++minDocCount) {
|
for (long minDocCount = 0; minDocCount < 50; ++minDocCount) {
|
||||||
final SearchResponse response = client().prepareSearch("idx").setTypes("type")
|
final SearchResponse response = client().prepareSearch("idx").setTypes("type")
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setQuery(QUERY)
|
.setQuery(QUERY)
|
||||||
.addAggregation(histogram("histo").field("d").interval(interval).order(order).minDocCount(minDocCount))
|
.addAggregation(histogram("histo").field("d").interval(interval).order(order).minDocCount(minDocCount))
|
||||||
.execute().actionGet();
|
.execute().actionGet();
|
||||||
@ -370,7 +370,7 @@ public class MinDocCountTests extends AbstractTermsTests {
|
|||||||
private void testMinDocCountOnDateHistogram(Histogram.Order order) throws Exception {
|
private void testMinDocCountOnDateHistogram(Histogram.Order order) throws Exception {
|
||||||
final int interval = randomIntBetween(1, 3);
|
final int interval = randomIntBetween(1, 3);
|
||||||
final SearchResponse allResponse = client().prepareSearch("idx").setTypes("type")
|
final SearchResponse allResponse = client().prepareSearch("idx").setTypes("type")
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setQuery(QUERY)
|
.setQuery(QUERY)
|
||||||
.addAggregation(dateHistogram("histo").field("date").interval(DateHistogramInterval.DAY).order(order).minDocCount(0))
|
.addAggregation(dateHistogram("histo").field("date").interval(DateHistogramInterval.DAY).order(order).minDocCount(0))
|
||||||
.execute().actionGet();
|
.execute().actionGet();
|
||||||
@ -379,7 +379,7 @@ public class MinDocCountTests extends AbstractTermsTests {
|
|||||||
|
|
||||||
for (long minDocCount = 0; minDocCount < 50; ++minDocCount) {
|
for (long minDocCount = 0; minDocCount < 50; ++minDocCount) {
|
||||||
final SearchResponse response = client().prepareSearch("idx").setTypes("type")
|
final SearchResponse response = client().prepareSearch("idx").setTypes("type")
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setQuery(QUERY)
|
.setQuery(QUERY)
|
||||||
.addAggregation(dateHistogram("histo").field("date").interval(DateHistogramInterval.DAY).order(order).minDocCount(minDocCount))
|
.addAggregation(dateHistogram("histo").field("date").interval(DateHistogramInterval.DAY).order(order).minDocCount(minDocCount))
|
||||||
.execute().actionGet();
|
.execute().actionGet();
|
||||||
|
@ -664,7 +664,7 @@ public class GeoDistanceTests extends ElasticsearchIntegrationTest {
|
|||||||
logger.info("Now testing GeoDistance={}, distance={}, origin=({}, {})", geoDistance, distance, originLat, originLon);
|
logger.info("Now testing GeoDistance={}, distance={}, origin=({}, {})", geoDistance, distance, originLat, originLon);
|
||||||
long matches = -1;
|
long matches = -1;
|
||||||
for (String optimizeBbox : Arrays.asList("none", "memory", "indexed")) {
|
for (String optimizeBbox : Arrays.asList("none", "memory", "indexed")) {
|
||||||
SearchResponse resp = client().prepareSearch("index").setSearchType(SearchType.COUNT).setQuery(QueryBuilders.constantScoreQuery(
|
SearchResponse resp = client().prepareSearch("index").setSize(0).setQuery(QueryBuilders.constantScoreQuery(
|
||||||
FilterBuilders.geoDistanceFilter("location").point(originLat, originLon).distance(distance).geoDistance(geoDistance).optimizeBbox(optimizeBbox))).execute().actionGet();
|
FilterBuilders.geoDistanceFilter("location").point(originLat, originLon).distance(distance).geoDistance(geoDistance).optimizeBbox(optimizeBbox))).execute().actionGet();
|
||||||
assertSearchResponse(resp);
|
assertSearchResponse(resp);
|
||||||
logger.info("{} -> {} hits", optimizeBbox, resp.getHits().totalHits());
|
logger.info("{} -> {} hits", optimizeBbox, resp.getHits().totalHits());
|
||||||
|
@ -50,7 +50,7 @@ public class SearchPreferenceTests extends ElasticsearchIntegrationTest {
|
|||||||
client().admin().cluster().prepareHealth().setWaitForStatus(ClusterHealthStatus.RED).execute().actionGet();
|
client().admin().cluster().prepareHealth().setWaitForStatus(ClusterHealthStatus.RED).execute().actionGet();
|
||||||
String[] preferences = new String[] {"_primary", "_local", "_primary_first", "_prefer_node:somenode", "_prefer_node:server2"};
|
String[] preferences = new String[] {"_primary", "_local", "_primary_first", "_prefer_node:somenode", "_prefer_node:server2"};
|
||||||
for (String pref : preferences) {
|
for (String pref : preferences) {
|
||||||
SearchResponse searchResponse = client().prepareSearch().setSearchType(SearchType.COUNT).setPreference(pref).execute().actionGet();
|
SearchResponse searchResponse = client().prepareSearch().setSize(0).setPreference(pref).execute().actionGet();
|
||||||
assertThat(RestStatus.OK, equalTo(searchResponse.status()));
|
assertThat(RestStatus.OK, equalTo(searchResponse.status()));
|
||||||
assertThat(pref, searchResponse.getFailedShards(), greaterThanOrEqualTo(0));
|
assertThat(pref, searchResponse.getFailedShards(), greaterThanOrEqualTo(0));
|
||||||
searchResponse = client().prepareSearch().setPreference(pref).execute().actionGet();
|
searchResponse = client().prepareSearch().setPreference(pref).execute().actionGet();
|
||||||
@ -59,7 +59,7 @@ public class SearchPreferenceTests extends ElasticsearchIntegrationTest {
|
|||||||
}
|
}
|
||||||
|
|
||||||
//_only_local is a stricter preference, we need to send the request to a data node
|
//_only_local is a stricter preference, we need to send the request to a data node
|
||||||
SearchResponse searchResponse = dataNodeClient().prepareSearch().setSearchType(SearchType.COUNT).setPreference("_only_local").execute().actionGet();
|
SearchResponse searchResponse = dataNodeClient().prepareSearch().setSize(0).setPreference("_only_local").execute().actionGet();
|
||||||
assertThat(RestStatus.OK, equalTo(searchResponse.status()));
|
assertThat(RestStatus.OK, equalTo(searchResponse.status()));
|
||||||
assertThat("_only_local", searchResponse.getFailedShards(), greaterThanOrEqualTo(0));
|
assertThat("_only_local", searchResponse.getFailedShards(), greaterThanOrEqualTo(0));
|
||||||
searchResponse = dataNodeClient().prepareSearch().setPreference("_only_local").execute().actionGet();
|
searchResponse = dataNodeClient().prepareSearch().setPreference("_only_local").execute().actionGet();
|
||||||
|
@ -66,6 +66,7 @@ public class SearchScanTests extends ElasticsearchIntegrationTest {
|
|||||||
.execute().actionGet();
|
.execute().actionGet();
|
||||||
|
|
||||||
assertThat(searchResponse.getHits().totalHits(), equalTo((long)builders.length/2));
|
assertThat(searchResponse.getHits().totalHits(), equalTo((long)builders.length/2));
|
||||||
|
assertThat(searchResponse.getHits().getHits().length, equalTo(0));
|
||||||
|
|
||||||
// start scrolling, until we get not results
|
// start scrolling, until we get not results
|
||||||
while (true) {
|
while (true) {
|
||||||
|
@ -427,7 +427,7 @@ public class SearchScrollTests extends ElasticsearchIntegrationTest {
|
|||||||
.setQuery(QueryBuilders.matchAllQuery())
|
.setQuery(QueryBuilders.matchAllQuery())
|
||||||
.setSize(Integer.MAX_VALUE);
|
.setSize(Integer.MAX_VALUE);
|
||||||
|
|
||||||
if (searchType == SearchType.SCAN || searchType != SearchType.COUNT && randomBoolean()) {
|
if (searchType == SearchType.SCAN || randomBoolean()) {
|
||||||
builder.setScroll("1m");
|
builder.setScroll("1m");
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -272,13 +272,13 @@ public class SuggestSearchTests extends ElasticsearchIntegrationTest {
|
|||||||
|
|
||||||
phraseSuggestion.field("nosuchField");
|
phraseSuggestion.field("nosuchField");
|
||||||
{
|
{
|
||||||
SearchRequestBuilder suggestBuilder = client().prepareSearch().setSearchType(SearchType.COUNT);
|
SearchRequestBuilder suggestBuilder = client().prepareSearch().setSize(0);
|
||||||
suggestBuilder.setSuggestText("tetsting sugestion");
|
suggestBuilder.setSuggestText("tetsting sugestion");
|
||||||
suggestBuilder.addSuggestion(phraseSuggestion);
|
suggestBuilder.addSuggestion(phraseSuggestion);
|
||||||
assertThrows(suggestBuilder, SearchPhaseExecutionException.class);
|
assertThrows(suggestBuilder, SearchPhaseExecutionException.class);
|
||||||
}
|
}
|
||||||
{
|
{
|
||||||
SearchRequestBuilder suggestBuilder = client().prepareSearch().setSearchType(SearchType.COUNT);
|
SearchRequestBuilder suggestBuilder = client().prepareSearch().setSize(0);
|
||||||
suggestBuilder.setSuggestText("tetsting sugestion");
|
suggestBuilder.setSuggestText("tetsting sugestion");
|
||||||
suggestBuilder.addSuggestion(phraseSuggestion);
|
suggestBuilder.addSuggestion(phraseSuggestion);
|
||||||
assertThrows(suggestBuilder, SearchPhaseExecutionException.class);
|
assertThrows(suggestBuilder, SearchPhaseExecutionException.class);
|
||||||
@ -815,13 +815,13 @@ public class SuggestSearchTests extends ElasticsearchIntegrationTest {
|
|||||||
refresh();
|
refresh();
|
||||||
|
|
||||||
// When searching on a shard with a non existing mapping, we should fail
|
// When searching on a shard with a non existing mapping, we should fail
|
||||||
SearchRequestBuilder request = client().prepareSearch().setSearchType(SearchType.COUNT)
|
SearchRequestBuilder request = client().prepareSearch().setSize(0)
|
||||||
.setSuggestText("tetsting sugestion")
|
.setSuggestText("tetsting sugestion")
|
||||||
.addSuggestion(phraseSuggestion("did_you_mean").field("fielddoesnotexist").maxErrors(5.0f));
|
.addSuggestion(phraseSuggestion("did_you_mean").field("fielddoesnotexist").maxErrors(5.0f));
|
||||||
assertThrows(request, SearchPhaseExecutionException.class);
|
assertThrows(request, SearchPhaseExecutionException.class);
|
||||||
|
|
||||||
// When searching on a shard which does not hold yet any document of an existing type, we should not fail
|
// When searching on a shard which does not hold yet any document of an existing type, we should not fail
|
||||||
SearchResponse searchResponse = client().prepareSearch().setSearchType(SearchType.COUNT)
|
SearchResponse searchResponse = client().prepareSearch().setSize(0)
|
||||||
.setSuggestText("tetsting sugestion")
|
.setSuggestText("tetsting sugestion")
|
||||||
.addSuggestion(phraseSuggestion("did_you_mean").field("name").maxErrors(5.0f))
|
.addSuggestion(phraseSuggestion("did_you_mean").field("name").maxErrors(5.0f))
|
||||||
.get();
|
.get();
|
||||||
@ -864,7 +864,7 @@ public class SuggestSearchTests extends ElasticsearchIntegrationTest {
|
|||||||
refresh();
|
refresh();
|
||||||
|
|
||||||
SearchResponse searchResponse = client().prepareSearch()
|
SearchResponse searchResponse = client().prepareSearch()
|
||||||
.setSearchType(SearchType.COUNT)
|
.setSize(0)
|
||||||
.setSuggestText("tetsting sugestion")
|
.setSuggestText("tetsting sugestion")
|
||||||
.addSuggestion(phraseSuggestion("did_you_mean").field("name").maxErrors(5.0f))
|
.addSuggestion(phraseSuggestion("did_you_mean").field("name").maxErrors(5.0f))
|
||||||
.get();
|
.get();
|
||||||
@ -1268,7 +1268,7 @@ public class SuggestSearchTests extends ElasticsearchIntegrationTest {
|
|||||||
|
|
||||||
protected Suggest searchSuggest(String suggestText, int expectShardsFailed, SuggestionBuilder<?>... suggestions) {
|
protected Suggest searchSuggest(String suggestText, int expectShardsFailed, SuggestionBuilder<?>... suggestions) {
|
||||||
if (randomBoolean()) {
|
if (randomBoolean()) {
|
||||||
SearchRequestBuilder builder = client().prepareSearch().setSearchType(SearchType.COUNT);
|
SearchRequestBuilder builder = client().prepareSearch().setSize(0);
|
||||||
if (suggestText != null) {
|
if (suggestText != null) {
|
||||||
builder.setSuggestText(suggestText);
|
builder.setSuggestText(suggestText);
|
||||||
}
|
}
|
||||||
|
Loading…
x
Reference in New Issue
Block a user