This commit enables the completion suggester to return documents
associated with suggestions. The document source is now returned
with every suggestion and respects source filtering options.
For suggest queries spanning more than one shard, the suggest is
executed in two phases, where the last phase fetches the relevant
documents from shards. This implies that executing suggest requests
against a single shard is more performant, as it avoids the document
fetch overhead incurred when the suggest spans multiple shards.
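A hedged sketch of what such a request and its source filtering might look like; the index, type and field names here are hypothetical:
```
POST /music/_search?size=0
{
  "_source": ["title", "artist"],
  "suggest": {
    "song-suggest": {
      "prefix": "nir",
      "completion": { "field": "suggest" }
    }
  }
}
```
Each returned suggestion option would then carry a `_source` filtered down to `title` and `artist`.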
Adds `warnings` syntax to the yaml test that allows you to expect
a `Warning` header that looks like:
```
- do:
    warnings:
        - '[index] is deprecated'
        - quotes are not required because yaml
        - but this argument is always a list, never a single string
        - no matter how many warnings you expect
    get:
        index: test
        type:  test
        id:    1
```
These are accessible from the docs with:
```
// TEST[warning:some warning]
```
This should help to force you to update the docs if you deprecate
something. You *must* add the warnings marker to the docs or the build
will fail. While you are there you *should* update the docs to add
deprecation warnings visible in the rendered results.
Invocation counts can be used to help judge the selectivity of individual query components in the context of the entire query. E.g. a query may not look selective when run by itself (it matches most of the index), but when run in the context of a full search request it is evaluated only rarely due to execution order.
Since this is modifying the base timing class, it'll enrich both query and agg profiles (as well as future profile results)
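For illustration, a hedged sketch of what a timing breakdown with invocation counts might look like; the exact field names are assumptions based on this description:
```
"breakdown": {
  "build_scorer": 3779,
  "build_scorer_count": 6,
  "next_doc": 1126,
  "next_doc_count": 24,
  "advance": 0,
  "advance_count": 0
}
```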
Rename `fields` to `stored_fields` and add `docvalue_fields`
The `stored_fields` parameter will no longer try to retrieve fields from the _source, but will only return stored fields.
`fields` will throw an exception if the user uses it.
Add `docvalue_fields` as an adjunct to `fielddata_fields`, which is deprecated. `docvalue_fields` will try to load the value from doc values and fall back to the fielddata cache if doc values are not enabled on that field.
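A hedged example of the renamed and new parameters in a search request; the index and field names are hypothetical, and `tags` is assumed to be a stored field:
```
GET /my_index/_search
{
  "query": { "match_all": {} },
  "stored_fields": ["tags"],
  "docvalue_fields": ["created_at"]
}
```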
Closes #18943
This pull request adds two util functions to the Mustache templating engine:
- {{#toJson}}my_map{{/toJson}} to render a Map parameter as a JSON string
- {{#join}}my_iterable{{/join}} to render any iterable (including arrays) as a comma-separated list of values like `1, 2, 3`. It's also possible to change the default delimiter (comma) to something else.
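A hedged sketch of both functions inside a search template source; the parameter names (`status`, `emails`) are hypothetical:
```
{
  "query": {
    "bool": {
      "must": [
        { "terms": { "status": {{#toJson}}status{{/toJson}} } },
        { "match": { "emails": "{{#join}}emails{{/join}}" } }
      ]
    }
  }
}
```
With a `status` param of `["pending", "published"]`, `{{#toJson}}status{{/toJson}}` would expand to that JSON array, while `{{#join}}emails{{/join}}` would render an array param as a single comma-separated string.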
closes #18970
This commit moves template support out of the Search API to its own dedicated Search Template API in the lang-mustache module. It provides a new SearchTemplateAction that can be used to render templates before the request gets delegated to the usual Search API. The current REST endpoints are identical, but the Render Search Template endpoint now uses the same Search Template API with a new "simulate" option. When this option is enabled, the Search Template API only renders the template and returns immediately, without executing the search.
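A hedged sketch of rendering a template without executing the search; the endpoint path and body fields are assumptions based on this description:
```
POST /_render/template
{
  "inline": {
    "query": { "match": { "{{my_field}}": "{{my_value}}" } }
  },
  "params": { "my_field": "title", "my_value": "elasticsearch" }
}
```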
Closes #17906
This commit removes the search preference _only_node as the same
functionality can be obtained by using the search preference
_only_nodes. This commit also adds a test that ensures that _only_nodes
will continue to support specifying node IDs.
Relates #18875
The search preference _prefer_node allows specifying a single node to
prefer when routing a request. This functionality can be enhanced by
permitting multiple nodes to be preferred. This commit replaces the
search preference _prefer_node with the search preference _prefer_nodes,
which supplants the former (a single node can still be specified) and
otherwise adds functionality.
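Both preferences accept a comma-separated list; a hedged example with hypothetical node IDs:
```
GET /twitter/_search?preference=_prefer_nodes:node-1,node-2
GET /twitter/_search?preference=_only_nodes:node-1,node-2
```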
Relates #18872
By default the number of searches an msearch request executes is capped at the number of
nodes multiplied by the default size of the search threadpool. This default can be
overridden by using the newly added `max_concurrent_searches` parameter.
Before, the msearch api would execute all searches concurrently. If many large
msearch requests were executed, this could lead to some searches being rejected
while other searches in the msearch request succeeded.
The goal of this change is to avoid exhausting the search TP.
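A hedged example of capping concurrency on an msearch request; the index name is hypothetical:
```
GET /_msearch?max_concurrent_searches=4
{"index": "test"}
{"query": {"match_all": {}}}
{"index": "test"}
{"query": {"match": {"title": "elasticsearch"}}}
```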
Closes #17926
API:
```
curl -XGET 'localhost:9200/twitter/tweet/_search?scroll=1m' -d '{
    "slice": {
        "field": "_uid", <1>
        "id": 0, <2>
        "max": 10 <3>
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}'
```
<1> (optional) The field name used to do the slicing (_uid by default)
<2> The id of the slice
<3> The maximum number of slices
By default the splitting is done on the shards first and then locally on each shard using the _uid field
with the following formula:
`slice(doc) = floorMod(hashCode(doc._uid), max)`
For instance if the number of shards is equal to 2 and the user requested 4 slices then the slices 0 and 2 are assigned
to the first shard and the slices 1 and 3 are assigned to the second shard.
Each scroll is independent and can be processed in parallel like any scroll request.
Closes #13494
Two of the snippets in validate weren't working properly so they are
marked as skip and linked to this:
https://github.com/elastic/elasticsearch/issues/18254
We didn't properly handle empty parameter values. We were sending
them as the literal string "null". Now we do better and send them
as the empty string.
We should prevent highlighting if a field is anything but a text or keyword field.
However, someone might implement a custom field type that has text and still want to
highlight on that. We cannot know in advance if the highlighter will be able to
highlight such a field and so we do the following:
If the field is only highlighted because the field matches a wildcard we assume
it was a mistake and do not process it.
If the field was explicitly given we assume that whoever issued the query knew
what they were doing and try to highlight anyway.
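A hedged illustration of the distinction; the index and field names are hypothetical. A non-text field matched only by the `*` wildcard would be silently skipped, while `my_custom_text_field`, being explicitly requested, is handed to the highlighter regardless:
```
GET /my_index/_search
{
  "query": { "match": { "title": "quick" } },
  "highlight": {
    "fields": {
      "*": {},
      "my_custom_text_field": {}
    }
  }
}
```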
closes #17537
Fix a limitation that prevented hierarchical inner hits from being defined in the query dsl.
Removed the nested_path, parent_child_type and query options from the inner hits dsl. These options are only set by ES
upon parsing the has_child, has_parent and nested queries, using their respective query builders.
These options are still used internally; when they are set, a new private copy is created based on the
provided InnerHitBuilder, configuring either nested_path or parent_child_type and the inner query of the query builder
being used.
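A hedged sketch of hierarchical inner hits defined directly in the query dsl; the paths and field names are hypothetical:
```
GET /my_index/_search
{
  "query": {
    "nested": {
      "path": "comments",
      "inner_hits": {},
      "query": {
        "nested": {
          "path": "comments.replies",
          "inner_hits": {},
          "query": { "match": { "comments.replies.text": "elasticsearch" } }
        }
      }
    }
  }
}
```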
Closes #11118
* Add isSearchable and isAggregatable (collapsed to true if any of the instances of that field are searchable or aggregatable).
* Accept wildcards in field names.
* Add a section named conflicts for fields with the same name but with incompatible types (instead of throwing an exception).
IMHO the original text here was incomplete. Adding the simple words 'in the index mapping' makes this sentence clearer. Perhaps it would be even clearer to make this a link.
This commit adds the new `action.search.shard_count.limit` setting which
configures the maximum number of shards that can be queried in a single search
request. It has a default value of 1000.
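A hedged example of adjusting the limit dynamically; the value is arbitrary:
```
PUT /_cluster/settings
{
  "transient": {
    "action.search.shard_count.limit": 2000
  }
}
```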
Both top level and inline inner hits are now covered by InnerHitBuilder.
Although there are differences between top level and inline inner hits,
they now make use of the same builder logic.
The parsing of top level inner hits slightly changed to be more readable.
Before, the nested path or parent/child type had to be specified as an encapsulating
json object; now these settings are simple fields. Before, this was required
to allow streaming parsing of inner hits without missing contextual information.
Once some issues are fixed with inline inner hits (around multi level hierarchy of inner hits),
top level inner hits will be deprecated and removed in the next major version.
Also replaced the PercolatorQueryRegistry with the new PercolatorQueryCache.
The PercolatorFieldMapper stores the rewritten form of each percolator query's xcontent
in a binary doc values field. This makes sure that the query rewrite happens only during
indexing (some queries, for example, fetch shapes or terms in remote indices) and
speeds up the loading of the queries in the percolator query cache.
Because the percolator now works inside the search infrastructure a number of features
(sorting fields, pagination, fetch features) are available out of the box.
The following feature requests are automatically implemented via this refactoring:
Closes #10741, Closes #7297, Closes #13176, Closes #13978, Closes #11264, Closes #4317
The search_after parameter provides a way to efficiently paginate from one page to the next. This parameter accepts an array of sort values; those values are then used by the searcher to sort the top hits, starting from the first document that is greater than the sort values.
This parameter must be used in conjunction with the sort parameter; it must contain exactly the same number of values as the number of fields to sort on.
NOTE: A field with one unique value per document should be used as the last element of the sort specification. Otherwise the sort order for documents that have the same sort values would be undefined. The recommended way is to use the field `_uid`, which is certain to contain one unique value for each document.
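A hedged example of fetching the next page; the index, fields and sort values are hypothetical:
```
GET /twitter/tweet/_search
{
  "size": 10,
  "query": { "match": { "title": "elasticsearch" } },
  "sort": [
    { "date": "asc" },
    { "_uid": "desc" }
  ],
  "search_after": [1463538857, "tweet#654323"]
}
```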
Fixes #8192
* Added a percolator field mapper that extracts the query terms and indexes these terms with the percolator query.
* At percolate time these extracted terms are used to pre-select the percolator queries that are likely to match. This can significantly cut down the time it takes to percolate, whereas before all percolator queries were evaluated against the document being percolated.
* Changes made to percolator queries are no longer immediately visible; a refresh needs to happen before the changes are visible.
* By default the percolate api only returns up to 10 matches instead of returning all matching percolator queries.
* Made percolate more modular, so that it is easier to add unit tests.
* Added unit tests for the percolator.
Closes #12664, Closes #13646
Provides a new flag which can be enabled on a per-request basis.
When `"profile": true` is set, the search request will execute in a
mode that collects detailed timing information for query components.
```
GET /test/test/_search
{
  "profile": true,
  "query": {
    "match": {
      "foo": "bar"
    }
  }
}
```
Closes #14889
Squashed commit of the following:
commit a92db5723d2c61b8449bd163d2f006d12f9889ad
Merge: 117dd99 3f87b08
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Dec 17 09:44:10 2015 -0500
Merge remote-tracking branch 'upstream/master' into query_profiler
commit 117dd9992e8014b70203c6110925643769d80e62
Merge: 9b29d68 82a64fd
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Dec 15 13:27:18 2015 -0500
Merge remote-tracking branch 'upstream/master' into query_profiler
Conflicts:
core/src/main/java/org/elasticsearch/search/SearchService.java
commit 9b29d6823a71140ecd872df25ff9f7478e7fe766
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Dec 14 16:13:23 2015 -0500
[TEST] Profile flag needs to be set, ensure searches go against primary only for consistency
commit 4d602d8ad1f8cbc7b475450921fa3bc7d395b34f
Merge: 8b48e87 7742c1e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Dec 14 10:56:25 2015 -0500
Merge remote-tracking branch 'upstream/master' into query_profiler
commit 8b48e876348b163ab730eeca7fa35142165b05f9
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Dec 14 10:56:01 2015 -0500
Delegate straight to in.matchCost, no need for profiling
commit fde3b0587911f0b5f15e779c671d0510cbd568a9
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Dec 14 10:28:23 2015 -0500
Documentation tweaks, renaming build_weight -> create_weight
commit 46f5e011ee23fe9bb8a1f11ceb4fa9d19fe48e2e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Dec 14 10:27:52 2015 -0500
Profile TwoPhaseIterator should override matchCost()
commit b59f894ddb11b2a7beebba06c4ec5583ff91a7b2
Merge: 9aa1a3a b4e0c87
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 9 14:23:26 2015 -0500
Merge remote-tracking branch 'upstream/master' into query_profiler
commit 9aa1a3a25c34c9cd9fffaa6114c25a0ec791307d
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 9 13:41:48 2015 -0500
Revert "Move some of the collector wrapping logic into ProfileCollectorBuilder"
This reverts commit 02cc31767fb76a7ecd44a302435e93a05fb4220e.
commit 57f7c04cea66b3f98ba2bec4879b98e4fba0b3c0
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 9 13:41:31 2015 -0500
Revert "Rearrange if/else to make intent clearer"
This reverts commit 59b63c533fcaddcdfe4656e86a6f6c4cb1bc4a00.
commit 2874791b9c9cd807113e75e38be465f3785c154e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 9 13:38:13 2015 -0500
Revert "Move state into ProfileCollectorBuilder"
This reverts commit 0bb3ee0dd96170b06f07ec9e2435423d686a5ae6.
commit 0bb3ee0dd96170b06f07ec9e2435423d686a5ae6
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Dec 3 11:21:55 2015 -0500
Move state into ProfileCollectorBuilder
commit 59b63c533fcaddcdfe4656e86a6f6c4cb1bc4a00
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 2 17:21:12 2015 -0500
Rearrange if/else to make intent clearer
commit 511db0af2f3a86328028b88a6b25fa3dfbab963b
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 2 17:12:06 2015 -0500
Rename WEIGHT -> BUILD_WEIGHT
commit 02cc31767fb76a7ecd44a302435e93a05fb4220e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Dec 2 17:11:22 2015 -0500
Move some of the collector wrapping logic into ProfileCollectorBuilder
commit e69356d3cb4c60fa281dad36d84faa64f5c32bc4
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 15:12:35 2015 -0500
Cleanup imports
commit c1b4f284f16712be60cd881f7e4a3e8175667d62
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 15:11:25 2015 -0500
Review cleanup: Make InternalProfileShardResults writeable
commit 9e61c72f7e1787540f511777050a572b7d297636
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 15:01:22 2015 -0500
Review cleanup: Merge ProfileShardResult, InternalProfileShardResult. Convert to writeable
commit 709184e1554f567c645690250131afe8568a5799
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 14:38:08 2015 -0500
Review cleanup: Merge ProfileResult, InternalProfileResult. Convert to writeable
commit 7d72690c44f626c34e9c608754bc7843dd7fd8fe
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 14:01:34 2015 -0500
Review cleanup: use primitive (and default) for profile flag
commit 97d557388541bbd3388cdcce7d9718914d88de6d
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 13:09:12 2015 -0500
Review cleanup: Use Collections.emptyMap() instead of building an empty one explicitly
commit 219585b8729a8b0982e653d99eb959efd0bef84e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 13:08:12 2015 -0500
Add todo to revisit profiler architecture in the future
commit b712edb2160e032ee4b2f2630fadf131a0936886
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 30 13:05:32 2015 -0500
Split collector serialization from timing, use writeable instead of streamable
Previously, the collector timing was done in the same class that was serialized, which required
leaving the collector null when serializing. Besides being a bit gross, this made it difficult to
change the class to Writeable.
This splits up the timing (InternalProfileCollector + ProfileCollector) and the serialization of
the times (CollectorResult). CollectorResult is writeable, and also acts as the public interface.
commit 6ddd77d066262d4400e3d338b11cebe7dd27ca78
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Nov 25 13:15:12 2015 -0500
Remove dead code
commit 06033f8a056e2121d157654a65895c82bbe93a51
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Nov 25 12:49:51 2015 -0500
Review cleanup: Delegate to in.getProfilers()
Note: Need to investigate how this is used exactly so we can add a test, it isn't touched by a
normal inner_hits query...
commit a77e13da21b4bad1176ca2b5d5b76034fb12802f
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Nov 25 11:59:58 2015 -0500
Review cleanup: collapse to single `if` statement
commit e97bb6215a5ebb508b0293ac3acd60d5ae479be1
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Nov 25 11:39:43 2015 -0500
Review cleanup: Return empty map instead of null for profile results
Note: we still need to check for nullness in SearchPhaseController, since an empty/no-hits result
won't have profiling instantiated (or any other component like aggs or suggest). Therefore
QuerySearchResult.profileResults() is still @Nullable
commit db8e691de2a727389378b459fa76c942572e6015
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Nov 25 10:14:47 2015 -0500
Review cleanup: renaming, formatting fixes, assertions
commit 9011775fe80ba22c2fd948ca64df634b4e32772d
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Nov 19 20:09:52 2015 -0500
[DOCS] Add missing annotation
commit 4b58560b06f08d4b99b149af20916ee839baabd7
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Nov 19 20:07:17 2015 -0500
[DOCS] Update documentation for new format
commit f0458c58e5538ed8ec94849d4baf3250aa9ec841
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Nov 17 10:14:09 2015 +0100
Reduce visibility of internal classes.
commit d0a7d319098e60b028fa772bf8a99b2df9cf6146
Merge: e158070 1bdf29e
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Nov 17 10:09:18 2015 +0100
Merge branch 'master' into query_profiler
commit e158070a48cb096551f3bb3ecdcf2b53bbc5e3c5
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Nov 17 10:08:48 2015 +0100
Fix compile error due to bad html in javadocs.
commit a566b5d08d659daccb087a9afbe908ec3d96cd6e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 16 17:48:37 2015 -0500
Remove unused collector
commit 4060cd72d150cc68573dbde62ca7321c47f75703
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 16 17:48:10 2015 -0500
Comment cleanup
commit 43137952bf74728f5f5d5a8d1bfc073e0f9fe4f9
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Nov 16 17:32:06 2015 -0500
Fix negative formatted time
commit 5ef3a980266326aff12d4fe380f73455ff28209f
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 17:10:17 2015 +0100
Fix javadocs.
commit 276114d29e4b17a0cc0982cfff51434f712dc59e
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 16:25:23 2015 +0100
Fix: include rewrite time as well...
commit 21d9e17d05487bf4903ae3d2ab6f429bece2ffef
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 15:10:15 2015 +0100
Remove TODO about profiling explain.
commit 105a31e8e570efb879447159c3852871f5cf7db4
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 14:59:30 2015 +0100
Fix nocommit now that the global collector is a root collector.
commit 2e8fc5cf84adb1bfaba296808c329e5f982c9635
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 14:53:38 2015 +0100
Make collector wrapping more explicit/robust (and a bit less magical).
commit 5e30b570b0835e1ce79a57933a31b6a2d0d58e2d
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 12:44:03 2015 +0100
Simplify recording API a bit.
commit 9b453afced6adc0a59ca1d67d90c28796b105185
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 10:54:25 2015 +0100
Fix serialization-related nocommits.
commit ad97b200bb123d4e9255e7c8e02f7e43804057a5
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Nov 13 10:46:30 2015 +0100
Fix DFS.
commit a6de06986cd348a831bd45e4f524d2e14d9e03c3
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Nov 12 19:29:16 2015 +0100
Remove forbidden @Test annotation.
commit 4991a28e19501109af98026e14756cb25a56f4f4
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Nov 12 19:25:59 2015 +0100
Limit the impact of query profiling on the SearchContext API.
Rule is: we can put as much as we want in the search.profile package but should
aim at touching as little as possible other areas of the code base.
commit 353d8d75a5ce04d9c3908a0a63d4ca6e884c519a
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Nov 12 18:05:09 2015 +0100
Remove dead code.
commit a3ffafb5ddbb5a2acf43403c946e5ed128f47528
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Nov 12 15:30:35 2015 +0100
Remove call to forbidden String.toLowerCase() API.
commit 1fa8c7a00324fa4e32bd24135ebba5ecf07606f1
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Nov 12 15:30:27 2015 +0100
Fix compilation.
commit 2067f1797e53bef0e1a8c9268956bc5fb8f8ad97
Merge: 22e631f fac472f
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Nov 12 15:21:12 2015 +0100
Merge branch 'master' into query_profiler
commit 22e631fe6471fed19236578e97c628d5cda401a9
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Nov 3 18:52:05 2015 -0500
Fix and simplify serialization of shard profile results
commit 461da250809451cd2b47daf647343afbb4b327f2
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Nov 3 18:32:22 2015 -0500
Remove helper methods, simpler without them
commit 5687aa1c93d45416d895c2eecc0e6a6b302139f2
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Nov 3 18:29:32 2015 -0500
[TESTS] Fix tests for new rewrite format
commit ba9e82857fc6d4c7b72ef4d962d2102459365299
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 30 15:28:14 2015 -0400
Rewrites begone! Record all rewrites as a single time metric
commit 5f28d7cdff9ee736651d564f71f713bf45fb1d91
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 29 15:36:06 2015 -0400
Merge duplicate rewrites into one entry
By using the Query as the key in a map, we can easily merge rewrites together. This means
the preProcess(), assertion and main query rewrites all get merged together. Downside is that
rewrites of the same Query (hashcode) but in different places get lumped together. I think the
simplicity of the solution is better than the slight loss in output fidelity.
commit 9a601ea46bb21052746157a45dcc6de6bc350e9c
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 29 15:28:27 2015 -0400
Allow multiple "searches" per profile (e.g. query + global agg)
commit ee30217328381cd83f9e653d3a4d870c1d2bdfce
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 29 11:29:18 2015 -0400
Update comment, add nullable annotation
commit 405c6463a64e118f170959827931e8c6a1661f13
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 29 11:04:30 2015 -0400
remove out-dated comment
commit 2819ae8f4cf1bfd5670dbd1c0e06195ae457b58f
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Oct 27 19:50:47 2015 +0100
Don't render children in the profiles when there are no children.
commit 7677c2ddefef321bbe74660471603d202a4ab66f
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Oct 27 19:50:35 2015 +0100
Set the profiler on the ContextIndexSearcher.
commit 74a4338c35dfed779adc025ec17cfd4d1c9f66f5
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Oct 27 19:50:01 2015 +0100
Fix json rendering.
commit 6674d5bebe187b0b0d8b424797606fdf2617dd27
Author: Adrien Grand <jpountz@gmail.com>
Date: Tue Oct 27 19:20:19 2015 +0100
Revert "nocommit - profiling never enabled because setProfile() on ContextIndexSearcher never called"
This reverts commit d3dc10949024342055f0d4fb7e16c7a43423bfab.
commit d3dc10949024342055f0d4fb7e16c7a43423bfab
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 17:20:57 2015 -0400
nocommit - profiling never enabled because setProfile() on ContextIndexSearcher never called
Previously, it was enabled by using DefaultSearchContext as a third-party "proxy", but since
the refactor to make it unit testable, setProfile() needs to be called explicitly. Unfortunately,
this is not possible because SearchService only has access to an IndexSearcher. And it is not
cast'able to a DefaultIndexSearcher.
commit b9ba9c5d1f93b9c45e97b0a4e35da6f472c9ea53
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 16:27:00 2015 -0400
[TESTS] Fix unit tests
commit cf5d1e016b2b4a583175e07c16c7152f167695ce
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 16:22:34 2015 -0400
Increment token after recording a rewrite
commit b7d08f64034e498533c4a81bff8727dd8ac2843e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 16:14:09 2015 -0400
Fix NPE if a top-level root doesn't have children
commit e4d3b514bafe2a3a9db08438c89f0ed68628f2d6
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 16:05:47 2015 -0400
Fix NPE when profiling is disabled
commit 445384fe48ed62fdd01f7fc9bf3e8361796d9593
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 16:05:37 2015 -0400
[TESTS] Fix integration tests
commit b478296bb04fece827a169e7522df0a5ea7840a3
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 15:43:24 2015 -0400
Move rewrites to their own section, remove reconciliation
Big commit because the structural change affected a lot of the wrapping code. Major changes:
- Rewrites are now in their own section in the response
- Reconciliation is gone...we don't attempt to roll the rewrites into the query tree structure
- InternalProfileShardResults (plural) simply holds a Map<String, InternalProfileShardResult> and
helps to serialize / ToXContent
- InternalProfileShardResult (singular) is now the holder for all shard-level profiling details. Currently
this includes query, collectors and rewrite. In the future it would hold suggest, aggs, etc
- When the user requests the profiled results, they get back a Map<String, ProfileShardResult>
instead of doing silly helper methods to convert to maps, etc
- Shard details are baked into a string instead of serializing the object
commit 24819ad094b208d0e94f17ce9c3f7c92f7414124
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Oct 23 10:25:38 2015 -0400
Make Profile results immutable by removing relative_time
commit bfaf095f45fed74194ef78160a8e5dcae1850f9e
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 23 10:54:59 2015 +0200
Add nocommits.
commit e9a128d0d26d5b383b52135ca886f2c987850179
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 23 10:39:37 2015 +0200
Move all profile-related classes to the same package.
commit f20b7c7fdf85384ecc37701bb65310fb8c20844f
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Oct 23 10:33:14 2015 +0200
Reorganize code a bit to ease unit testing of ProfileCollector.
commit 3261306edad6a0c70f59eaee8fe58560f61a75fd
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 18:07:28 2015 +0200
Remove irrelevant nocommit.
commit a6ac868dad12a2e17929878681f66dbd0948d322
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 18:06:45 2015 +0200
Make CollectorResult's reason a free-text field to ease bw compat.
commit 5d0bf170781a950d08b81871cd1e403e49f3cc12
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 16:50:52 2015 +0200
Add unit tests for ProfileWeight/ProfileScorer.
commit 2cd88c412c6e62252504ef69a59216adbb574ce4
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 15:55:17 2015 +0200
Rename InternalQueryProfiler to Profiler.
commit 84f5718fa6779f710da129d9e0e6ff914fd85e36
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 15:53:58 2015 +0200
Merge InternalProfileBreakdown into ProfileBreakdown.
commit 135168eaeb8999c8117ea25288104b0961ce9b35
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 13:56:57 2015 +0200
Make it possible to instantiate a ContextIndexSearcher without SearchContext.
commit 5493fb52376b48460c4ce2dedbe00cc5f6620499
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 11:53:29 2015 +0200
Move ProfileWeight/Scorer to their own files.
commit bf2d917b9dae3b32dfc29c35a7cac4ccb7556cce
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 11:38:24 2015 +0200
Fix bug that caused phrase queries to fail.
commit b2bb0c92c343334ec1703a221af24a1b55e36d53
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 11:36:17 2015 +0200
Parsing happens on the coordinating node now.
commit 416cabb8621acb5cd8dfa77374fd23e428f52fe9
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 11:22:17 2015 +0200
Fix compilation (in particular remove guava deps).
commit f996508645f842629d403fc2e71c1890c0e2cac9
Merge: 4616a25 bc3b91e
Author: Adrien Grand <jpountz@gmail.com>
Date: Thu Oct 22 10:44:38 2015 +0200
Merge branch 'master' into query_profiler
commit 4616a25afffe9c24c6531028f7fccca4303d2893
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Oct 20 12:11:32 2015 -0400
Make Java Count API compatible with profiling
commit cbfba74e16083d719722500ac226efdb5cb2ff55
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Oct 20 12:11:19 2015 -0400
Fix serialization of profile query param, NPE
commit e33ffac383b03247046913da78c8a27e457fae78
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Oct 20 11:17:48 2015 -0400
TestSearchContext should return null Profiler instead of exception
commit 73a02d69b466dc1a5b8a5f022464d6c99e6c2ac3
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Oct 19 12:07:29 2015 -0400
[DOCS] Update docs to reflect new ID format
commit 36248e388c354f954349ecd498db7b66f84ce813
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Oct 19 12:03:03 2015 -0400
Use the full [node][index][shard] string as profile result ID
commit 5cfcc4a6a6b0bcd6ebaa7c8a2d0acc32529a80e1
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 15 17:51:40 2015 -0400
Add failing test for phrase matching
Stack trace generated:
[2015-10-15 17:50:54,438][ERROR][org.elasticsearch.search.profile] shard [[JNj7RX_oSJikcnX72aGBoA][test][2]], reason [RemoteTransportException[[node_s0][local[1]][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: AssertionError[nextPosition() called more than freq() times!]; ], cause [java.lang.AssertionError: nextPosition() called more than freq() times!
at org.apache.lucene.index.AssertingLeafReader$AssertingPostingsEnum.nextPosition(AssertingLeafReader.java:353)
at org.apache.lucene.search.ExactPhraseScorer.phraseFreq(ExactPhraseScorer.java:132)
at org.apache.lucene.search.ExactPhraseScorer.access$000(ExactPhraseScorer.java:27)
at org.apache.lucene.search.ExactPhraseScorer$1.matches(ExactPhraseScorer.java:69)
at org.elasticsearch.common.lucene.search.ProfileQuery$ProfileScorer$2.matches(ProfileQuery.java:226)
at org.apache.lucene.search.ConjunctionDISI$TwoPhaseConjunctionDISI.matches(ConjunctionDISI.java:175)
at org.apache.lucene.search.ConjunctionDISI$TwoPhase.matches(ConjunctionDISI.java:213)
at org.apache.lucene.search.ConjunctionDISI.doNext(ConjunctionDISI.java:128)
at org.apache.lucene.search.ConjunctionDISI.nextDoc(ConjunctionDISI.java:151)
at org.apache.lucene.search.ConjunctionScorer.nextDoc(ConjunctionScorer.java:62)
at org.elasticsearch.common.lucene.search.ProfileQuery$ProfileScorer$1.nextDoc(ProfileQuery.java:205)
at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:224)
at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:169)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:795)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:509)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:347)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:111)
at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:366)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:378)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.local.LocalTransport$2.doRun(LocalTransport.java:280)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
commit 889fe6383370fe919aaa9f0af398e3040209e40b
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 15 17:30:38 2015 -0400
[DOCS] More docs
commit 89177965d031d84937753538b88ea5ebae2956b0
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Oct 15 09:59:09 2015 -0400
Fix multi-stage rewrites to recursively find most appropriate descendant rewrite
Previously, we chose the first rewrite that matched. But in situations where a query may
rewrite several times, this won't build the tree correctly. Instead we need to recurse
down all the rewrites until we find the most appropriate "leaf" rewrite
The implementation of this is kinda gross: we recursively call getRewrittenParentToken(),
which does a linear scan over the rewriteMap and tries to find rewrites with a larger token
value (since we know child tokens are always larger). Can almost certainly find a better
way to do this...
commit 0b4d782b5348e5d03fd26f7d91bc4a3fbcb7f6a5
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Oct 14 19:30:06 2015 -0400
[Docs] Documentation checkpoint
commit 383636453f6610fcfef9070c21ae7ca11346793e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Sep 16 16:02:22 2015 -0400
Comments
commit a81e8f31e681be16e89ceab9ba3c3e0a018f18ef
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Sep 16 15:48:49 2015 -0400
[TESTS] Ensure all tests use QUERY_THEN_FETCH, DFS does not profile
commit 1255c2d790d85fcb9cbb78bf2a53195138c6bc24
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Sep 15 16:43:46 2015 -0400
Refactor rewrite handling to handle identical rewrites
commit 85b7ec82eb0b26a6fe87266b38f5f86f9ac0c44f
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Sep 15 08:51:14 2015 -0400
Don't update parent when a token is added as root -- Fixes NPE
commit 109d02bdbc49741a3b61e8624521669b0968b839
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Sep 15 08:50:40 2015 -0400
Don't set the rewritten query if not profiling -- Fixes NPE
commit 233cf5e85f6f2c39ed0a2a33d7edd3bbd40856e8
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Sep 14 18:04:51 2015 -0400
Update tests to new response format
commit a930b1fc19de3a329abc8ffddc6711c1246a4b15
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Sep 14 18:03:58 2015 -0400
Fix serialization
commit 69afdd303660510c597df9bada5531b19d134f3d
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Sep 14 15:11:31 2015 -0400
Comments and cleanup
commit 64e7ca7f78187875378382ec5d5aa2462ff71df5
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Sep 14 14:40:21 2015 -0400
Move timing into dedicated class, add proper rewrite integration
commit b44ff85ddbba0a080e65f2e7cc8c50d30e95df8e
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Sep 14 12:00:38 2015 -0400
Checkpoint - Refactoring to use a token-based dependency tree
commit 52cedd5266d6a87445c6a4cff3be8ff2087cd1b7
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Fri Sep 4 19:18:19 2015 -0400
Need to set context profiling flag before calling queryPhase.preProcess
commit c524670cb1ce29b4b3a531fa2bff0c403b756f46
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Sep 4 18:00:37 2015 +0200
Reduce profiling overhead a bit.
This removes hash-table lookups everytime we start/stop a profiling clock.
commit 111444ff8418737082236492b37321fc96041e09
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Sep 4 16:18:59 2015 +0200
Add profiling of two-phase iterators.
This is useful for eg. phrase queries or script filters, since they are
typically consumed through their two-phase iterator instead of the scorer.
commit f275e690459e73211bc8494c6de595c0320f4c0b
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Sep 4 16:03:21 2015 +0200
Some more improvements.
I changed profiling to disable bulk scoring, since it makes it impossible to
know where time is spent. Also I removed profiling of operations that are
always fast (eg. normalization) and added nextDoc/advance.
commit 3c8dcd872744de8fd76ce13b6f18f36f8de44068
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Sep 4 14:39:50 2015 +0200
Remove println.
commit d68304862fb38a3823aebed35a263bd9e2176c2f
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Sep 4 14:36:03 2015 +0200
Fix some test failures introduced by the rebase...
commit 04d53ca89fb34b7a21515d770c32aaffcc513b90
Author: Adrien Grand <jpountz@gmail.com>
Date: Fri Sep 4 13:57:35 2015 +0200
Reconcile conflicting changes after rebase
commit fed03ec8e2989a0678685cd6c50a566cec42ea4f
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Thu Aug 20 22:40:39 2015 -0400
Add Collectors to profile results
Profile response element has now been re-arranged so that everything is listed per-shard to
facilitate grouping elements together. The new `collector` element looks like this:
```
"profile": {
"shards": [
{
"shard_id": "keP4YFywSXWALCl4m4k24Q",
"query": [...],
"collector": [
{
"name": "MultiCollector",
"purpose": "search_multi",
"time": "16.44504400ms",
"relative_time": "100.0000000%",
"children": [
{
"name": "FilteredCollector",
"purpose": "search_post_filter",
"time": "4.556013000ms",
"relative_time": "27.70447437%",
"children": [
{
"name": "SimpleTopScoreDocCollector",
"purpose": "search_sorted",
"time": "1.352166000ms",
"relative_time": "8.222331299%",
"children": []
}
]
},
{
"name": "BucketCollector: [[non_global_term, another_agg]]",
"purpose": "aggregation",
"time": "10.40379400ms",
"relative_time": "63.26400829%",
"children": []
},
...
```
commit 1368b495c934be642c00f6cbf9fc875d7e6c07ff
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Aug 19 12:43:03 2015 -0400
Move InternalProfiler to profile package
commit 53584de910db6d4a6bb374c9ebb954f204882996
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 18:34:58 2015 -0400
Only reconcile rewrite timing when rewritten query is different from original
commit 9804c3b29d2107cd97f1c7e34d77171b62cb33d0
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 16:40:15 2015 -0400
Comments and cleanup
commit 8e898cc7c59c0c1cc5ed576dfed8e3034ca0967f
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 14:19:07 2015 -0400
[TESTS] Fix comparison test to ensure results sort identically
commit f402a29001933eef29d5a62e81c8563f1c8d0969
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 14:17:59 2015 -0400
Add note about DFS being unable to profile
commit d446e08d3bc91cd85b24fc908e2d82fc5739d598
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 14:17:23 2015 -0400
Implement some missing methods
commit 13ca94fb86fb037a30d181b73d9296153a63d6e4
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 13:10:54 2015 -0400
[TESTS] Comments & cleanup
commit c76c8c771fdeee807761c25938a642612a6ed8e7
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 13:06:08 2015 -0400
[TESTS] Fix profileMatchesRegular to handle NaN scores and nearlyEqual floats
commit 7e7a10ecd26677b2239149468e24938ce5cc18e1
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 12:22:16 2015 -0400
Move nearlyEquals() utility function to shared location
commit 842222900095df4b27ff3593dbb55a42549f2697
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 18 12:04:35 2015 -0400
Fixup rebase conflicts
commit 674f162d7704dd2034b8361358decdefce1f76ce
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Aug 17 15:29:35 2015 -0400
[TESTS] Update match and bool tests
commit 520380a85456d7137734aed0b06a740e18c9cdec
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Aug 17 15:28:09 2015 -0400
Make naming consistent re: plural
commit b9221501d839bb24d6db575d08e9bee34043fc65
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Aug 17 15:27:39 2015 -0400
Children need to be added to list after serialization
commit 05fa51df940c332fbc140517ee56e849f2d40a72
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Aug 17 15:22:41 2015 -0400
Re-enable bypass for non-profiled queries
commit f132204d264af77a75bd26a02d4e251a19eb411d
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Aug 17 15:21:14 2015 -0400
Fix serialization of QuerySearchResult, InternalProfileResult
commit 27b98fd475fc2e9508c91436ef30624bdbee54ba
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Aug 10 17:39:17 2015 -0400
Start to add back tests, refactor Java api
commit bcfc9fefd49307045108408dc160774666510e85
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Aug 4 17:08:10 2015 -0400
Checkpoint
commit 26a530e0101ce252450eb23e746e48c2fd1bfcae
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Tue Jul 14 13:30:32 2015 -0400
Add createWeight() checkpoint
commit f0dd61de809c5c13682aa213c0be65972537a0df
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Mon Jul 13 12:36:27 2015 -0400
checkpoint
commit 377ee8ce5729b8d388c4719913b48fae77a16686
Author: Zachary Tong <zacharyjtong@gmail.com>
Date: Wed Mar 18 10:45:01 2015 -0400
checkpoint
The completion suggester provides auto-complete/search-as-you-type functionality.
This is a navigational feature to guide users to relevant results as they are typing, improving search precision.
It is not meant for spell correction or did-you-mean functionality like the term or phrase suggesters.
The completions are indexed as a weighted FST (finite state transducer) to provide fast Top N prefix-based
searches suitable for serving relevant results as a user types.
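A hedged sketch of mapping and indexing a completion field; the index, type and field names are hypothetical:
```
PUT /music
{
  "mappings": {
    "song": {
      "properties": {
        "suggest": { "type": "completion" }
      }
    }
  }
}

PUT /music/song/1
{
  "suggest": { "input": ["Nirvana"], "weight": 34 }
}
```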
closes #10746
The only way to refer to the plain highlighter is now `plain`, the only way to refer to the fast vector highlighter is `fvh` and the only way to refer to the postings highlighter is `postings`. The name variants like `highlighter`, `postings-highlighter` and `fast-vector-highlighter` have been removed.
We are relying on terminate_after more and more; we replaced the limit filter with it, and soon it will also replace the search_exists api. At that point we should make it a stable api rather than experimental.
Closes #14183
Requesting a million hits, or page 100,000 is always a bad idea, but users
may not be aware of this. This adds a per-index limit on the maximum size +
from that can be requested which defaults to 10,000.
This should not interfere with deep-scrolling.
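A hedged example of raising the limit per index; the setting name `index.max_result_window` is an assumption based on this description:
```
PUT /my_index/_settings
{
  "index.max_result_window": 50000
}
```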
Closes #9311
The documentation states that scrolls are automatically closed when all
documents are consumed, but this is not the case. I first tried to fix
the code to close scrolls automatically but this made REST tests fail
because clearing a scroll that is already closed returned a 4xx error
instead of a 2xx code, so this has probably been this way for a very long
time.
Just like specifying `?preference=_primary`, this adds the ability to
specify `?preference=_replica` or `?preference=_replica_first` on
requests that support it.
Resolves#12222
Field stats index constraints allow omitting all field stats for indices that don't match the constraint. An index
constraint can exclude indices' field stats based on the `min_value` and `max_value` statistic. This option is only
useful if the `level` option is set to `indices`.
For example index constraints can be useful to find out the min and max value of a particular property of your data in
a time based scenario. The following request only returns field stats for the `answer_count` property for indices
holding questions created in the year 2014:
curl -XPOST 'http://localhost:9200/_field_stats?level=indices' -d '{
    "fields" : ["answer_count"], <1>
    "index_constraints" : { <2>
        "creation_date" : { <3>
            "min_value" : { <4>
                "gte" : "2014-01-01T00:00:00.000Z"
            },
            "max_value" : {
                "lt" : "2015-01-01T00:00:00.000Z"
            }
        }
    }
}'
Closes #11187
In order to be more consistent with what they do, the query cache has been
renamed to request cache and the filter cache has been renamed to query
cache.
A known issue is that package/logger names no longer match settings names;
please speak up if you think this is an issue.
Here are the settings for which I kept backward compatibility. Note that they
are a bit different from what was discussed on #11569 but putting `cache` before
the name of what is cached has the benefit of making these settings consistent
with the fielddata cache whose size is configured by
`indices.fielddata.cache.size`:
* index.cache.query.enable -> index.requests.cache.enable
* indices.cache.query.size -> indices.requests.cache.size
* indices.cache.filter.size -> indices.queries.cache.size
Close #11569
This change unifies the way scripts and templates are specified for all instances in the codebase. It builds on the Script class added previously and adds request building and parsing support as well as the ability to transfer script objects between nodes. It also adds a Template class which aims to provide the same functionality for template APIs.
Closes #11091
The default `false` for `require_field_match` is a bit odd and confusing for users, given that field names get ignored by default and every field gets highlighted if it contains terms extracted out of the query, regardless of which fields were queried. Changed the default to `true`; it can always be changed per request.
Closes #10627, Closes #11067
Our own fork of the lucene PostingsHighlighter is not easy to maintain and doesn't give us any added value at this point. In particular, it was introduced to support the require_field_match option and discrete per-value highlighting, used in case one wants to highlight the whole content of a field but get back one snippet per value. These two features won't
make it into lucene as they slow things down and probably shouldn't have been supported from day one on our end.
One other customization we had was support for a wider range of queries via custom rewrite etc. (yet another way to slow
things down), which got added to lucene and works much much better than what we used to do (instead of our rewrite, terms
are pulled out of the automata for multi term queries).
Removing our fork means the following in terms of features:
- dropped support for require_field_match: the postings highlighter will only highlight fields that were queried
- some custom es queries won't be supported anymore, meaning they won't be highlighted. The only one I found up until now is the phrase_prefix. The postings highlighter rewrites against an empty reader to avoid slow operations (like the ones that we were performing with the fork that we are removing here), thus the prefix will not be expanded to any term. What the postings highlighter does instead is pull the automata out of multi term queries, but this is not supported at the moment with our MultiPhrasePrefixQuery.
Closes #10625, Closes #11077
Previously, the collate feature would be executed on all shards of an index using the client;
this leads to a deadlock when concurrent collate requests are run from the _search API,
due to the fact that both the external request and internal collate requests use the
same search threadpool.
As phrase suggestions are generated from the terms of the local shard, in most cases a
generated suggestion that does not yield a hit for the collate query on the local shard
would not yield a hit for the collate query on non-local shards either.
Instead of using the client for collating suggestions, the collate query is executed against
the ContextIndexSearcher. This PR removes the ability to specify a preference for a collate
query, as the collate query is only run on the local shard.
closes #9377
Add methods to operate on multi-valued fields in the expressions language.
Note that users will still not be able to access individual values
within a multi-valued field.
The following methods will be included:
* min
* max
* avg
* median
* count
* sum
Additionally, changes have been made to MultiValueMode to support the
new median method.
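A hedged sketch of one of the new methods in an expressions script field; the index and field names are hypothetical and the request shape varies by version:
```
GET /my_index/_search
{
  "script_fields": {
    "avg_rating": {
      "script": "doc['ratings'].avg()",
      "lang": "expression"
    }
  }
}
```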
closes #11105
Removes the More Like This API; users should now use the More Like This query.
The MLT API tests were converted to their query equivalent. Also some clean
ups in MLT tests.
Closes #10736, Closes #11003
The assumption is that gaps in a histogram are generally undesirable, for instance
if you want to build a visualization from it. Additionally, we are building new
aggregations that require that there are no gaps to work correctly (eg.
derivatives).
Remove the ability to specify search type ‘query_and_fetch’ and
‘df_query_and_fetch’ from the REST API.
- Adds REST tests
- Updates REST API spec to remove ‘query_and_fetch’ and
‘df_query_and_fetch’ as options
- Removes documentation for these options
Closes #9606
* Removed the docs for `index.compound_format` and `index.compound_on_flush` - these are expert settings which should probably be removed (see https://github.com/elastic/elasticsearch/issues/10778)
* Removed the docs for `index.index_concurrency` - another expert setting
* Labelled the segments verbose output as experimental
* Marked the `compression`, `precision_threshold` and `rehash` options as experimental in the cardinality and percentile aggs
* Improved the experimental text on `significant_terms`, `execution_hint` in the terms agg, and `terminate_after` param on count and search
* Removed the experimental flag on the `geobounds` agg
* Marked the settings in the `merge` and `store` modules as experimental, rather than the modules themselves
Closes #10782
The field stats api returns field level statistics such as the lowest and highest values and the number of documents that have at least one value for a field.
An api like this can be useful to explore a data set you don't know much about. For example you can figure out what the lowest and highest response times are, so that you can create a histogram or range aggregation with sane settings.
This api doesn't run a search to figure these statistics out, but rather uses the Lucene index to look these statistics up (using the Terms class in Lucene). So finding out these stats for fields is cheap and quick.
The min/max values are based on the type of the field: for a numeric field min/max are numbers, for a date field min/max are dates, and for other fields min/max are term based.
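A hedged example with a hypothetical field name:
```
curl -XGET 'http://localhost:9200/_field_stats?fields=response_time'
```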
Closes #10523
This commit adds a `rewrite` parameter to the validate API in order to show
how the given query is re-written into primitive queries. For example, an MLT
query is re-written into a disjunction of the selected terms. Other use cases
include `fuzzy`, `common_terms`, or `match` queries, especially with a
`cutoff_frequency` parameter. Note that the explanation is only given for a
single randomly chosen shard, so the output may vary from one shard to
another.
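A hedged example with hypothetical index and field names:
```
GET /my_index/_validate/query?rewrite=true
{
  "query": {
    "fuzzy": { "title": "quikc" }
  }
}
```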
Relates #1412, Closes #10147
Today we check every regular expression eagerly against every possible term.
This can be very slow if you have lots of unique terms, and can even be the
bottleneck if your query is selective.
This commit switches to Lucene regular expressions instead of Java (not exactly
the same syntax yet most existing regular expressions should keep working) and
uses the same logic as RegExpQuery to intersect the regular expression with the
terms dictionary. I wrote a quick benchmark (in the PR) to make sure it made
things faster and the same request that took 750ms on master now takes 74ms with
this change.
Close #7526
This commit brings the benefits of the `count` search type to search requests
that have a `size` of 0:
- a single round-trip to shards (no fetch phase)
- ability to use the query cache
Since `count` now provides no benefits over `query_then_fetch`, it has been
deprecated.
Close #7630
Allow scripting to be turned on/off based on the source scripts are loaded from, the operation that executes them, and their language.
The settings cover the following combinations:
- mode: on, off, sandbox
- source: indexed, dynamic, file
- engine: groovy, expressions, mustache, etc
- operation: update, search, aggs, mapping
The following settings are supported for every engine:
script.engine.groovy.indexed.update: sandbox/on/off
script.engine.groovy.indexed.search: sandbox/on/off
script.engine.groovy.indexed.aggs: sandbox/on/off
script.engine.groovy.indexed.mapping: sandbox/on/off
script.engine.groovy.dynamic.update: sandbox/on/off
script.engine.groovy.dynamic.search: sandbox/on/off
script.engine.groovy.dynamic.aggs: sandbox/on/off
script.engine.groovy.dynamic.mapping: sandbox/on/off
script.engine.groovy.file.update: sandbox/on/off
script.engine.groovy.file.search: sandbox/on/off
script.engine.groovy.file.aggs: sandbox/on/off
script.engine.groovy.file.mapping: sandbox/on/off
For ease of use, the following more generic settings are supported too:
script.indexed: sandbox/on/off
script.dynamic: sandbox/on/off
script.file: sandbox/on/off
script.update: sandbox/on/off
script.search: sandbox/on/off
script.aggs: sandbox/on/off
script.mapping: sandbox/on/off
These will be used to calculate the more specific settings, using the stricter setting of each combination. Operation based settings have precedence over conflicting source based ones.
Note that the `mustache` engine is affected by generic settings applied to any language, while native scripts aren't as they are static by definition.
Also, the previous `script.disable_dynamic` setting can now be deprecated.
Closes #6418, Closes #10116, Closes #10274
The behaviour is better in the case where someone has multiple levels of nested object fields defined in the mapping and would like to define a single inner_hits definition that is two or more levels deep.
If someone wants inner hits on a nested field that is 2 levels deep, the following would need to be defined:
```
{
  ...
  "inner_hits" : {
    "path" : {
      "level1" : {
        "inner_hits" : {
          "path" : {
            "level2" : {
              "query" : { .... }
            }
          }
        }
      }
    }
  }
}
```
With this change the above can be defined as:
```
{
  ...
  "inner_hits" : {
    "path" : {
      "level1.level2" : {
        "query" : { .... }
      }
    }
  }
}
```
Closes #9251
The analysis chain should be used instead of relying on this, as it is
confusing when dealing with different per-field analysers.
The `locale` option was only used for `lowercase_expanded_terms`, which,
once removed, is no longer needed, so it was removed as well.
Fixes #9978
Relates to #9973
This commit adds scripting capability to significant_terms.
Custom heuristics can be implemented with a script that is provided with the
parameters subset_freq, superset_freq, subset_size, superset_size.
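A hedged sketch of a scripted heuristic; the exact parameter spelling and request shape are assumptions based on this description:
```
GET /my_index/_search?size=0
{
  "aggs": {
    "keywords": {
      "significant_terms": {
        "field": "tag",
        "script_heuristic": {
          "script": "subset_freq / (superset_freq - subset_freq + 1)"
        }
      }
    }
  }
}
```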
closes #7850
Changed search_type docs to reflect that the `(dfs_)query_and_fetch` modes are an internal optimization and should not be specified explicitly by the user.
Relates to #9606
Removed the existing `pre_zone` and `post_zone` options in `date_histogram` in favor of
the simpler `time_zone` option. Previously, specifying different values for these could
lead to confusing scenarios where ES would return bucket keys that are not UTC.
Now `time_zone` is the only option; setting it causes the calculation of date buckets to
take place in the preferred time zone, but after rounding the bucket key values are
converted back to UTC.
Closes #9062, Closes #9637
Add offset option to 'date_histogram' replacing and simplifying the previous 'pre_offset' and 'post_offset' options.
This change is part of a larger clean up task for `date_histogram` from issue #9062.
We now have a very useful annotation to mark features or parameters as
experimental. Let's use it! This commit replaces some custom text warnings with
this annotation and adds this annotation to some existing features/parameters:
- inner_hits (unreleased yet)
- terminate_after (released in 1.4)
- per-bucket doc count errors in the terms agg (released in 1.4)
I also tagged with this annotation settings which should either be not needed
(like the ability to evict entries from the filter cache based on time) or that
are too deep into the way that Elasticsearch works like the Directory
implementation or merge settings.
Close #9563
These aggregations are not experimental anymore but some of their parameters
still are:
- `precision_threshold` and `rehash` on `cardinality`
- `compression` on percentiles(-ranks)
Close #9560
Histogram aggregation supports an 'offset' option to move bucket boundaries.
In a histogram with buckets of size X these can be moved from 0, X, 2X, 3X,...
by an offset value of Y to Y, X+Y, 2X+Y, 3X+Y... by using the 'offset' option.
The previous 'pre_offset' and 'post_offset' options are removed in favour of
the simplified 'offset' option.
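A hedged example shifting bucket boundaries by 5; the index and field are hypothetical:
```
GET /my_index/_search?size=0
{
  "aggs": {
    "prices": {
      "histogram": {
        "field": "price",
        "interval": 10,
        "offset": 5
      }
    }
  }
}
```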
Closes #9417, Closes #9505
The `analyzer` setting is now the base setting, and `search_analyzer`
is simply an override of the search time analyzer. When setting
`search_analyzer`, `analyzer` must be set.
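A hedged mapping sketch; the index, type and field names are hypothetical:
```
PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": {
          "type": "string",
          "analyzer": "standard",
          "search_analyzer": "whitespace"
        }
      }
    }
  }
}
```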
closes #9371
Extended_stats now displays the upper and lower bounds on standard deviations (e.g. avg +/- std).
Default is to show 2 std above/below, but can be changed using the `sigma` parameter.
Accepts non-negative doubles
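For example, to request bounds at three standard deviations (assuming a numeric `grade` field):
```json
{
    "aggs": {
        "grade_stats": {
            "extended_stats": {
                "field": "grade",
                "sigma": 3
            }
        }
    }
}
```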
Closes#9356
Inner hits allows embedding the nested inner objects, child documents, or the parent document that contributed to the matching of the returned search hit as inner hits, which would otherwise be hidden.
Closes#8153, #3022, #3152
This commit adds the ability to associate a bit of state with each
individual aggregation.
The aggregation response can be hard to stitch back together without
having a reference to the aggregation request. In many cases this is not
available, many json serializer frameworks cache types globally or have a
static deserialisation override mechanism. In these cases making the
original request available, if at all possible, would be a hack.
The old facets returned `_type` which was just enough metadata to know
what the originating facet type in the request was.
This PR takes `_type` one step further by introducing ANY arbitrary meta
data. This could be further (ab)used for instance by
generic/automated aggregations that include UI state (color information,
thresholds, user input states, etc) per aggregation.
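A sketch of attaching such state (assuming the request-level key is `meta`; the contents are purely illustrative UI state):
```json
{
    "aggs": {
        "colors": {
            "meta": {
                "chart_color": "#ff0000",
                "owner": "dashboard-ui"
            },
            "terms": {
                "field": "color"
            }
        }
    }
}
```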
This commit adds a new field to the response of the terms aggregation called
`sum_other_doc_count` which is equal to the sum of the doc counts of the buckets
that did not make it to the list of top buckets. It is typically useful to have
a sector called e.g. `other` when using terms aggregations to build pie charts.
Example query and response:
```json
GET test/_search?search_type=count
{
    "aggs": {
        "colors": {
            "terms": {
                "field": "color",
                "size": 3
            }
        }
    }
}
```
```json
{
    [...],
    "aggregations": {
        "colors": {
            "doc_count_error_upper_bound": 0,
            "sum_other_doc_count": 4,
            "buckets": [
                {
                    "key": "blue",
                    "doc_count": 65
                },
                {
                    "key": "red",
                    "doc_count": 14
                },
                {
                    "key": "brown",
                    "doc_count": 3
                }
            ]
        }
    }
}
```
Close#8213
This is functionally equivalent to before, so there should be no
user-visible impact, except I added a NOTE in the docs warning about
the interaction of pagination and rescoring.
Closes#6232, #7707
By letting the fetch phase understand the nested docs structure we can serve nested docs as hits.
Because of this commit, the `top_hits` aggregation can be placed in a `nested` or `reverse_nested` aggregation.
Closes#7164
This change removes the script_type parameter from the Scripted Metric Aggregation and adds support for _file and _id suffixes to the init_script, map_script, combine_script and reduce_script parameters, making the definition of the script source consistent with the other APIs which use the ScriptService.
Changes the name of the field in the scripted metrics aggregation from 'aggregation' to 'value' to be more in line with the other metrics aggregations like 'avg'
The terms aggregation can now support sorting on multiple criteria by replacing the sort object with an array of sort objects whose order signifies the priority of the sort. The existing syntax for sorting on a single criterion also still works.
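A sketch of the array form (assuming a sub-aggregation named `avg_price`; ties are broken by descending doc count):
```json
{
    "aggs": {
        "categories": {
            "terms": {
                "field": "category",
                "order": [
                    { "avg_price": "desc" },
                    { "_count": "desc" }
                ]
            },
            "aggs": {
                "avg_price": {
                    "avg": { "field": "price" }
                }
            }
        }
    }
}
```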
Contributes to #6917
Replaces #7588
Aggregations are collection-wide statistics, which is incompatible with the
collection mode of search_type=SCAN since it doesn't collect all matches on
calls to the search API.
Close#7429
Aggregations are collection-wide statistics so they would always be the same.
In order to save CPU/bandwidth, we can just return them on the first page.
Same as #1642 but for aggregations.
The current implementation of 'date_histogram' does not understand
the `factor` parameter. Since the docs shouldn't raise false hopes,
I removed the section.
Closes#7277
A multi-bucket aggregation where multiple filters can be defined (each filter defines a bucket). The buckets will collect all the documents that match their associated filter.
This aggregation can be very useful when one wants to compare analytics between different criteria. It can also be accomplished using multiple definitions of the single filter aggregation, but here, the user needs to define the sub-aggregations only once.
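A sketch (field and filter names are illustrative):
```json
{
    "aggs": {
        "messages": {
            "filters": {
                "filters": {
                    "errors": { "term": { "level": "error" } },
                    "warnings": { "term": { "level": "warning" } }
                }
            }
        }
    }
}
```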
Closes#6118
Implements a new Exists API allowing users to do a fast exists check on any matched documents for a given query.
This API should be faster than using the Count API as it will:
- early terminate the search execution once any document is found to exist
- return the response as soon as the first shard reports matched documents
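A sketch of an invocation (the `_search/exists` endpoint is an assumption based on this description, not spelled out here):
```
POST /my_index/_search/exists
{
    "query": {
        "term": { "user": "kimchy" }
    }
}
```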
closes#6995
This is only applicable when the order is set to _count. The upper bound of the error in the doc count is calculated by summing the doc count of the last term on each shard which did not return the term. The implementation calculates the error by summing the doc count for the last term on each shard for which the term IS returned and then subtracts this value from the sum of the doc counts for the last term from ALL shards.
Closes#6696
Allow users to control document collection termination when a specified terminate_after number is
set. Upon setting the newly added parameter, the response will include a boolean terminated_early
flag, indicating whether the document collection for any shard terminated early.
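A sketch (assuming `terminate_after` is accepted in the search request body; field values are illustrative):
```json
{
    "terminate_after": 1000,
    "query": {
        "match": { "message": "elasticsearch" }
    }
}
```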
closes#6876
A new option `prune` has been added to allow users to control phrase suggestion pruning when `collate`
is set. If the new option is set, the phrase suggestion option will contain a boolean `collate_match`
indicating whether the respective result had hits in collation.
Closes#6927
The newly added collate option will let the user provide a template query/filter which will be executed for every phrase suggestions generated to ensure that the suggestion matches at least one document for the filter/query.
The user can also add routing preference `preference` to route the collate query/filter and additional `params` to inject into the collate template.
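A sketch of a phrase suggestion with a collate template (field names and the template body are illustrative; `{{suggestion}}` stands for each generated suggestion):
```json
{
    "suggest": {
        "text": "noble prize",
        "title_suggestion": {
            "phrase": {
                "field": "title",
                "collate": {
                    "query": {
                        "match_phrase": { "{{field_name}}": "{{suggestion}}" }
                    },
                    "params": { "field_name": "title" }
                }
            }
        }
    }
}
```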
Closes#3482
This commit adds the infrastructure to allow pluging in different
measures for computing the significance of a term.
Significance measures can be provided externally by overriding
- SignificanceHeuristic
- SignificanceHeuristicBuilder
- SignificanceHeuristicParser
closes#6561
Percentile Rank Aggregation is the reverse of the Percentiles aggregation. It determines the percentile rank (the proportion of values less than a given value) of the provided array of values.
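For example, to ask at which percentile ranks the values 15 and 30 fall (assuming a numeric `load_time` field):
```json
{
    "aggs": {
        "load_time_ranks": {
            "percentile_ranks": {
                "field": "load_time",
                "values": [15, 30]
            }
        }
    }
}
```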
Closes#6386
A new "breadth_first" results collection mode allows upper branches of aggregation tree to be calculated and then pruned
to a smaller selection before advancing into executing collection on child branches.
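A sketch (assuming the mode is selected via a `collect_mode` parameter on the terms aggregation; field names are illustrative):
```json
{
    "aggs": {
        "actors": {
            "terms": {
                "field": "actor",
                "size": 10,
                "collect_mode": "breadth_first"
            },
            "aggs": {
                "costars": {
                    "terms": { "field": "actor" }
                }
            }
        }
    }
}
```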
Closes#6128
The existing Note about the shorthand suggest syntax was poorly worded and confusing. Please check whether the way I've phrased it now is still correct as to what the shorthand form actually does and doesn't do: the original wording did not provide me enough information to be sure.
Thanks!
The GeoBounds Aggregation is a new single bucket aggregation which outputs the coordinates of a bounding box containing all the points from all the documents passed to the aggregation, as well as the doc count. The GeoBounds aggregation also uses a `wrap_longitude` parameter which specifies whether the resulting bounding box is permitted to overlap the international date line. This option defaults to true.
This aggregation introduces the idea of MetricsAggregations which do not return double values and cannot be used for sorting. The existing MetricsAggregation has been renamed to NumericMetricsAggregation and is a subclass of MetricsAggregation. MetricsAggregations do not store doc counts and do not support child aggregations.
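A sketch (assuming a `geo_point` field named `location`):
```json
{
    "aggs": {
        "viewport": {
            "geo_bounds": {
                "field": "location",
                "wrap_longitude": true
            }
        }
    }
}
```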
Closes#5634
Because json objects are unordered this also adds an explicit order syntax
that looks like
"highlight": {
"fields": [
{"title":{ /*params*/ }},
{"text":{ /*params*/ }}
]
}
This is not useful for any of the builtin highlighters but will be useful
in plugins.
Closes#4649
Our improvements to t-digest have been pushed upstream and t-digest also got
some additional nice improvements around memory usage and speedups of quantile
estimation. So it makes sense to use it as a dependency now.
This also allows removing the test dependency on Apache Mahout.
Close#6142
By default More Like This API excludes the queried document from the response.
However, when debugging or when comparing scores across different queries, it
could be useful to have the best possible matched hit. So this option lets users
explicitly specify the desired behavior.
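A sketch of an invocation (assuming the option is exposed as an `include` flag on the More Like This endpoint; index, type and id are illustrative):
```
GET /my_index/my_type/1/_mlt?include=true
```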
Closes#6067
In the Google Groups forum there appears to be some confusion as to what mlt
does. This documentation update should hopefully help demystifying this
feature, and provide some understanding as to how to use its parameters.
Closes#6092
- Randomized integration tests for the benchmark API.
- Negative tests for cases where the cluster cannot run benchmarks.
- Return 404 on missing benchmark name.
- Allow to specify 'types' as an array in the JSON syntax when describing a benchmark competition.
- Don't record slowest for single-request competitions.
Closes#6003, #5906, #5903, #5904
Significant terms internally maintain a priority queue per shard with a size potentially
lower than the number of terms. This queue uses the score as criterion to determine if
a bucket is kept or not. If many terms with low subsetDF score very high
but the `min_doc_count` is set high, this might result in no terms being
returned because the pq is filled with low-frequency terms which are all sorted
out in the end.
This can be avoided by increasing the `shard_size` parameter to a higher value.
However, it is not immediately clear to which value this parameter must be set
because we cannot know how many low-frequency terms are scored higher than
the high-frequency terms that we are actually interested in.
On the other hand, if there is no routing of docs to shards involved, we can maybe
assume that the documents of classes and also the terms therein are distributed evenly
across shards. In that case it might be easier to not add documents to the pq that have
subsetDF <= `shard_min_doc_count` which can be set to something like
`min_doc_count`/number of shards because we would assume that even when summing up
the subsetDF across shards `min_doc_count` will not be reached.
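A sketch (assuming a `text` field and an index with five shards, so roughly `min_doc_count`/5 per shard):
```json
{
    "aggs": {
        "keywords": {
            "significant_terms": {
                "field": "text",
                "min_doc_count": 10,
                "shard_min_doc_count": 2
            }
        }
    }
}
```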
closes#5998, #6041
The default precision was way too exact and could lead people to
think that geo context suggestions are not working. This patch now
requires you to set the precision in the mapping, as Elasticsearch itself
can never tell exactly what the required precision for the user's
suggestions is.
Closes#5621
By default the date_/histogram returns all the buckets within the range of the data itself, that is, the documents with the smallest values (on which the histogram is run) will determine the min bucket (the bucket with the smallest key) and the documents with the highest values will determine the max bucket (the bucket with the highest key). Often, when requesting empty buckets (min_doc_count : 0), this causes a confusion, specifically, when the data is also filtered.
To understand why, let's look at an example:
Let's say you're filtering your request to get all docs from the last month, and in the date_histogram aggs you'd like to slice the data per day. You also specify min_doc_count:0 so that you'd still get empty buckets for those days to which no document belongs. By default, if the first document that falls in this last month also happens to fall on the first day of the **second week** of the month, the date_histogram will **not** return empty buckets for all those days prior to that second week. The reason for that is that by default the histogram aggregations only start building buckets when they encounter documents (hence, missing on all the days of the first week in our example).
With extended_bounds, you can now "force" the histogram aggregations to start building buckets from a specific min value and also keep on building buckets up to a max value (even if there are no documents anymore). Using extended_bounds only makes sense when min_doc_count is 0 (the empty buckets will never be returned if min_doc_count is greater than 0).
Note that (as the name suggests) extended_bounds is **not** filtering buckets. Meaning, if the min bound is higher than the values extracted from the documents, the documents will still dictate what the min bucket will be (and the same goes for extended_bounds.max and the max bucket). For filtering buckets, one should nest the histogram agg under a range filter agg with the appropriate min/max.
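A sketch (assuming a `date` field, forcing daily buckets across the whole month even where no documents fall):
```json
{
    "query": {
        "range": {
            "date": { "gte": "2014-01-01", "lte": "2014-01-31" }
        }
    },
    "aggs": {
        "per_day": {
            "date_histogram": {
                "field": "date",
                "interval": "day",
                "min_doc_count": 0,
                "extended_bounds": {
                    "min": "2014-01-01",
                    "max": "2014-01-31"
                }
            }
        }
    }
}
```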
Closes#5224
Significance is related to the changes in document frequency observed between everyday use in the corpus and
frequency observed in the result set. The asciidocs include extensive details on the applications of this feature.
Closes#5146
This aggregation computes unique term counts using the hyperloglog++ algorithm
which uses linear counting to estimate low cardinalities and hyperloglog on
higher cardinalities.
Since this algorithm works on hashes, it is useful for high-cardinality fields
to store the hash of values directly in the index, which is the purpose of
the new `murmur3` field type. This is less necessary on low-cardinality
string fields because the aggregator is smart enough to only compute the hash
once per unique value per segment thanks to ordinals, or on numeric fields
since hashing them is very fast.
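A sketch of pairing the new field type with the aggregation (type and field names are illustrative):
```json
{
    "mappings": {
        "my_type": {
            "properties": {
                "author": {
                    "type": "string",
                    "fields": {
                        "hash": { "type": "murmur3" }
                    }
                }
            }
        }
    }
}
```
The aggregation can then target the pre-hashed sub-field:
```json
{
    "aggs": {
        "distinct_authors": {
            "cardinality": { "field": "author.hash" }
        }
    }
}
```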
Close#5426
This commit extends the `CompletionSuggester` with context
information. For example, such context information can
be a simple string representing a category, narrowing the
suggestions down to that category.
Two base implementations of these contexts
have been set up in this commit.
- a Category Context
- a Geo Context
All the mappings for these contexts are
specified within a context field in the completion
field that should use this kind of information.
Supports sorting on sub-aggs down the current hierarchy. This is supported as long as the aggregations in the specified order path are of a single-bucket type, where the last aggregation in the path points to either a single-bucket aggregation or a metrics one. If it is a single-bucket aggregation, the sort will be applied on the document count in the bucket (i.e. doc_count); if it is a metrics type, the sort will be applied on the pointed-out metric (in the case of a single-metric aggregation, such as avg, the sort will be applied on the single metric value).
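A sketch of such an order path (aggregation names are illustrative; it assumes `>` separates levels in the path, `recent` is a single-bucket filter aggregation, and `rating_stats.avg` points at a metric):
```json
{
    "aggs": {
        "genres": {
            "terms": {
                "field": "genre",
                "order": { "recent>rating_stats.avg": "desc" }
            },
            "aggs": {
                "recent": {
                    "filter": { "range": { "year": { "gte": 2000 } } },
                    "aggs": {
                        "rating_stats": {
                            "stats": { "field": "rating" }
                        }
                    }
                }
            }
        }
    }
}
```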
NOTE: this commit adds a constraint on what should be considered a valid aggregation name. Aggregations names must be alpha-numeric and may contain '-' and '_'.
Closes#5253
In #4052 we added support for highlighting multi term queries using the postings highlighter. That worked only for top-level queries though, and not for multi term queries that are nested for instance within a bool query, or filtered query, or a constant score query.
The way we make this work is by walking the query structure and temporarily overriding the query rewrite method with a method that allows for multi terms extraction.
Closes#5102
* Mostly minor things like typos and grammar stuff
* Some clarifications
* The note on the deprecation was ambiguous. I've removed the problematic part so that it now definitely says it's deprecated
Detects if rescores arrive as an array instead of a plain object. If so
then parse each element of the array as a separate rescore to be executed
one after another. It looks like this:
"rescore" : [ {
"window_size" : 100,
"query" : {
"rescore_query" : {
"match" : {
"field1" : {
"query" : "the quick brown",
"type" : "phrase",
"slop" : 2
}
}
},
"query_weight" : 0.7,
"rescore_query_weight" : 1.2
}
}, {
"window_size" : 10,
"query" : {
"score_mode": "multiply",
"rescore_query" : {
"function_score" : {
"script_score": {
"script": "log10(doc['numeric'].value + 2)"
}
}
}
}
} ]
Rescores as a single object are still supported.
Closes#4748
Terms aggregations return up to `size` terms, so up to now, the way to get all
matching terms back was to set `size` to an arbitrary high number that would be
larger than the number of unique terms.
Terms aggregators already made sure to not allocate memory based on the `size`
parameter so this commit mostly consists in making `0` an alias for the
maximum integer value in the TermsParser.
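A sketch (assuming a `tag` field):
```json
{
    "aggs": {
        "all_tags": {
            "terms": {
                "field": "tag",
                "size": 0
            }
        }
    }
}
```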
Close#4837
Adds a new FetchSubPhase, FieldDataFieldsFetchSubPhase, which loads the
field data cache for a field and returns an array of values for the
field.
Also removes `doc['<field>']` and `_source.<field>` workaround no longer
needed in field name resolving.
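A sketch of the request-body parameter this sub-phase serves (field names are illustrative):
```json
{
    "query": { "match_all": {} },
    "fielddata_fields": ["price", "rating"]
}
```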
Closes#4492
* Make it clearer that `aggs` is an allowed synonym
for the `aggregations` key
* Fix broken example for date_histogram, `1.5M` is
not an allowed interval
* Make use of colon before examples consistent
* Fix typos
- Removed "ok": true from response examples
- Added "created" flag to index response examples
- Replaced exists flag with found in delete response examples
Java Builder APIs drop the old “len” methods in favour of new “length” ones.
REST APIs support both the old “len” and new “length” forms, using the new ParseField class to a) provide compiler-checked consistency between Builder and Parser classes and
b) provide a common means of handling deprecated syntax in the DSL.
Documentation and rest specs only document the new “*length” forms.
Closes#4083
`min_doc_count` is the minimum number of hits that a term or histogram key
should match in order to appear in the response.
`min_doc_count=0` replaces `compute_empty_buckets` for histograms and will
behave exactly like facets' `all_terms=true` for terms aggregations.
Close#4662
When upgrading to ES 1.0, existing mappings with a multi-field type automatically get replaced by a core field with the new `fields` option.
If a `multi_field` typed field doesn't have a main / default field, a default field will be chosen for the multi fields syntax. The new main field type
will be equal to that of the first `multi_field` field, or to type string if no fields have been configured for the `multi_field` field; in both cases
the default field will not be indexed (`index=no` is set on the default field).
If a `multi_field` typed field has a default field, that field will replace the `multi_field` typed field.
Closes#4521
The default unit for measuring distances is *MILES* in most cases. This commit moves ES
over to the *International System of Units* and makes it work with a default which relates
to *METERS*. Also, the current structures of the `GeoBoundingBox Filter` changed in
order to define the *Bounding* by setting arbitrary corners.
Distances
---------
Since the default unit for measuring distances has changed to a default unit
`DistanceUnit.DEFAULT` relating to *meters*, the **REST API** has changed at the
following places:
* `ScriptDocValues.factorDistance()` returns *meters* instead of *miles*
* `ScriptDocValues.factorDistanceWithDefault()` returns *meters* instead of *miles*
* `ScriptDocValues.arcDistance()` returns *meters* instead of *miles*
one might use `ScriptDocValues.arcDistanceInMiles()`
* `ScriptDocValues.arcDistanceWithDefault()` returns *meters* instead of *miles*
* `ScriptDocValues.distance()` returns *meters* instead of *miles*
one might use `ScriptDocValues.distanceInMiles()`
* `ScriptDocValues.distanceWithDefault()` returns *meters* instead of *miles*
one might use `ScriptDocValues.distanceInMilesWithDefault()`
* `GeoDistanceFilter` default unit changes from *kilometers* to *meters*
* `GeoDistanceRangeFilter` default unit changes from *miles* to *meters*
* `GeoDistanceFacet` default unit changes from *miles* to *meters*
Geo Bounding Box Filter
-----------------------
The naming of the GeoBoundingBoxFilter properties allows setting arbitrary corners
(see #4084), namely `top_right`, `top_left`, `bottom_right` and `bottom_left`. This
change also includes the fields `topRight` and `bottomLeft`. It is also possible to
set the single values by using just the `top`, `bottom`, `left` and `right` parameters.
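A sketch of the filter with two arbitrary corners (coordinates and field name are illustrative):
```json
{
    "filter": {
        "geo_bounding_box": {
            "location": {
                "top_left": { "lat": 40.73, "lon": -74.1 },
                "bottom_right": { "lat": 40.01, "lon": -71.12 }
            }
        }
    }
}
```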
Closes#4515, #4084