[[breaking_70_search_changes]]
=== Search and Query DSL changes

==== Changes to queries

* The default value for the `transpositions` parameter of the `fuzzy` query
has been changed to `true` (see the example after this list).
* The `query_string` options `use_dismax`, `split_on_whitespace`,
`all_fields`, `locale`, `auto_generate_phrase_query` and
`lowercase_expanded_terms`, deprecated in 6.x, have been removed.
* Purely negative queries (only MUST_NOT clauses) now return a score of `0`
rather than `1`.
* The boundary specified using geohashes in the `geo_bounding_box` query
now includes the entire geohash cell, instead of just the geohash center.
* Attempts to generate multi-term phrase queries against non-text fields
with a custom analyzer will now throw an exception.
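
For example, a query that explicitly opts back out of the new `transpositions`
default might look like the following sketch; the `my_index` index and `user`
field are placeholders, not part of the original change:

[source,js]
--------------------------------------------------
GET /my_index/_search
{
    "query": {
        "fuzzy": {
            "user": {
                "value": "kimzey",
                "transpositions": false
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE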

==== Adaptive replica selection enabled by default

Adaptive replica selection has been enabled by default. If you wish to return to
the older round robin of search requests, you can use the
`cluster.routing.use_adaptive_replica_selection` setting:

[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
    "transient": {
        "cluster.routing.use_adaptive_replica_selection": false
    }
}
--------------------------------------------------
// CONSOLE

==== Search API returns `400` for invalid requests

The Search API returns `400 - Bad request` where it would previously have returned
`500 - Internal Server Error` in the following cases of an invalid request:

* the result window is too large
* sort is used in combination with rescore
* the rescore window is too large
* the number of slices is too large
* the keep alive for a scroll is too large
* the number of filters in the adjacency matrix aggregation is too large
* a script compilation error occurs
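
As an illustration of the first case, and assuming the default
`index.max_result_window` of `10000`, a request like the following sketch now
fails with `400` because `from` plus `size` exceeds the result window (the
index name and values are illustrative):

[source,js]
--------------------------------------------------
GET /my_index/_search
{
    "from": 10000,
    "size": 10
}
--------------------------------------------------
// NOTCONSOLE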

==== Scroll queries cannot use the `request_cache` anymore

Setting `request_cache:true` on a query that creates a scroll (`scroll=1m`)
has been deprecated in 6.x and will now return a `400 - Bad request`.
Scroll queries are not meant to be cached.
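
For instance, a request along the lines of the following sketch, which was only
deprecated in 6.x, is now rejected outright (the index name is illustrative):

[source,js]
--------------------------------------------------
GET /my_index/_search?scroll=1m&request_cache=true
{
    "query": {
        "match_all": {}
    }
}
--------------------------------------------------
// NOTCONSOLE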

==== Term Suggesters supported distance algorithms

The following string distance algorithms were given additional names in 6.2 and
their existing names were deprecated. The deprecated names have now been
removed.

* `levenstein` - replaced by `levenshtein`
* `jarowinkler` - replaced by `jaro_winkler`
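
A term suggester that previously used `levenstein` must now use the new name,
as in this sketch (the `my_index` index, `message` field and suggestion text
are placeholders):

[source,js]
--------------------------------------------------
POST /my_index/_search
{
    "suggest": {
        "my-suggestion": {
            "text": "levenstien",
            "term": {
                "field": "message",
                "string_distance": "levenshtein"
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE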

==== Limiting the number of terms that can be used in a Terms Query request

Executing a Terms Query with a lot of terms may degrade the cluster performance,
as each additional term demands extra processing and memory.
To safeguard against this, the maximum number of terms that can be used in a
Terms Query request has been limited to 65536. This default maximum can be changed
for a particular index with the index setting `index.max_terms_count`.
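
For example, the limit can be adjusted per index with a settings update such as
the following sketch (the index name and value are illustrative):

[source,js]
--------------------------------------------------
PUT /my_index/_settings
{
    "index.max_terms_count": 100000
}
--------------------------------------------------
// NOTCONSOLE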

==== Limiting the length of regex that can be used in a Regexp Query request

Executing a Regexp Query with a long regex string may degrade search performance.
To safeguard against this, the maximum length of regex that can be used in a
Regexp Query request has been limited to 1000. This default maximum can be changed
for a particular index with the index setting `index.max_regex_length`.
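
Analogously to the terms limit above, the regex length limit can be adjusted per
index, as in this sketch (the index name and value are illustrative):

[source,js]
--------------------------------------------------
PUT /my_index/_settings
{
    "index.max_regex_length": 2000
}
--------------------------------------------------
// NOTCONSOLE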

==== Invalid `_search` request body

Search requests with extra content after the main object will no longer be accepted
by the `_search` endpoint. A parsing exception will be thrown instead.
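
For example, a request body like the following sketch, with a stray trailing
object after the main search object, now triggers a parsing exception instead of
being silently accepted (the names are illustrative):

[source,js]
--------------------------------------------------
GET /my_index/_search
{
    "query": {
        "match_all": {}
    }
}
{ "unexpected": "trailing content" }
--------------------------------------------------
// NOTCONSOLE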

==== Context Completion Suggester

The ability to query and index context enabled suggestions without context,
deprecated in 6.x, has been removed. Context enabled suggestion queries
without contexts have to visit every suggestion, which degrades the search performance
considerably.

For geo contexts, the value of the `path` parameter is now validated against the mapping,
and the context is only accepted if `path` points to a field with the `geo_point` type.
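
A mapping that satisfies the new validation might look like the following sketch,
where the geo context `path` points at a `geo_point` field (the index and field
names are placeholders):

[source,js]
--------------------------------------------------
PUT /my_index
{
    "mappings": {
        "properties": {
            "location": {
                "type": "geo_point"
            },
            "suggest": {
                "type": "completion",
                "contexts": [
                    {
                        "name": "location",
                        "type": "geo",
                        "path": "location"
                    }
                ]
            }
        }
    }
}
--------------------------------------------------
// NOTCONSOLE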

==== Semantics changed for `max_concurrent_shard_requests`

`max_concurrent_shard_requests` used to limit the total number of concurrent shard
requests a single high-level search request could execute. In 7.0 this changed to be the
maximum number of concurrent shard requests per node. The default is now `5`.
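
The parameter is still passed on the search request itself; under the new
semantics, a sketch like the following limits the search to at most 3 concurrent
shard requests per node rather than 3 in total (the index name and value are
illustrative):

[source,js]
--------------------------------------------------
GET /my_index/_search?max_concurrent_shard_requests=3
{
    "query": {
        "match_all": {}
    }
}
--------------------------------------------------
// NOTCONSOLE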