[[breaking-changes-2.0]]
== Breaking changes in 2.0
This section discusses the changes that you need to be aware of when migrating
your application to Elasticsearch 2.0.
=== Networking
Elasticsearch now binds to the loopback interface by default (usually 127.0.0.1
or ::1), the setting `network.host` can be specified to change this behavior.
=== Rivers removal
Elasticsearch does not support rivers anymore. While we had first planned to
keep them around to ease migration, keeping support for rivers proved to be
challenging as it conflicted with other important changes that we wanted to
bring to 2.0 like synchronous dynamic mappings updates, so we eventually
decided to remove them entirely. See
https://www.elastic.co/blog/deprecating_rivers for more background about why
we are moving away from rivers.
=== Indices API
The <<alias-retrieving, get alias api>> will, by default, produce an error response
if a requested index does not exist. This change brings the defaults for this API in
line with the other Indices APIs. The <<multi-index>> options can be used on a request
to change this behavior.
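For example, a sketch of a get alias request that uses the `ignore_unavailable` multi-index option to avoid the error response (the index and alias names are hypothetical):
[source,js]
---------------
curl -XGET 'localhost:9200/myindex/_alias/myalias?ignore_unavailable=true'
---------------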
`GetIndexRequest.features()` now returns an array of Feature Enums instead of an array of String values.
The following deprecated methods have been removed:
* `GetIndexRequest.addFeatures(String[])` - Please use `GetIndexRequest.addFeatures(Feature[])` instead
* `GetIndexRequest.features(String[])` - Please use `GetIndexRequest.features(Feature[])` instead
* `GetIndexRequestBuilder.addFeatures(String[])` - Please use `GetIndexRequestBuilder.addFeatures(Feature[])` instead
* `GetIndexRequestBuilder.setFeatures(String[])` - Please use `GetIndexRequestBuilder.setFeatures(Feature[])` instead
=== Partial fields
Partial fields, which had been deprecated since 1.0.0beta1 in favor of <<search-request-source-filtering,source filtering>>, have been removed.
=== More Like This
The More Like This API and the More Like This Field query have been removed in
favor of the <<query-dsl-mlt-query, More Like This Query>>.
The parameter `percent_terms_to_match` has been removed in favor of
`minimum_should_match`.
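For example, a sketch of a More Like This query using `minimum_should_match` (the field names, text and values are hypothetical):
[source,js]
---------------
{
  "query": {
    "more_like_this": {
      "fields": ["title", "description"],
      "like_text": "Once upon a time",
      "min_term_freq": 1,
      "minimum_should_match": "30%"
    }
  }
}
---------------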
=== Routing
The default hash function that is used for routing has been changed from djb2 to
murmur3. This change should be transparent unless you relied on very specific
properties of djb2. This will help ensure a better balance of the document counts
between shards.
In addition, the following node settings related to routing have been deprecated:
[horizontal]
`cluster.routing.operation.hash.type`::
This was an undocumented setting that allowed configuring which hash function
to use for routing. `murmur3` is now enforced on new indices.
`cluster.routing.operation.use_type`::
This was an undocumented setting that allowed taking the `_type` of the
document into account when computing its shard (default: `false`). `false` is
now enforced on new indices.
=== Async replication
The `replication` parameter has been removed from all CRUD operations (index,
update, delete, bulk). These operations are now synchronous
only, and a request will only return once the changes have been replicated to
all active shards in the shard group.
=== Store
The `memory` / `ram` store (`index.store.type`) option was removed in Elasticsearch 2.0.
=== Term Vectors API
Usage of `/_termvector` is deprecated, and replaced in favor of `/_termvectors`.
=== Script fields
Script fields in 1.x were only returned as a single value. So even if the return
value of a script was a list, it would be returned as an array containing a single
value that is itself a list, such as:
[source,js]
---------------
"fields": {
"my_field": [
[
"v1",
"v2"
]
]
}
---------------
In Elasticsearch 2.x, scripts that return a list of values are considered
multivalued fields. The same example would therefore return the following response,
with values in a single array.
[source,js]
---------------
"fields": {
"my_field": [
"v1",
"v2"
]
}
---------------
=== Main API
Previously, calling `GET /` returned the HTTP status code within the JSON response
in addition to the actual HTTP status code. The `status` field has been removed from the JSON response.
=== Java API
`org.elasticsearch.index.queries.FilterBuilders` has been removed as part of the merge of
queries and filters. These filters are now available in `QueryBuilders` with the same name.
All methods that used to accept a `FilterBuilder` now accept a `QueryBuilder` instead.
In addition some query builders have been removed or renamed:
* `commonTerms(...)` renamed to `commonTermsQuery(...)`
* `queryString(...)` renamed to `queryStringQuery(...)`
* `simpleQueryString(...)` renamed to `simpleQueryStringQuery(...)`
* `textPhrase(...)` removed
* `textPhrasePrefix(...)` removed
* `textPhrasePrefixQuery(...)` removed
* `filtered(...)` removed. Use `filteredQuery(...)` instead.
* `inQuery(...)` removed.
=== Aggregations
The `date_histogram` aggregation now returns a `Histogram` object in the response, and the `DateHistogram` class has been removed. Similarly
the `date_range`, `ipv4_range`, and `geo_distance` aggregations all return a `Range` object in the response, and the `IPV4Range`, `DateRange`,
and `GeoDistance` classes have been removed. The motivation for this is to have a single response API for the Range and Histogram aggregations
regardless of the type of data being queried. To support this some changes were made in the `MultiBucketAggregation` interface which applies
to all bucket aggregations:
* The `getKey()` method now returns `Object` instead of `String`. The actual object type returned depends on the type of aggregation requested
(e.g. the `date_histogram` will return a `DateTime` object for this method whereas a `histogram` will return a `Number`).
* A `getKeyAsString()` method has been added to return the String representation of the key.
* All other `getKeyAsX()` methods have been removed.
* The `getBucketAsKey(String)` methods have been removed on all aggregations except the `filters` and `terms` aggregations.
The `histogram` and the `date_histogram` aggregation now support a simplified `offset` option that replaces the previous `pre_offset` and
`post_offset` rounding options. Instead of having to specify two separate offset shifts of the underlying buckets, the `offset` option
moves the bucket boundaries in positive or negative direction depending on its argument.
The `date_histogram` options `pre_zone` and `post_zone` are replaced by the `time_zone` option. The behavior of `time_zone` is
equivalent to the former `pre_zone` option. Setting `time_zone` to a value like "+01:00" now leads to the bucket calculations
being applied in the specified time zone. In addition, the `pre_zone_adjust_large_interval` option has been removed because
dates and bucket keys are now always returned in UTC.
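For example, a sketch of a `date_histogram` using the new `offset` and `time_zone` options (the aggregation name, field and values are hypothetical):
[source,js]
---------------
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day",
        "offset": "+6h",
        "time_zone": "+01:00"
      }
    }
  }
}
---------------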
Both the `histogram` and `date_histogram` aggregations now have a default `min_doc_count` of `0` instead of the previous `1`.
`include`/`exclude` filtering on the `terms` aggregation now uses the same syntax as regexp queries instead of the Java syntax. While simple
regexps should still work, more complex ones might need some rewriting. Also, the `flags` parameter is not supported anymore.
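For example, a sketch of `include`/`exclude` using the regexp syntax (the field name and patterns are hypothetical):
[source,js]
---------------
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "include": "water_.*",
        "exclude": "water_(salt|brackish)"
      }
    }
  }
}
---------------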
=== Terms filter lookup caching
The terms filter lookup mechanism does not support the `cache` option anymore
and relies on the filesystem cache instead. If the lookup index is not too
large, it is recommended to replicate it to all nodes by setting
`index.auto_expand_replicas: 0-all` in order to remove the network overhead as
well.
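For example, a hypothetical lookup index could be created with this setting as follows:
[source,js]
---------------
curl -XPUT 'localhost:9200/lookup_index'
{
  "settings": {
    "index.auto_expand_replicas": "0-all"
  }
}
---------------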
=== Delete by query
The meaning of the `_shards` headers in the delete by query response has changed. Before version 2.0 the `total`,
`successful` and `failed` fields in the header were based on the number of primary shards; failures on replica
shards were not tracked. From version 2.0 the stats in the `_shards` header are based on all shards
of an index. The HTTP status code is left unchanged and is only based on failures that occurred while executing on
primary shards.
=== Delete api with missing routing when required
The delete API now requires a routing value when deleting a document belonging to a type that has routing set to required in its
mapping, and throws a `RoutingMissingException` when it is missing. Previous Elasticsearch versions would instead trigger a
broadcast delete on all shards belonging to the index.
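For example, a delete request for a document of a type with required routing now needs to supply the routing value explicitly (the index, type, id and routing value are hypothetical):
[source,js]
---------------
curl -XDELETE 'localhost:9200/myindex/order/1?routing=user123'
---------------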
=== Mappings
* The setting `index.mapping.allow_type_wrapper` has been removed. Documents should always be sent without the type as the root element.
* The delete mappings API has been removed. Mapping types can no longer be deleted.
* The `ignore_conflicts` option of the put mappings API has been removed. Conflicts can't be ignored anymore.
* The `binary` field does not support the `compress` and `compress_threshold` options anymore.
==== Removed type prefix on field names in queries
Types can no longer be specified on fields within queries. Instead, specify type restrictions in the search request.
The following is an example query in 1.x over types `t1` and `t2`:
[source,js]
---------------
curl -XGET 'localhost:9200/index/_search'
{
  "query": {
    "bool": {
      "should": [
        {"match": { "t1.field_only_in_t1": "foo" }},
        {"match": { "t2.field_only_in_t2": "bar" }}
      ]
    }
  }
}
---------------
In 2.0, the query should look like the following:
[source,js]
---------------
curl -XGET 'localhost:9200/index/t1,t2/_search'
{
  "query": {
    "bool": {
      "should": [
        {"match": { "field_only_in_t1": "foo" }},
        {"match": { "field_only_in_t2": "bar" }}
      ]
    }
  }
}
---------------
==== Removed short name field access
Field names in queries, aggregations, etc. must now use the complete name. Use of the short name
caused ambiguities in field lookups when the same name existed within multiple object mappings.
The following example illustrates the difference between 1.x and 2.0.
Given these mappings:
[source,js]
---------------
curl -XPUT 'localhost:9200/index'
{
  "mappings": {
    "type": {
      "properties": {
        "name": {
          "type": "object",
          "properties": {
            "first": {"type": "string"},
            "last": {"type": "string"}
          }
        }
      }
    }
  }
}
---------------
The following query was possible in 1.x:
[source,js]
---------------
curl -XGET 'localhost:9200/index/type/_search'
{
  "query": {
    "match": { "first": "foo" }
  }
}
---------------
In 2.0, the same query should now be:
[source,js]
---------------
curl -XGET 'localhost:9200/index/type/_search'
{
  "query": {
    "match": { "name.first": "foo" }
  }
}
---------------
==== Removed support for `.` in field name mappings
Prior to Elasticsearch 2.0, a field could be defined to have a `.` in its name.
Mappings like the one below have been deprecated for some time and are now
blocked in Elasticsearch 2.0.
[source,js]
---------------
curl -XPUT 'localhost:9200/index'
{
  "mappings": {
    "type": {
      "properties": {
        "name.first": {
          "type": "string"
        }
      }
    }
  }
}
---------------
==== Meta fields have limited configuration
Meta fields (those beginning with underscore) are fields used by Elasticsearch
to provide special features. They now have limited configuration options.
* `_id` configuration can no longer be changed. If you need to sort, use `_uid` instead.
* `_type` configuration can no longer be changed.
* `_index` configuration can no longer be changed.
* `_routing` configuration is limited to requiring the field.
* `_boost` has been removed.
* `_field_names` configuration is limited to disabling the field.
* `_size` configuration is limited to enabling the field.
* `_timestamp` configuration is limited to enabling the field, setting the format and the default value.
==== Meta fields in documents
Meta fields can no longer be specified within a document. They should be specified
via the API. For example, instead of adding a field `_parent` within a document,
use the `parent` url parameter when indexing that document.
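For example, instead of putting `_parent` inside the document body, a hypothetical child document is indexed with the `parent` url parameter:
[source,js]
---------------
curl -XPUT 'localhost:9200/myindex/child_type/1?parent=42'
{
  "field": "value"
}
---------------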
==== Default date format now is `strictDateOptionalTime`
Instead of `dateOptionalTime`, the new default date format is `strictDateOptionalTime`,
which is stricter in parsing dates. This means that dates now need to have a four-digit year and
a two-digit month, day, hour, minute and second, so you may need to prepend a part of the date
with a zero to make it conform, or switch back to the old `dateOptionalTime` format.
==== Date format does not support unix timestamps by default
In earlier versions of Elasticsearch, every timestamp was always tried first as a
unix timestamp. This means that even when specifying a date format like
`dateOptionalTime`, one could supply unix timestamps instead of an ISO 8601 formatted
date.
This is not supported anymore. If you want to store unix timestamps, you need to specify
the appropriate formats in the mapping, namely `epoch_second` or `epoch_millis`.
In addition the `numeric_resolution` mapping parameter is ignored. Use the
`epoch_second` and `epoch_millis` date formats instead.
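For example, a sketch of a mapping for a hypothetical field that stores unix timestamps in milliseconds:
[source,js]
---------------
curl -XPUT 'localhost:9200/myindex'
{
  "mappings": {
    "type": {
      "properties": {
        "created_at": {
          "type": "date",
          "format": "epoch_millis"
        }
      }
    }
  }
}
---------------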
==== Source field limitations
The `_source` field could previously be disabled dynamically. Since this field
is a critical piece of many features like the Update API, it is no longer
possible to disable.
The options for `compress` and `compress_threshold` have also been removed.
The source field is already compressed. To minimize the storage cost,
set `index.codec: best_compression` in index settings.
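For example, a hypothetical index could be created with this codec as follows:
[source,js]
---------------
curl -XPUT 'localhost:9200/myindex'
{
  "settings": {
    "index.codec": "best_compression"
  }
}
---------------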
==== Boolean fields
Boolean fields used to have a string fielddata with `F` meaning `false` and `T`
meaning `true`. They have been refactored to use numeric fielddata, with `0`
for `false` and `1` for `true`. As a consequence, the format of the responses of
the following APIs changed when applied to boolean fields: `0`/`1` is returned
instead of `F`/`T`:
- <<search-request-fielddata-fields,fielddata fields>>
- <<search-request-sort,sort values>>
- <<search-aggregations-bucket-terms-aggregation,terms aggregations>>
In addition, terms aggregations use a custom formatter for boolean (like for
dates and ip addresses, which are also backed by numbers) in order to return
the user-friendly representation of boolean fields: `false`/`true`:
[source,js]
---------------
"buckets": [
{
"key": 0,
"key_as_string": "false",
"doc_count": 42
},
{
"key": 1,
"key_as_string": "true",
"doc_count": 12
}
]
---------------
==== Murmur3 Fields
Fields of type `murmur3` can no longer change `doc_values` or `index` setting.
They are always stored with doc values, and not indexed.
==== Source field configuration
The `_source` field no longer supports `includes` and `excludes` parameters. When
`_source` is enabled, the entire original source will be stored.
==== Config based mappings
The ability to specify mappings in configuration files has been removed. To specify
default mappings that apply to multiple indexes, use index templates.
The following settings are no longer valid:
* `index.mapper.default_mapping_location`
* `index.mapper.default_percolator_mapping_location`
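For example, a minimal sketch of an index template that provides default mappings for new indices (the template name, pattern and field are hypothetical):
[source,js]
---------------
curl -XPUT 'localhost:9200/_template/my_defaults'
{
  "template": "*",
  "mappings": {
    "type": {
      "properties": {
        "created_at": { "type": "date" }
      }
    }
  }
}
---------------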
=== Codecs
It is no longer possible to specify per-field postings and doc values formats
in the mappings. This setting will be ignored on indices created before
elasticsearch 2.0 and will cause mapping parsing to fail on indices created on
or after 2.0. For old indices, this means that new segments will be written
with the default postings and doc values formats of the current codec.
It is still possible to change the whole codec by using the `index.codec`
setting. Please however note that using a non-default codec is discouraged as
it could prevent future versions of Elasticsearch from being able to read the
index.
=== Scripting settings
Removed support for `script.disable_dynamic` node setting, replaced by
fine-grained script settings described in the <<enable-dynamic-scripting,scripting docs>>.
The following setting previously used to enable dynamic scripts:
[source,yaml]
---------------
script.disable_dynamic: false
---------------
can be replaced with the following two settings in `elasticsearch.yml` that
achieve the same result:
[source,yaml]
---------------
script.inline: on
script.indexed: on
---------------
=== Script parameters
The script parameters `id`, `file`, `scriptField`, `script_id`, `script_file`,
`script`, `lang` and `params` are deprecated. The <<modules-scripting,new script API syntax>> should be used in their place.
The deprecated script parameters have been removed from the Java API, so applications using the Java API will
need to be updated.
=== Groovy scripts sandbox
The Groovy sandbox and related settings have been removed. Groovy is now a
non-sandboxed scripting language, without any option to turn the sandbox on.
=== Plugins making use of scripts
Plugins that make use of scripts must register their own script context through
`ScriptModule`. Script contexts can be used as part of fine-grained settings to
enable/disable scripts selectively.
=== Thrift and memcached transport
The thrift and memcached transport plugins are no longer supported. Instead, use
either the HTTP transport (enabled by default) or the node or transport Java client.
=== `search_type=count` deprecation
The `count` search type has been deprecated. All benefits from this search type can
now be achieved by using the `query_then_fetch` search type (which is the
default) and setting `size` to `0`.
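For example, a request that previously used `search_type=count` can be expressed as follows (the index name, query and aggregation are hypothetical):
[source,js]
---------------
curl -XGET 'localhost:9200/myindex/_search?size=0'
{
  "query": {
    "match": { "field": "value" }
  },
  "aggs": {
    "by_field": {
      "terms": { "field": "field" }
    }
  }
}
---------------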
=== The count api internally uses the search api
The count api is now a shortcut to the search api with `size` set to 0. As a
result, a total failure will result in an exception being returned rather
than a normal response with `count` set to `0` and shard failures.
=== JSONP support
JSONP callback support has now been removed. CORS should be used to access Elasticsearch
over AJAX instead:
[source,yaml]
---------------
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
---------------
=== CORS allowed origins
The CORS allowed origins setting, `http.cors.allow-origin`, no longer has a default value. Previously, the default value
was `*`, which would allow CORS requests from any origin and is considered insecure. The `http.cors.allow-origin` setting
should be specified with only the origins that should be allowed, like so:
[source,yaml]
---------------
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
---------------
=== Cluster state REST api
The cluster state api doesn't return the `routing_nodes` section anymore when
`routing_table` is requested. The newly introduced `routing_nodes` flag can
be used separately to control whether `routing_nodes` should be returned.
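For example, a sketch of requesting both sections together by listing them as metrics in the URL (this assumes the metric-style filtering of the cluster state API):
[source,js]
---------------
curl -XGET 'localhost:9200/_cluster/state/routing_table,routing_nodes'
---------------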
=== Query DSL
Change to ranking behaviour: single-term queries on numeric fields now score in the same way as string fields (use of IDF, norms if enabled).
Previously, term queries on numeric fields were deliberately prevented from using the usual Lucene scoring logic and this behaviour was undocumented and, to some, unexpected.
If the introduction of scoring to numeric fields is undesirable for your query clauses the fix is simple: wrap them in a `constant_score` or use a `filter` expression instead.
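For example, a sketch of a numeric term clause wrapped in `constant_score` so that it contributes no relevance-based score (the field name and value are hypothetical):
[source,js]
---------------
{
  "query": {
    "constant_score": {
      "filter": {
        "term": { "status_code": 404 }
      }
    }
  }
}
---------------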
The `filtered` query is deprecated. Instead you should use a `bool` query with
a `must` clause for the query and a `filter` clause for the filter. For instance
the below query:
[source,js]
---------------
{
  "filtered": {
    "query": {
      // query
    },
    "filter": {
      // filter
    }
  }
}
---------------
can be replaced with
[source,js]
---------------
{
  "bool": {
    "must": {
      // query
    },
    "filter": {
      // filter
    }
  }
}
---------------
and will produce the same scores.
The `fuzzy_like_this` and `fuzzy_like_this_field` queries have been removed.
The `limit` filter is deprecated and becomes a no-op. You can achieve similar
behaviour using the <<search-request-body,terminate_after>> parameter.
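For example, a sketch of limiting the number of documents collected per shard with `terminate_after` (the index name and value are hypothetical):
[source,js]
---------------
curl -XGET 'localhost:9200/myindex/_search?terminate_after=100'
---------------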
`or` and `and` on the one hand and `bool` on the other hand used to have
different performance characteristics depending on the wrapped filters. This is
fixed now; as a consequence the `or` and `and` filters are now deprecated in
favour of `bool`.
The `execution` option of the `terms` filter is now deprecated and ignored if
provided.
The `_cache` and `_cache_key` parameters of filters are deprecated in the REST
layer and removed in the Java API. In case they are specified they will be
ignored. Instead filters are always used as their own cache key and elasticsearch
makes decisions by itself about whether it should cache filters based on how
often they are used.
Java plugins that register custom queries can do so by using the
`IndicesQueriesModule#addQuery(Class<? extends QueryParser>)` method. Other
ways to register custom queries are not supported anymore.
==== Query/filter merge
Elasticsearch no longer makes a difference between queries and filters in the
DSL; it detects when scores are not needed and automatically optimizes the
query to not compute scores and optionally caches the result.
As a consequence the `query` filter serves no purpose anymore and is deprecated.
=== Timezone for date field
Specifying the `time_zone` parameter on queries or aggregations of `date` type fields
must now be either an ISO 8601 UTC offset, or a timezone id. For example, the value
`+1:00` must now be `+01:00`.
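For example, a sketch of a range query on a hypothetical `date` field using the new `time_zone` format:
[source,js]
---------------
{
  "query": {
    "range": {
      "timestamp": {
        "gte": "2015-01-01T00:00:00",
        "lt": "2015-02-01T00:00:00",
        "time_zone": "+01:00"
      }
    }
  }
}
---------------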
=== Snapshot and Restore
Locations of shared file system repositories and of URL repositories with `file:` URLs now have to be registered
using the `path.repo` setting. The `path.repo` setting can contain one or more repository locations:
[source,yaml]
---------------
path.repo: ["/mnt/daily", "/mnt/weekly"]
---------------
If the repository location is specified as an absolute path it has to start with one of the locations
specified in `path.repo`. If the location is specified as a relative path, it will be resolved against the first
location specified in the `path.repo` setting.
URL repositories with `http:`, `https:`, and `ftp:` URLs have to be whitelisted by specifying allowed URLs in the
`repositories.url.allowed_urls` setting. This setting supports wildcards in the place of host, path, query, and
fragment. For example:
[source,yaml]
-----------------------------------
repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
-----------------------------------
The obsolete parameters `expand_wildcards_open` and `expand_wildcards_close` are no longer
supported by the snapshot and restore operations. These parameters have been replaced by
a single `expand_wildcards` parameter. See <<multi-index,the multi-index docs>> for more.
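For example, a sketch of a restore request using the single `expand_wildcards` parameter (the repository, snapshot and index pattern are hypothetical, and this assumes `expand_wildcards` is accepted in the request body):
[source,js]
---------------
curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
{
  "indices": "logs-*",
  "expand_wildcards": "open"
}
---------------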
=== `_shutdown` API
The `_shutdown` API has been removed without a replacement. Nodes should be managed via operating
systems and the provided start/stop scripts.
=== Analyze API
* The Analyze API now returns `0` as the first token's position instead of `1`.
* The `text()` method on `AnalyzeRequest` now returns `String[]` instead of `String`.
=== Multiple `path.data` striping
Previously, if the `path.data` setting listed multiple data paths, then a
shard would be ``striped'' across all paths by writing a whole file to each
path in turn (in accordance with the `index.store.distributor` setting). The
result was that the files from a single segment in a shard could be spread
across multiple disks, and the failure of any one disk could corrupt multiple
shards.
This striping is no longer supported. Instead, different shards may be
allocated to different paths, but all of the files in a single shard will be
written to the same path.
If striping is detected while starting Elasticsearch 2.0.0 or later, all of
the files belonging to the same shard will be migrated to the same path. If
there is not enough disk space to complete this migration, the upgrade will be
cancelled and can only be resumed once enough disk space is made available.
The `index.store.distributor` setting has also been removed.
=== Hunspell dictionary configuration
The parameter `indices.analysis.hunspell.dictionary.location` has been removed,
and `<path.conf>/hunspell` is always used.
=== Java API Transport API construction
The `TransportClient` construction code has changed; it now uses the builder
pattern. Instead of using:
[source,java]
--------------------------------------------------
Settings settings = Settings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings);
--------------------------------------------------
Use:
[source,java]
--------------------------------------------------
Settings settings = Settings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = TransportClient.builder().settings(settings).build();
--------------------------------------------------
=== Logging
Log messages are now truncated at 10,000 characters. This can be changed in the
`logging.yml` configuration file.
[float]
=== Removed `top_children` query
The `top_children` query has been removed in favour of the `has_child` query. The `top_children` query wasn't always faster
than the `has_child` query and the `top_children` query was often inaccurate. The total hits and any aggregations in the
same search request will likely be off if `top_children` was used.
=== Removed file based index templates
Index templates can no longer be configured on disk. Use the `_template` API instead.
[float]
=== Removed `id_cache` from stats apis
Removed the `id_cache` metric from the nodes stats, indices stats and cluster stats APIs. This metric has also been removed
from the shards cat, indices cat and nodes cat APIs. Parent/child memory is now reported under fielddata, because it
has internally been using fielddata for a while now.
To see just how much memory the parent/child field data is using, the `fielddata_fields` option can be used on the stats
APIs. Indices stats example:
[source,js]
--------------------------------------------------
curl -XGET "http://localhost:9200/_stats/fielddata?pretty&human&fielddata_fields=_parent"
--------------------------------------------------
Parent/child has been using field data for the `_parent` field since version `1.1.0`, but the memory stats for the `_parent`
field were still shown under the `id_cache` metric in the stats APIs for backwards compatibility reasons between 1.x versions.
Before version `1.1.0`, parent/child had its own in-memory data structures for id values in the `_parent` field.
[float]
=== Removed `id_cache` from clear cache api
Removed the `id_cache` option from the clear cache API. The `fielddata` option should be used to clear the `_parent` field
from fielddata.
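For example, a sketch of clearing the `_parent` field from fielddata on a hypothetical index (this assumes the `fielddata` and `fields` parameters of the clear cache API):
[source,js]
---------------
curl -XPOST 'localhost:9200/myindex/_cache/clear?fielddata=true&fields=_parent'
---------------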
[float]
=== Highlighting
The default value for the `require_field_match` option is `true` rather than
`false`, meaning that the highlighters will take the fields that were queried
into account by default. That means for instance that highlighting any field
when querying the `_all` field will produce no highlighted snippets by default,
given that the match was on the `_all` field only. Querying the same fields
that need to be highlighted is the cleaner solution to get highlighted snippets
back. Otherwise the `require_field_match` option can be set to `false` to ignore
field names completely when highlighting.
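For example, a sketch of explicitly setting `require_field_match` back to `false` for a hypothetical field:
[source,js]
---------------
{
  "query": {
    "match": { "_all": "quick brown fox" }
  },
  "highlight": {
    "require_field_match": false,
    "fields": {
      "content": {}
    }
  }
}
---------------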
The postings highlighter doesn't support the `require_field_match` option
anymore; it will only highlight fields that were queried.
The `match` query with type set to `match_phrase_prefix` is not supported by the
postings highlighter. No highlighted snippets will be returned.
[float]
=== Parent/child
Parent/child has been rewritten completely to reduce memory usage and to execute
`has_child` and `has_parent` queries faster and more efficiently. The `_parent` field
uses doc values by default. The refactored and improved implementation is only active
for indices created on or after version 2.0.
In order to benefit from all performance and memory improvements, we recommend reindexing all
indices that have the `_parent` field and were created before the upgrade to 2.0.
The following breaks in backwards compatibility apply to indices with the `_parent` field
created on clusters running version 2.0 or later:
* The `type` option on the `_parent` field can only point to a parent type that doesn't exist yet,
so this means that an existing type/mapping can no longer become a parent type.
* The `has_child` and `has_parent` queries can no longer be used in alias filters.
=== Meta fields returned under the top-level json object
When selecting meta fields such as `_routing` or `_timestamp`, the field values
are now returned directly as top-level properties of the JSON object, instead of being
put under `fields` like regular stored fields.
[source,sh]
---------------
curl -XGET 'localhost:9200/test/_search?fields=_timestamp,foo'
---------------
[source,js]
---------------
{
  [...]
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": "test",
        "_type": "test",
        "_id": "1",
        "_score": 1,
        "_timestamp": 10000000,
        "fields": {
          "foo" : [ "bar" ]
        }
      }
    ]
  }
}
---------------
=== Settings for resource watcher have been renamed
The setting names for configuring the resource watcher have been renamed
to prevent clashes with the watcher plugin:
* `watcher.enabled` is now `resource.reload.enabled`
* `watcher.interval` is now `resource.reload.interval`
* `watcher.interval.low` is now `resource.reload.interval.low`
* `watcher.interval.medium` is now `resource.reload.interval.medium`
* `watcher.interval.high` is now `resource.reload.interval.high`
=== Percolator stats
The `percolate.getTime` stat (total time spent on percolating) has been renamed to `percolate.time`.
=== Plugin Manager for official plugins
Some of the official Elasticsearch plugins have been moved to the Elasticsearch repository and will be released at the
same time as Elasticsearch itself, using the same version number.
In that case, the plugin manager can now use a simpler form to identify an official plugin. Instead of:
[source,sh]
---------------
bin/plugin install elasticsearch/plugin_name/version
---------------
You can use:
[source,sh]
---------------
bin/plugin install plugin_name
---------------
The plugin manager will recognize this form and will be able to download the right version for your Elasticsearch
version.
For older versions of Elasticsearch, you still have to use the older form.
For the record, official plugins which can use this new simplified form are:
* elasticsearch-analysis-icu
* elasticsearch-analysis-kuromoji
* elasticsearch-analysis-phonetic
* elasticsearch-analysis-smartcn
* elasticsearch-analysis-stempel
* elasticsearch-cloud-aws
* elasticsearch-cloud-azure
* elasticsearch-cloud-gce
* elasticsearch-delete-by-query
* elasticsearch-lang-javascript
* elasticsearch-lang-python
=== `/bin/elasticsearch` version needs `-V` parameter
Due to switching to Elasticsearch's internal command line parsing
infrastructure for the plugin manager and the Elasticsearch startup
script, the `-v` parameter now stands for `--verbose`, whereas `-V` or
`--version` can be used to show the Elasticsearch version and exit.
=== `/bin/elasticsearch` dynamic parameters must come after static ones
If you are setting configuration options like cluster name or node name via
the command line, you have to ensure that static options like the pid file
path or daemonizing always come first, like this:
[source,sh]
---------------
/bin/elasticsearch -d -p /tmp/foo.pid --http.cors.enabled=true --http.cors.allow-origin='*'
---------------
For a list of those static parameters, run `/bin/elasticsearch -h`.
=== Aliases
Fields used in alias filters no longer have to exist in the mapping at alias creation time. Alias filters are now
parsed at request time and then the fields in filters are resolved from the mapping, whereas before alias filters were
parsed at alias creation time and the parsed form was kept around in memory.
=== _analyze API
The `prefer_local` option has been removed from the `_analyze` API. The `_analyze` API is a light operation and the caller shouldn't
be concerned about whether it executes on the node that receives the request or on another node.