Docs: Updated annotations for 2.0.0-beta1

This commit is contained in:
Clinton Gormley 2015-08-14 10:51:09 +02:00
parent dcf3f4679f
commit c6c3a40cb6
32 changed files with 311 additions and 188 deletions

View File

@ -51,7 +51,7 @@ Combine a query clause in query context with another in filter context. deprecat
<<java-query-dsl-limit-query,`limit` query>>::
Limits the number of documents examined per shard. deprecated[1.6.0]
Limits the number of documents examined per shard.
include::constant-score-query.asciidoc[]

View File

@ -1,8 +1,6 @@
[[java-query-dsl-limit-query]]
==== Limit Query
deprecated[1.6.0, Use <<java-search-terminate-after,terminateAfter()>> instead]
See {ref}/query-dsl-limit-query.html[Limit Query]
[source,java]

View File

@ -2,7 +2,7 @@
== Pipeline Aggregations
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-avg-bucket-aggregation]]
=== Avg Bucket Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-bucket-script-aggregation]]
=== Bucket Script Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-bucket-selector-aggregation]]
=== Bucket Selector Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-cumulative-sum-aggregation]]
=== Cumulative Sum Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-derivative-aggregation]]
=== Derivative Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-max-bucket-aggregation]]
=== Max Bucket Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-min-bucket-aggregation]]
=== Min Bucket Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-movavg-aggregation]]
=== Moving Average Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-serialdiff-aggregation]]
=== Serial Differencing Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -1,7 +1,7 @@
[[search-aggregations-pipeline-sum-bucket-aggregation]]
=== Sum Bucket Aggregation
coming[2.0.0]
coming[2.0.0-beta1]
experimental[]

View File

@ -81,7 +81,7 @@ omit :
[float]
==== Distributed frequencies
coming[2.0]
coming[2.0.0-beta1]
Setting `dfs` to `true` (default is `false`) will return the term statistics
or the field statistics of the entire index, rather than just of the current shard. Use it
@ -90,7 +90,7 @@ with caution as distributed frequencies can have a serious performance impact.
[float]
==== Terms Filtering
coming[2.0]
coming[2.0.0-beta1]
With the `filter` parameter, the terms returned can also be filtered based
on their tf-idf scores. This can be useful in order to find out a good

View File

@ -1,8 +1,8 @@
[[elasticsearch-reference]]
= Elasticsearch Reference
:version: 1.5.2
:branch: 1.5
:version: 2.0.0-beta1
:branch: 2.0
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current

View File

@ -16,7 +16,7 @@ curl -XGET 'localhost:9200/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0, body based parameters were added in 2.0.0]
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
If the `text` parameter is provided as an array of strings, it is analyzed as a multi-valued field.
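A sketch of such a request, assuming the body-based syntax shown above (the analyzer and text values are arbitrary):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/_analyze' -d '
{
  "analyzer" : "standard",
  "text" : ["this is a test", "the second text"]
}'
--------------------------------------------------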
@ -29,7 +29,7 @@ curl -XGET 'localhost:9200/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0, body based parameters were added in 2.0.0]
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
Or by building a custom transient analyzer out of tokenizers,
token filters and char filters. Token filters can use the shorter 'filters'
@ -53,7 +53,7 @@ curl -XGET 'localhost:9200/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0, body based parameters were added in 2.0.0]
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
It can also run against a specific index:
@ -78,7 +78,7 @@ curl -XGET 'localhost:9200/test/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0, body based parameters were added in 2.0.0]
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
Also, the analyzer can be derived based on a field mapping, for example:
@ -91,7 +91,7 @@ curl -XGET 'localhost:9200/test/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0, body based parameters were added in 2.0.0]
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
This will cause the analysis to happen based on the analyzer configured in the
mapping for `obj1.field1` (or, if none is configured, the default index analyzer).

View File

@ -51,7 +51,7 @@ Elasticsearch 2.0. Upgrading will:
* Rewrite old segments in the latest Lucene format.
* Add the `index.version.minimum_compatible` setting to the index, to mark it as
2.0 compatible coming[1.6.0].
2.0 compatible.
Instead of upgrading all segments that weren't written with the most recent
version of Lucene, you can choose to do the minimum work required before

View File

@ -1,7 +1,7 @@
[[mapping-parent-field]]
=== `_parent` field
added[2.0.0,The parent-child implementation has been completely rewritten. It is advisable to reindex any 1.x indices which use parent-child to take advantage of the new optimizations]
added[2.0.0-beta1,The parent-child implementation has been completely rewritten. It is advisable to reindex any 1.x indices which use parent-child to take advantage of the new optimizations]
A parent-child relationship can be established between documents in the same
index by making one mapping type the parent of another:
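A minimal sketch of such a mapping (the index and type names below are hypothetical):

[source,js]
--------------------------------------------------
PUT my_index
{
  "mappings": {
    "my_parent": {},
    "my_child": {
      "_parent": {
        "type": "my_parent" <1>
      }
    }
  }
}
--------------------------------------------------
<1> The `my_child` type declares `my_parent` as its parent type.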

View File

@ -4,6 +4,276 @@
This section discusses the changes that you need to be aware of when migrating
your application to Elasticsearch 2.0.
[float]
=== Indices created before 0.90
Elasticsearch 2.0 can read indices created in version 0.90 and above. If any
of your indices were created before 0.90 you will need to upgrade to the
latest 1.x version of Elasticsearch first, in order to upgrade your indices or
to delete the old indices. Elasticsearch will not start in the presence of old
indices.
[float]
=== Elasticsearch migration plugin
We have provided the https://github.com/elastic/elasticsearch-migration[Elasticsearch migration plugin]
to help you detect any issues that you may have when upgrading to
Elasticsearch 2.0. Please install and run the plugin *before* upgrading.
=== Mapping

* Remove file based default mappings #10870 (issue: #10620)
* Validate dynamic mappings updates on the master node. #10634 (issues: #8650, #8688)
* Remove the ability to have custom per-field postings and doc values formats. #9741 (issue: #8746)
* Remove support for new indexes using path setting in object/nested fields or index_name in any field #9570 (issue: #6677)
* Remove index_analyzer setting to simplify analyzer logic #9451 (issue: #9371)
* Remove type level default analyzers #9430 (issues: #8874, #9365)
* Add doc values support to boolean fields. #7961 (issues: #4678, #7851)
A number of changes have been made to mappings to remove ambiguity and to
ensure that conflicting mappings cannot be created.
==== Conflicting field mappings
Fields with the same name, in the same index, in different types, must have
the same mapping, with the exception of the <<copy-to>>, <<dynamic>>,
<<enabled>>, <<ignore-above>>, <<include-in-all>>, and <<properties>>
parameters, which may have different settings per field.
[source,js]
---------------
PUT my_index
{
"mappings": {
"type_one": {
"properties": {
"name": { <1>
"type": "string"
}
}
},
"type_two": {
"properties": {
"name": { <1>
"type": "string",
"analyzer": "english"
}
}
}
}
}
---------------
<1> The two `name` fields have conflicting mappings and will prevent Elasticsearch
from starting.
Elasticsearch will not start in the presence of conflicting field mappings.
These indices must be deleted or reindexed using a new mapping.
The `ignore_conflicts` option of the put mappings API has been removed.
Conflicts can't be ignored anymore.
==== Fields cannot be referenced by short name
A field can no longer be referenced using its short name. Instead, the full
path to the field is required. For instance:
[source,js]
---------------
PUT my_index
{
"mappings": {
"my_type": {
"properties": {
"title": { "type": "string" }, <1>
"name": {
"properties": {
"title": { "type": "string" }, <2>
"first": { "type": "string" },
"last": { "type": "string" }
}
}
}
}
}
}
---------------
<1> This field is referred to as `title`.
<2> This field is referred to as `name.title`.
Previously, the two `title` fields in the example above could have been
confused with each other when using the short name `title`.
==== Type name prefix removed
Previously, two fields with the same name in two different types could
sometimes be disambiguated by prepending the type name. As a side effect, it
would add a filter on the type name to the relevant query. This feature was
ambiguous -- a type name could be confused with a field name -- and didn't
work everywhere, e.g. in aggregations.
Instead, fields should be specified with the full path, but without a type
name prefix. If you wish to filter by the `_type` field, either specify the
type in the URL or add an explicit filter.
The following example query in 1.x:
[source,js]
----------------------------
GET my_index/_search
{
"query": {
"match": {
"my_type.some_field": "quick brown fox"
}
}
}
----------------------------
would be rewritten in 2.0 as:
[source,js]
----------------------------
GET my_index/my_type/_search <1>
{
"query": {
"match": {
"some_field": "quick brown fox" <2>
}
}
}
----------------------------
<1> The type name can be specified in the URL to act as a filter.
<2> The field name should be specified without the type prefix.
==== Field names may not contain dots
In 1.x, it was possible to create fields with dots in their name, for
instance:
[source,js]
----------------------------
PUT my_index
{
"mappings": {
"my_type": {
"properties": {
"foo.bar": { <1>
"type": "string"
},
"foo": {
"properties": {
"bar": { <1>
"type": "string"
}
}
}
}
}
}
}
----------------------------
<1> These two fields cannot be distinguished, as both are referred to as `foo.bar`.
You can no longer create fields with dots in the name.
==== Type names may not start with a dot
In 1.x, Elasticsearch would issue a warning if a type name included a dot,
e.g. `my.type`. Now that type names are no longer used to distinguish between
fields in different types, this warning has been relaxed: type names may now
contain dots, but they may not *begin* with a dot. The only exception to this
is the special `.percolator` type.
==== Types may no longer be deleted
In 1.x it was possible to delete a type mapping, along with all of the
documents of that type, using the delete mapping API. This is no longer
supported, because remnants of the fields in the type could remain in the
index, causing corruption later on.
==== Type meta-fields
The <<mapping-fields,meta-fields>> associated with each type have had configuration options
removed, to make them more reliable:
* `_id` configuration can no longer be changed. If you need to sort, use the <<mapping-uid-field,`_uid`>> field instead.
* `_type` configuration can no longer be changed.
* `_index` configuration can no longer be changed.
* `_routing` configuration is limited to marking routing as required.
* `_field_names` configuration is limited to disabling the field.
* `_size` configuration is limited to enabling the field.
* `_timestamp` configuration is limited to enabling the field, setting format and default value.
* `_boost` has been removed.
* `_analyzer` has been removed.
Importantly, *meta-fields can no longer be specified as part of the document
body.* Instead, they must be specified in the query string parameters. For
instance, in 1.x, the `routing` could be specified as follows:
[source,json]
-----------------------------
PUT my_index
{
"mappings": {
"my_type": {
"_routing": {
"path": "group" <1>
},
"properties": {
"group": { <1>
"type": "string"
}
}
}
}
}
PUT my_index/my_type/1 <2>
{
"group": "foo"
}
-----------------------------
<1> This 1.x mapping tells Elasticsearch to extract the `routing` value from the `group` field in the document body.
<2> This indexing request uses a `routing` value of `foo`.
In 2.0, the routing must be specified explicitly:
[source,json]
-----------------------------
PUT my_index
{
"mappings": {
"my_type": {
"_routing": {
"required": true <1>
},
"properties": {
"group": {
"type": "string"
}
}
}
}
}
PUT my_index/my_type/1?routing=bar <2>
{
"group": "foo"
}
-----------------------------
<1> Routing can be marked as required to ensure it is not forgotten during indexing.
<2> This indexing request uses a `routing` value of `bar`.
==== Other mapping changes
* The setting `index.mapping.allow_type_wrapper` has been removed. Documents should always be sent without the type as the root element, as shown in the sketch below.
* The `binary` field does not support the `compress` and `compress_threshold` options anymore.
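As a hedged illustration of the first point above, a document for a hypothetical `my_type` type is now sent with its fields at the root of the body, rather than wrapped in a `my_type` object:

[source,js]
---------------
PUT my_index/my_type/1
{
  "title": "Some document" <1>
}
---------------
<1> The document fields appear directly at the root of the body; a wrapping `"my_type": { ... }` object is no longer accepted.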
=== Networking
Elasticsearch now binds to the loopback interface by default (usually 127.0.0.1
@ -188,141 +458,6 @@ Delete api requires a routing value when deleting a document belonging to a type
mapping, whereas previous Elasticsearch versions would trigger a broadcast delete on all shards belonging to the index.
A `RoutingMissingException` is now thrown instead.
=== Mappings
* The setting `index.mapping.allow_type_wrapper` has been removed. Documents should always be sent without the type as the root element.
* The delete mappings API has been removed. Mapping types can no longer be deleted.
* Mapping type names can no longer start with dots.
* The `ignore_conflicts` option of the put mappings API has been removed. Conflicts can't be ignored anymore.
* The `binary` field does not support the `compress` and `compress_threshold` options anymore.
==== Removed type prefix on field names in queries
Types can no longer be specified on fields within queries. Instead, specify type restrictions in the search request.
The following is an example query in 1.x over types `t1` and `t2`:
[source,js]
---------------
curl -XGET 'localhost:9200/index/_search'
{
"query": {
"bool": {
"should": [
{"match": { "t1.field_only_in_t1": "foo" }},
{"match": { "t2.field_only_in_t2": "bar" }}
]
}
}
}
---------------
In 2.0, the query should look like the following:
[source,js]
---------------
curl -XGET 'localhost:9200/index/t1,t2/_search'
{
"query": {
"bool": {
"should": [
{"match": { "field_only_in_t1": "foo" }},
{"match": { "field_only_in_t2": "bar" }}
]
}
}
}
---------------
==== Removed short name field access
Field names in queries, aggregations, etc. must now use the complete name. Use of the short name
caused ambiguities in field lookups when the same name existed within multiple object mappings.
The following example illustrates the difference between 1.x and 2.0.
Given these mappings:
[source,js]
---------------
curl -XPUT 'localhost:9200/index'
{
"mappings": {
"type": {
"properties": {
"name": {
"type": "object",
"properties": {
"first": {"type": "string"},
"last": {"type": "string"}
}
}
}
}
}
}
---------------
The following query was possible in 1.x:
[source,js]
---------------
curl -XGET 'localhost:9200/index/type/_search'
{
"query": {
"match": { "first": "foo" }
}
}
---------------
In 2.0, the same query should now be:
[source,js]
---------------
curl -XGET 'localhost:9200/index/type/_search'
{
"query": {
"match": { "name.first": "foo" }
}
}
---------------
==== Removed support for `.` in field name mappings
Prior to Elasticsearch 2.0, a field could be defined to have a `.` in its name.
Mappings like the one below have been deprecated for some time and they will be
blocked in Elasticsearch 2.0.
[source,js]
---------------
curl -XPUT 'localhost:9200/index'
{
"mappings": {
"type": {
"properties": {
"name.first": {
"type": "string"
}
}
}
}
}
---------------
==== Meta fields have limited configuration
Meta fields (those beginning with underscore) are fields used by elasticsearch
to provide special features. They now have limited configuration options.
* `_id` configuration can no longer be changed. If you need to sort, use `_uid` instead.
* `_type` configuration can no longer be changed.
* `_index` configuration can no longer be changed.
* `_routing` configuration is limited to requiring the field.
* `_boost` has been removed.
* `_field_names` configuration is limited to disabling the field.
* `_size` configuration is limited to enabling the field.
* `_timestamp` configuration is limited to enabling the field, setting format and default value
==== Meta fields in documents
Meta fields can no longer be specified within a document. They should be specified
via the API. For example, instead of adding a field `_parent` within a document,
use the `parent` url parameter when indexing that document.
==== Default date format now is `strictDateOptionalTime`
@ -389,10 +524,6 @@ the user-friendly representation of boolean fields: `false`/`true`:
Fields of type `murmur3` can no longer change `doc_values` or `index` setting.
They are always stored with doc values, and not indexed.
==== Source field configuration
The `_source` field no longer supports `includes` and `excludes` parameters. When
`_source` is enabled, the entire original source will be stored.
==== Config based mappings
The ability to specify mappings in configuration files has been removed. To specify
default mappings that apply to multiple indexes, use index templates.
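A minimal sketch of such a template (the template name, index pattern and default mapping below are hypothetical):

[source,js]
---------------
PUT /_template/default_mappings
{
  "template": "logs-*", <1>
  "mappings": {
    "_default_": {
      "_all": { "enabled": false }
    }
  }
}
---------------
<1> The template is applied to any new index whose name matches this pattern.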
@ -437,10 +568,10 @@ script.indexed: on
=== Script parameters
Deprecated script parameters `id`, `file`, `scriptField`, `script_id`, `script_file`,
`script`, `lang` and `params`. The <<modules-scripting,new script API syntax>> should be used in their place.
The deprecated script parameters have been removed from the Java API so applications using the Java API will
need to be updated.
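As a rough sketch of what the new syntax can look like in a search request (the index, field and script below are made up):

[source,js]
---------------
GET my_index/_search
{
  "script_fields": {
    "doubled": {
      "script": {
        "inline": "doc['my_field'].value * 2", <1>
        "lang": "groovy"
      }
    }
  }
}
---------------
<1> The script is passed as a structured object with `inline`, `lang` and (optionally) `params` keys, instead of the deprecated flat parameters.

Depending on the node configuration, inline scripting may need to be enabled before such a request is accepted.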
=== Groovy scripts sandbox

View File

@ -258,7 +258,7 @@ GET /_snapshot/my_backup/_all
-----------------------------------
// AUTOSENSE
coming[2.0] A currently running snapshot can be retrieved using the following command:
coming[2.0.0-beta1] A currently running snapshot can be retrieved using the following command:
[source,sh]
-----------------------------------

View File

@ -1,7 +1,7 @@
[[query-dsl-and-query]]
=== And Query
deprecated[2.0.0, Use the `bool` query instead]
deprecated[2.0.0-beta1, Use the `bool` query instead]
A query that matches documents using the `AND` boolean operator on other
queries.
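For reference, a hedged sketch of how two clauses that would previously have been wrapped in an `and` query can be expressed with `bool` (the field names are illustrative):

[source,js]
--------------------------------------------------
{
  "query": {
    "bool": {
      "must": [
        { "term":  { "status": "published" } },
        { "range": { "post_date": { "gte": "2015-01-01" } } }
      ]
    }
  }
}
--------------------------------------------------

Both clauses must match, which is the same behaviour the `and` query provided.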

View File

@ -47,11 +47,11 @@ Synonyms for the `bool` query.
<<query-dsl-filtered-query,`filtered` query>>::
Combine a query clause in query context with another in filter context. deprecated[2.0.0,Use the `bool` query instead]
Combine a query clause in query context with another in filter context. deprecated[2.0.0-beta1,Use the `bool` query instead]
<<query-dsl-limit-query,`limit` query>>::
Limits the number of documents examined per shard. deprecated[1.6.0]
Limits the number of documents examined per shard.
include::constant-score-query.asciidoc[]

View File

@ -1,7 +1,7 @@
[[query-dsl-filtered-query]]
=== Filtered Query
deprecated[2.0.0, Use the `bool` query instead with a `must` clause for the query and a `filter` clause for the filter]
deprecated[2.0.0-beta1, Use the `bool` query instead with a `must` clause for the query and a `filter` clause for the filter]
The `filtered` query is used to combine a query which will be used for
scoring with another query which will only be used for filtering the result

View File

@ -1,8 +1,6 @@
[[query-dsl-limit-query]]
=== Limit Query
deprecated[1.6.0, Use <<search-request-body,terminate_after>> instead]
A limit query limits the number of documents examined per shard.
For example:

View File

@ -149,7 +149,7 @@ input, the other one for term selection and for query formation.
==== Document Input Parameters
[horizontal]
`like`:: coming[2.0]
`like`:: coming[2.0.0-beta1]
The only *required* parameter of the MLT query is `like`, which follows a
versatile syntax, in which the user can specify free form text and/or a single
or multiple documents (see examples above). The syntax to specify documents is
@ -162,7 +162,7 @@ follows a similar syntax to the `per_field_analyzer` parameter of the
Additionally, to provide documents not necessarily present in the index,
<<docs-termvectors-artificial-doc,artificial documents>> are also supported.
`unlike`:: coming[2.0]
`unlike`:: coming[2.0.0-beta1]
The `unlike` parameter is used in conjunction with `like` in order not to
select terms found in a chosen set of documents. In other words, we could ask
for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax
@ -172,10 +172,10 @@ is the same as `like`.
A list of fields to fetch and analyze the text from. Defaults to the `_all`
field for free text and to all possible fields for document inputs.
`like_text`:: deprecated[2.0,Replaced by `like`]
`like_text`:: deprecated[2.0.0-beta1,Replaced by `like`]
The text to find documents like it.
`ids` or `docs`:: deprecated[2.0,Replaced by `like`]
`ids` or `docs`:: deprecated[2.0.0-beta1,Replaced by `like`]
A list of documents following the same syntax as the <<docs-multi-get,Multi GET API>>.
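Putting these parameters together, a sketch of a query that combines free text, a stored document and `unlike` could look as follows (the index, type, id and field names are hypothetical):

[source,js]
--------------------------------------------------
{
  "query": {
    "more_like_this": {
      "fields": ["title", "description"],
      "like": [
        "Once upon a time",
        { "_index": "imdb", "_type": "movies", "_id": "1" }
      ],
      "unlike": "cake crumble tree",
      "min_term_freq": 1
    }
  }
}
--------------------------------------------------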
[float]

View File

@ -1,7 +1,7 @@
[[query-dsl-or-query]]
=== Or Query
deprecated[2.0.0, Use the `bool` query instead]
deprecated[2.0.0-beta1, Use the `bool` query instead]
A query that matches documents using the `OR` boolean operator on other
queries.

View File

@ -63,7 +63,7 @@ curl -XGET <1> 'localhost:9200/_search/scroll' <2> -d'
'
--------------------------------------------------
coming[2.0.0, body based parameters were added in 2.0.0]
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
<1> `GET` or `POST` can be used.
<2> The URL should not include the `index` or `type` name -- these
@ -188,7 +188,7 @@ curl -XDELETE localhost:9200/_search/scroll -d '
}'
---------------------------------------
coming[2.0.0, Body based parameters were added in 2.0.0]
coming[2.0.0-beta1, Body based parameters were added in 2.0.0]
Multiple scroll IDs can be passed as an array:
@ -200,7 +200,7 @@ curl -XDELETE localhost:9200/_search/scroll -d '
}'
---------------------------------------
coming[2.0.0, Body based parameters were added in 2.0.0]
coming[2.0.0-beta1, Body based parameters were added in 2.0.0]
All search contexts can be cleared with the `_all` parameter:
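A sketch of such a request, assuming a node listening on the default local port:

[source,js]
---------------------------------------
curl -XDELETE localhost:9200/_search/scroll/_all
---------------------------------------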

View File

@ -65,7 +65,7 @@ scoring.
[[count]]
==== Count
deprecated[2.0.0, `count` does not provide any benefits over `query_then_fetch` with a `size` of `0`]
deprecated[2.0.0-beta1, `count` does not provide any benefits over `query_then_fetch` with a `size` of `0`]
Parameter value: *count*.
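For comparison, a hedged sketch of the equivalent request using the default `query_then_fetch` search type with a `size` of `0` (the index name is made up):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/my_index/_search' -d '
{
  "size": 0,
  "query": { "match_all": {} }
}'
--------------------------------------------------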

View File

@ -104,7 +104,7 @@ Defaults to no terminate_after.
|`search_type` |The type of the search operation to perform. Can be
`dfs_query_then_fetch`, `query_then_fetch`, `scan` or `count`
deprecated[2.0,Replaced by `size: 0`]. Defaults to `query_then_fetch`. See
deprecated[2.0.0-beta1,Replaced by `size: 0`]. Defaults to `query_then_fetch`. See
<<search-request-search-type,_Search Type_>> for
more details on the different types of search that can be performed.
|=======================================================================

View File

@ -104,7 +104,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_validate/query?q=post_date:foo&
}
--------------------------------------------------
coming[1.6] When the query is valid, the explanation defaults to the string
When the query is valid, the explanation defaults to the string
representation of that query. With `rewrite` set to `true`, the explanation
is more detailed showing the actual Lucene query that will be executed.
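Reusing the query above, and assuming `rewrite` is passed as a URL parameter alongside `explain`, such a request could look like:

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/_validate/query?q=post_date:foo&explain=true&rewrite=true'
--------------------------------------------------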

View File

@ -41,8 +41,6 @@ PUT /_cluster/settings
==== Step 2: Perform a synced flush
added[1.6.0,Synced flush is only supported in Elasticsearch 1.6.0 and above]
Shard recovery will be much faster if you stop indexing and issue a
<<indices-synced-flush, synced-flush>> request:
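A minimal sketch of such a request, issued here against all indices:

[source,js]
--------------------------------------------------
POST /_flush/synced
--------------------------------------------------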

View File

@ -32,8 +32,6 @@ PUT /_cluster/settings
==== Step 2: Stop non-essential indexing and perform a synced flush (Optional)
added[1.6.0,Synced flush is only supported in Elasticsearch 1.6.0 and above]
You may happily continue indexing during the upgrade. However, shard recovery
will be much faster if you temporarily stop non-essential indexing and issue a
<<indices-synced-flush, synced-flush>> request: