Updated docs for 3.0.0-beta

Clinton Gormley 2015-10-07 13:27:36 +02:00
parent 53f316b540
commit dc018cf622
25 changed files with 8 additions and 70 deletions

View File

@@ -2,8 +2,6 @@
== Pipeline Aggregations
coming[2.0.0-beta1]
experimental[]
Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-avg-bucket-aggregation]]
=== Avg Bucket Aggregation
coming[2.0.0-beta1]
experimental[]
A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation.
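As a minimal sketch of what a sibling `avg_bucket` computes (the bucket values below are invented for illustration, not taken from the docs):

```python
# Hypothetical metric values, one per bucket of a sibling aggregation.
bucket_values = [550.0, 60.0, 375.0]

# avg_bucket emits the mean of that metric across all sibling buckets.
avg_value = sum(bucket_values) / len(bucket_values)
```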

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-bucket-script-aggregation]]
=== Bucket Script Aggregation
coming[2.0.0-beta1]
experimental[]
A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics
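The idea of a per-bucket script can be sketched in Python (the metric names and values here are hypothetical, and the real aggregation evaluates a script over `buckets_path` values):

```python
# Hypothetical buckets, each carrying two per-bucket metric values.
buckets = [
    {"total_sales": 100.0, "tshirt_sales": 30.0},
    {"total_sales": 200.0, "tshirt_sales": 50.0},
]

# bucket_script evaluates an expression per bucket over selected metrics,
# here a percentage derived from the two metrics.
for b in buckets:
    b["tshirt_pct"] = b["tshirt_sales"] / b["total_sales"] * 100.0
```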

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-bucket-selector-aggregation]]
=== Bucket Selector Aggregation
coming[2.0.0-beta1]
experimental[]
A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained
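Conceptually, the selector keeps only the buckets for which its script returns true — a toy sketch (keys, metric, and threshold are made up):

```python
buckets = [
    {"key": "2015-01", "total": 50.0},
    {"key": "2015-02", "total": 250.0},
]

# bucket_selector retains a bucket only when the script evaluates to true,
# here the condition "total > 100".
retained = [b for b in buckets if b["total"] > 100.0]
```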

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-cumulative-sum-aggregation]]
=== Cumulative Sum Aggregation
coming[2.0.0-beta1]
experimental[]
A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram)
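The running-total idea can be sketched as (per-bucket values are invented):

```python
values = [10.0, 20.0, 30.0]  # hypothetical per-bucket metric values

# cumulative_sum emits, for each bucket, the total of all values so far.
cumulative = []
running = 0.0
for v in values:
    running += v
    cumulative.append(running)
```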

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-derivative-aggregation]]
=== Derivative Aggregation
coming[2.0.0-beta1]
experimental[]
A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
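A first-order derivative over histogram buckets is just the difference between consecutive bucket values — a sketch with invented numbers:

```python
values = [100.0, 130.0, 125.0]  # hypothetical per-bucket metric values

# The first bucket has no previous value, so it gets no derivative;
# every later bucket emits current minus previous.
derivative = [values[i] - values[i - 1] for i in range(1, len(values))]
```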

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-extended-stats-bucket-aggregation]]
=== Extended Stats Bucket Aggregation
coming[2.1.0]
experimental[]
A sibling pipeline aggregation which calculates a variety of stats across all buckets of a specified metric in a sibling aggregation.

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-max-bucket-aggregation]]
=== Max Bucket Aggregation
coming[2.0.0-beta1]
experimental[]
A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-min-bucket-aggregation]]
=== Min Bucket Aggregation
coming[2.0.0-beta1]
experimental[]
A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-movavg-aggregation]]
=== Moving Average Aggregation
coming[2.0.0-beta1]
experimental[]
Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
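The "simple" windowed-mean model can be sketched as follows (values and window size are invented; the real aggregation also supports other models such as weighted and exponential):

```python
values = [1.0, 2.0, 3.0, 4.0, 5.0]  # hypothetical ordered bucket values
window = 3

# Slide a fixed-size window across the series and emit each window's mean.
moving_avg = []
for i in range(window - 1, len(values)):
    w = values[i - window + 1 : i + 1]
    moving_avg.append(sum(w) / window)
```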

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-percentiles-bucket-aggregation]]
=== Percentiles Bucket Aggregation
coming[2.1.0]
experimental[]
A sibling pipeline aggregation which calculates percentiles across all buckets of a specified metric in a sibling aggregation.
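A toy nearest-rank percentile over sibling-bucket values illustrates the idea (the values are made up, and Elasticsearch's exact interpolation may differ):

```python
def percentile(values, p):
    # Nearest-rank style percentile over the sorted bucket values.
    ordered = sorted(values)
    idx = min(int(p / 100.0 * len(ordered)), len(ordered) - 1)
    return ordered[idx]

bucket_values = [350.0, 60.0, 375.0, 100.0]  # hypothetical per-bucket metric
```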

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-serialdiff-aggregation]]
=== Serial Differencing Aggregation
coming[2.0.0-beta1]
experimental[]
Serial differencing is a technique where an earlier value in a time series is subtracted from the current value at
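The technique can be sketched in Python (values are invented, and the lag of 2 is chosen purely for illustration):

```python
values = [30.0, 32.0, 35.0, 41.0, 46.0]  # hypothetical time series
lag = 2

# Each output is the current value minus the value `lag` positions earlier;
# the first `lag` positions have no earlier value to subtract.
serial_diff = [values[i] - values[i - lag] for i in range(lag, len(values))]
```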

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-stats-bucket-aggregation]]
=== Stats Bucket Aggregation
coming[2.1.0]
experimental[]
A sibling pipeline aggregation which calculates a variety of stats across all buckets of a specified metric in a sibling aggregation.

View File

@@ -1,8 +1,6 @@
[[search-aggregations-pipeline-sum-bucket-aggregation]]
=== Sum Bucket Aggregation
coming[2.0.0-beta1]
experimental[]
A sibling pipeline aggregation which calculates the sum across all buckets of a specified metric in a sibling aggregation.

View File

@@ -131,8 +131,6 @@ operation based on the `_parent` / `_routing` mapping.
[[bulk-timestamp]]
=== Timestamp
deprecated[2.0.0,The `_timestamp` field is deprecated. Instead, use a normal <<date,`date`>> field and set its value explicitly]
Each bulk item can include the timestamp value using the
`_timestamp`/`timestamp` field. It automatically follows the behavior of
the index operation based on the `_timestamp` mapping.
@@ -141,8 +139,6 @@ the index operation based on the `_timestamp` mapping.
[[bulk-ttl]]
=== TTL
deprecated[2.0.0,The current `_ttl` implementation is deprecated and will be replaced with a different implementation in a future version]
Each bulk item can include the ttl value using the `_ttl`/`ttl` field.
It automatically follows the behavior of the index operation based on
the `_ttl` mapping.

View File

@@ -257,8 +257,6 @@ specified using the `routing` parameter.
[[index-timestamp]]
=== Timestamp
deprecated[2.0.0,The `_timestamp` field is deprecated. Instead, use a normal <<date,`date`>> field and set its value explicitly]
A document can be indexed with a `timestamp` associated with it. The
`timestamp` value of a document can be set using the `timestamp`
parameter. For example:
@@ -281,8 +279,6 @@ page>>.
[[index-ttl]]
=== TTL
deprecated[2.0.0,The current `_ttl` implementation is deprecated and will be replaced with a different implementation in a future version]
A document can be indexed with a `ttl` (time to live) associated with
it. Expired documents will be expunged automatically. The expiration

View File

@@ -81,8 +81,6 @@ omit :
[float]
==== Distributed frequencies
coming[2.0.0-beta1]
Setting `dfs` to `true` (default is `false`) will return the term statistics
or the field statistics of the entire index, rather than just the local shard. Use it
with caution as distributed frequencies can have a serious performance impact.
@@ -90,8 +88,6 @@ with caution as distributed frequencies can have a serious performance impact.
[float]
==== Terms Filtering
coming[2.0.0-beta1]
With the parameter `filter`, the terms returned could also be filtered based
on their tf-idf scores. This could be useful in order to find out a good
characteristic vector of a document. This feature works in a similar manner to

View File

@@ -1,8 +1,8 @@
[[elasticsearch-reference]]
= Elasticsearch Reference
:version: 2.0.0-beta1
:branch: 2.0
:version: 3.0.0-beta1
:branch: 3.0
:jdk: 1.8.0_25
:defguide: https://www.elastic.co/guide/en/elasticsearch/guide/current
:plugins: https://www.elastic.co/guide/en/elasticsearch/plugins/master

View File

@@ -16,8 +16,6 @@ curl -XGET 'localhost:9200/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
If the `text` parameter is provided as an array of strings, it is analyzed as a multi-valued field.
[source,js]
@@ -29,8 +27,6 @@ curl -XGET 'localhost:9200/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
Or by building a custom transient analyzer out of tokenizers,
token filters and char filters. Token filters can use the shorter 'filters'
parameter name:
@@ -53,8 +49,6 @@ curl -XGET 'localhost:9200/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
It can also run against a specific index:
[source,js]
@@ -78,8 +72,6 @@ curl -XGET 'localhost:9200/test/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
Also, the analyzer can be derived based on a field mapping, for example:
[source,js]
@@ -91,8 +83,6 @@ curl -XGET 'localhost:9200/test/_analyze' -d '
}'
--------------------------------------------------
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
This will cause the analysis to happen based on the analyzer configured in the
mapping for `obj1.field1` (or, if none is configured, the default index analyzer).

View File

@@ -1,8 +1,6 @@
[[mapping-parent-field]]
=== `_parent` field
added[2.0.0-beta1,The parent-child implementation has been completely rewritten. It is advisable to reindex any 1.x indices which use parent-child to take advantage of the new optimizations]
A parent-child relationship can be established between documents in the same
index by making one mapping type the parent of another:

View File

@@ -1,8 +1,6 @@
[[mapping-timestamp-field]]
=== `_timestamp` field
deprecated[2.0.0,The `_timestamp` field is deprecated. Instead, use a normal <<date,`date`>> field and set its value explicitly]
The `_timestamp` field, when enabled, allows a timestamp to be indexed and
stored with a document. The timestamp may be specified manually, generated
automatically, or set to a default value:

View File

@@ -1,8 +1,6 @@
[[mapping-ttl-field]]
=== `_ttl` field
deprecated[2.0.0,The current `_ttl` implementation is deprecated and will be replaced with a different implementation in a future version]
Some types of documents, such as session data or special offers, come with an
expiration date. The `_ttl` field allows you to specify the minimum time a
document should live, after which time the document is deleted automatically.
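The expiry rule can be sketched as follows (times are invented, and the real implementation purges expired documents in the background rather than checking on read):

```python
indexed_at = 1_000_000   # hypothetical index time, epoch seconds
ttl = 3_600              # configured minimum time to live, in seconds

def is_expired(now):
    # The document becomes eligible for automatic deletion once its
    # age exceeds the configured ttl.
    return now >= indexed_at + ttl
```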

View File

@@ -121,7 +121,7 @@ The following settings are supported:
using size value notation, e.g. `1g`, `10m`, `5k`. Defaults to `null` (unlimited chunk size).
`max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
`max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second.
`readonly`:: Makes repository read-only. coming[2.1.0] Defaults to `false`.
`readonly`:: Makes repository read-only. Defaults to `false`.
[float]
===== Read-only URL Repository
@@ -259,7 +259,7 @@ GET /_snapshot/my_backup/_all
-----------------------------------
// AUTOSENSE
coming[2.0.0-beta1] A currently running snapshot can be retrieved using the following command:
A currently running snapshot can be retrieved using the following command:
[source,sh]
-----------------------------------

View File

@@ -149,7 +149,7 @@ input, the other one for term selection and for query formation.
==== Document Input Parameters
[horizontal]
`like`:: coming[2.0.0-beta1]
`like`::
The only *required* parameter of the MLT query is `like`, which follows a
versatile syntax, in which the user can specify free form text and/or a single
or multiple documents (see examples above). The syntax to specify documents is
@@ -162,7 +162,7 @@ follows a similar syntax to the `per_field_analyzer` parameter of the
Additionally, to provide documents not necessarily present in the index,
<<docs-termvectors-artificial-doc,artificial documents>> are also supported.
`unlike`:: coming[2.0.0-beta1]
`unlike`::
The `unlike` parameter is used in conjunction with `like` in order not to
select terms found in a chosen set of documents. In other words, we could ask
for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax
@@ -172,10 +172,10 @@ is the same as `like`.
A list of fields to fetch and analyze the text from. Defaults to the `_all`
field for free text and to all possible fields for document inputs.
`like_text`:: deprecated[2.0.0-beta1,Replaced by `like`]
`like_text`::
The text to find documents like it.
`ids` or `docs`:: deprecated[2.0.0-beta1,Replaced by `like`]
`ids` or `docs`::
A list of documents following the same syntax as the <<docs-multi-get,Multi GET API>>.
[float]

View File

@@ -63,8 +63,6 @@ curl -XGET <1> 'localhost:9200/_search/scroll' <2> -d'
'
--------------------------------------------------
coming[2.0.0-beta1, body based parameters were added in 2.0.0]
<1> `GET` or `POST` can be used.
<2> The URL should not include the `index` or `type` name -- these
are specified in the original `search` request instead.
@@ -151,8 +149,6 @@ curl -XDELETE localhost:9200/_search/scroll -d '
}'
---------------------------------------
coming[2.0.0-beta1, Body based parameters were added in 2.0.0]
Multiple scroll IDs can be passed as an array:
[source,js]
@@ -163,8 +159,6 @@ curl -XDELETE localhost:9200/_search/scroll -d '
}'
---------------------------------------
coming[2.0.0-beta1, Body based parameters were added in 2.0.0]
All search contexts can be cleared with the `_all` parameter:
[source,js]