[DOCS] move all coming tags to added in master
parent 81a83aab22
commit 5bfea56457
@@ -1,7 +1,7 @@
 [[analysis-apostrophe-tokenfilter]]
 === Apostrophe Token Filter
 
-coming[1.3.0]
+added[1.3.0]
 
 The `apostrophe` token filter strips all characters after an apostrophe,
 including the apostrophe itself.
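The behaviour this section documents is simple enough to model directly. Below is a minimal Python sketch of what the `apostrophe` filter does to a token stream, per the description above; the function name is hypothetical, not part of any Elasticsearch API:

```python
def apostrophe_filter(tokens):
    """Strip everything from the first apostrophe onward, apostrophe included."""
    return [t.split("'", 1)[0] for t in tokens]

# Turkish suffixes after an apostrophe are removed:
print(apostrophe_filter(["Istanbul'a", "veya", "Istanbul'dan"]))
# -> ['Istanbul', 'veya', 'Istanbul']
```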
@@ -1,7 +1,7 @@
 [[analysis-classic-tokenfilter]]
 === Classic Token Filter
 
-coming[1.3.0]
+added[1.3.0]
 
 The `classic` token filter does optional post-processing of
 terms that are generated by the <<analysis-classic-tokenizer,`classic` tokenizer>>.
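The "optional post-processing" mentioned above refers, in Lucene's `ClassicFilter`, to removing English possessives and collapsing dotted acronyms. A simplified Python sketch of that behaviour (the function name and the acronym heuristic are assumptions for illustration, not the actual Lucene implementation):

```python
def classic_filter(tokens):
    out = []
    for t in tokens:
        if t.endswith("'s"):
            t = t[:-2]                 # drop the English possessive: cat's -> cat
        # collapse dotted acronyms (every dot-separated part is a single char):
        # I.B.M. -> IBM, but example.com is left alone
        if "." in t and all(len(p) <= 1 for p in t.split(".")):
            t = t.replace(".", "")
        out.append(t)
    return out
```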
@@ -4,7 +4,7 @@
 A token filter of type `lowercase` that normalizes token text to lower
 case.
 
-Lowercase token filter supports Greek, Irish coming[1.3.0], and Turkish lowercase token
+Lowercase token filter supports Greek, Irish added[1.3.0], and Turkish lowercase token
 filters through the `language` parameter. Below is a usage example in a
 custom analyzer
 
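Language-specific lowercasing matters because a plain `lower()` is wrong for some alphabets. Turkish, for example, has both a dotted and a dotless i. A minimal Python sketch of the Turkish case (function name hypothetical; this is an illustration of the rule, not the Lucene filter):

```python
def turkish_lowercase(text):
    # In Turkish, 'I' lowercases to dotless 'ı' and 'İ' to 'i'.
    # Map these explicitly before the generic lower(), which would
    # otherwise turn 'I' into 'i' and 'İ' into 'i' + combining dot.
    return text.replace("İ", "i").replace("I", "ı").lower()

print(turkish_lowercase("DİYARBAKIR"))  # -> diyarbakır
```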
@@ -11,19 +11,19 @@ http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/
 
 German::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html[`german_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html[`german_normalization`] added[1.3.0]
 
 Hindi::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizer.html[`hindi_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizer.html[`hindi_normalization`] added[1.3.0]
 
 Indic::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizer.html[`indic_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizer.html[`indic_normalization`] added[1.3.0]
 
 Kurdish (Sorani)::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html[`sorani_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html[`sorani_normalization`] added[1.3.0]
 
 Persian::
 
@@ -31,6 +31,6 @@ http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/
 
 Scandinavian::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`] coming[1.3.0],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`] added[1.3.0],
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] added[1.3.0]
 
@@ -65,15 +65,15 @@ http://snowball.tartarus.org/algorithms/danish/stemmer.html[*`danish`*]
 Dutch::
 
 http://snowball.tartarus.org/algorithms/dutch/stemmer.html[*`dutch`*],
-http://snowball.tartarus.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`] coming[1.3.0,Renamed from `kp`]
+http://snowball.tartarus.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`] added[1.3.0,Renamed from `kp`]
 
 English::
 
-http://snowball.tartarus.org/algorithms/porter/stemmer.html[*`english`*] coming[1.3.0,Returns the <<analysis-porterstem-tokenfilter,`porter_stem`>> instead of the <<analysis-snowball-tokenfilter,`english` Snowball token filter>>],
-http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`] coming[1.3.0,Returns the <<analysis-kstem-tokenfilter,`kstem` token filter>>],
+http://snowball.tartarus.org/algorithms/porter/stemmer.html[*`english`*] added[1.3.0,Returns the <<analysis-porterstem-tokenfilter,`porter_stem`>> instead of the <<analysis-snowball-tokenfilter,`english` Snowball token filter>>],
+http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`] added[1.3.0,Returns the <<analysis-kstem-tokenfilter,`kstem` token filter>>],
 http://www.medialab.tfe.umu.se/courses/mdm0506a/material/fulltext_ID%3D10049387%26PLACEBO%3DIE.pdf[`minimal_english`],
 http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html[`possessive_english`],
-http://snowball.tartarus.org/algorithms/english/stemmer.html[`porter2`] coming[1.3.0,Returns the <<analysis-snowball-tokenfilter,`english` Snowball token filter>> instead of the <<analysis-snowball-tokenfilter,`porter` Snowball token filter>>],
+http://snowball.tartarus.org/algorithms/english/stemmer.html[`porter2`] added[1.3.0,Returns the <<analysis-snowball-tokenfilter,`english` Snowball token filter>> instead of the <<analysis-snowball-tokenfilter,`porter` Snowball token filter>>],
 http://snowball.tartarus.org/algorithms/lovins/stemmer.html[`lovins`]
 
 Finnish::
@@ -89,8 +89,8 @@ http://dl.acm.org/citation.cfm?id=318984[`minimal_french`]
 
 Galician::
 
-http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*] coming[1.3.0],
-http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only) coming[1.3.0]
+http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*] added[1.3.0],
+http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only) added[1.3.0]
 
 German::
 
@@ -127,7 +127,7 @@ http://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*
 
 Kurdish (Sorani)::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html[*`sorani`*] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html[*`sorani`*] added[1.3.0]
 
 Latvian::
 
@@ -136,20 +136,20 @@ http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/
 Norwegian (Bokmål)::
 
 http://snowball.tartarus.org/algorithms/norwegian/stemmer.html[*`norwegian`*],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_norwegian`*] coming[1.3.0],
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_norwegian`*] added[1.3.0],
 http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html[`minimal_norwegian`]
 
 Norwegian (Nynorsk)::
 
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_nynorsk`*] coming[1.3.0],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_nynorsk`*] added[1.3.0],
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`] added[1.3.0]
 
 Portuguese::
 
 http://snowball.tartarus.org/algorithms/portuguese/stemmer.html[`portuguese`],
 http://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*],
 http://www.inf.ufrgs.br/\~buriol/papers/Orengo_CLEF07.pdf[`minimal_portuguese`],
-http://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] coming[1.3.0]
+http://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] added[1.3.0]
 
 Romanian::
 
@@ -1,7 +1,7 @@
 [[analysis-classic-tokenizer]]
 === Classic Tokenizer
 
-coming[1.3.0]
+added[1.3.0]
 
 A tokenizer of type `classic` providing grammar based tokenizer that is
 a good tokenizer for English language documents. This tokenizer has
@@ -1,7 +1,7 @@
 [[analysis-thai-tokenizer]]
 === Thai Tokenizer
 
-coming[1.3.0]
+added[1.3.0]
 
 A tokenizer of type `thai` that segments Thai text into words. This tokenizer
 uses the built-in Thai segmentation algorithm included with Java to divide
@@ -41,7 +41,7 @@ If the requested information wasn't stored in the index, it will be
 computed on the fly if possible. See <<mapping-types,type mapping>>
 for how to configure your index to store term vectors.
 
-coming[1.4.0,The ability to computed term vectors on the fly is only available from 1.4.0 onwards (see below)]
+added[1.4.0,The ability to computed term vectors on the fly is only available from 1.4.0 onwards (see below)]
 
 [WARNING]
 ======
@@ -226,7 +226,7 @@ Response:
 --------------------------------------------------
 
 [float]
-=== Example 2 coming[1.4.0]
+=== Example 2 added[1.4.0]
 
 Additionally, term vectors which are not explicitly stored in the index are automatically
 computed on the fly. The following request returns all information and statistics for the
@@ -100,7 +100,7 @@ settings API.
 [[disk]]
 === Disk-based Shard Allocation
 
-coming[1.3.0] disk based shard allocation is enabled from version 1.3.0 onward
+added[1.3.0] disk based shard allocation is enabled from version 1.3.0 onward
 
 Elasticsearch can be configured to prevent shard
 allocation on nodes depending on disk usage for the node. This
@@ -113,7 +113,7 @@ See <<vm-max-map-count>>
 
 [[default_fs]]
 [float]
-==== Hybrid MMap / NIO FS coming[1.3.0]
+==== Hybrid MMap / NIO FS added[1.3.0]
 
 The `default` type stores the shard index on the file system depending on
 the file type by mapping a file into memory (mmap) or using Java NIO. Currently
@@ -1,7 +1,7 @@
 [[mapping-field-names-field]]
 === `_field_names`
 
-coming[1.3.0]
+added[1.3.0]
 
 The `_field_names` field indexes the field names of a document, which can later
 be used to search for documents based on the fields that they contain typically
@@ -1,6 +1,6 @@
 [[mapping-transform]]
 == Transform
-coming[1.3.0]
+added[1.3.0]
 
 The document can be transformed before it is indexed by registering a
 script in the `transform` element of the mapping. The result of the
@@ -75,7 +75,7 @@ configure the election to handle cases of slow or congested networks
 (higher values assure less chance of failure). Once a node joins, it
 will send a join request to the master (`discovery.zen.join_timeout`)
 with a timeout defaulting at 20 times the ping timeout.
-coming[1.3.0,Previously defaulted to 10 times the ping timeout].
+added[1.3.0,Previously defaulted to 10 times the ping timeout].
 
 Nodes can be excluded from becoming a master by setting `node.master` to
 `false`. Note, once a node is a client node (`node.client` set to
@@ -43,7 +43,7 @@ once all `gateway.recover_after...nodes` conditions are met.
 The `gateway.expected_nodes` allows to set how many data and master
 eligible nodes are expected to be in the cluster, and once met, the
 `gateway.recover_after_time` is ignored and recovery starts.
-Setting `gateway.expected_nodes` also defaults `gateway.recovery_after_time` to `5m` coming[1.3.0, before `expected_nodes`
+Setting `gateway.expected_nodes` also defaults `gateway.recovery_after_time` to `5m` added[1.3.0, before `expected_nodes`
 required `recovery_after_time` to be set]. The `gateway.expected_data_nodes` and `gateway.expected_master_nodes`
 settings are also supported. For example setting:
 
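The interplay between `expected_nodes`, `recover_after_nodes`, and `recover_after_time` described in this hunk can be modelled with a small decision function. This is an illustrative Python sketch of the documented rules only (the function and parameter names are hypothetical, not Elasticsearch settings API calls):

```python
def should_start_recovery(nodes_present, elapsed_minutes,
                          recover_after_nodes, expected_nodes=None,
                          recover_after_time_minutes=None):
    """Sketch of the gateway recovery trigger: recover immediately once
    expected_nodes is met; otherwise wait for recover_after_time."""
    if nodes_present < recover_after_nodes:
        return False                        # preconditions not yet met
    if expected_nodes is not None:
        if nodes_present >= expected_nodes:
            return True                     # expected_nodes met: time limit ignored
        if recover_after_time_minutes is None:
            recover_after_time_minutes = 5  # defaults to 5m when expected_nodes is set
    return (recover_after_time_minutes is not None
            and elapsed_minutes >= recover_after_time_minutes)
```

For example, with `recover_after_nodes=3` and `expected_nodes=5`, recovery starts at once when 5 nodes join, or after the 5-minute default when only 4 have joined.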
@@ -189,7 +189,7 @@ should be restored as well as prevent global cluster state from being restored b
 <<search-multi-index-type,multi index syntax>>. The `rename_pattern` and `rename_replacement` options can be also used to
 rename index on restore using regular expression that supports referencing the original text as explained
 http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
-Set `include_aliases` to `false` to prevent aliases from being restored together with associated indices coming[1.3.0].
+Set `include_aliases` to `false` to prevent aliases from being restored together with associated indices added[1.3.0].
 
 [source,js]
 -----------------------------------
@@ -211,7 +211,7 @@ persistent settings are added to the existing persistent settings.
 [float]
 === Partial restore
 
-coming[1.3.0]
+added[1.3.0]
 
 By default, entire restore operation will fail if one or more indices participating in the operation don't have
 snapshots of all shards available. It can occur if some shards failed to snapshot for example. It is still possible to
@@ -64,7 +64,7 @@ next to the given cell.
 [float]
 ==== Caching
 
-coming[1.3.0]
+added[1.3.0]
 
 The result of the filter is not cached by default. The
 `_cache` parameter can be set to `true` to turn caching on.
@@ -45,7 +45,7 @@ The `has_child` filter also accepts a filter instead of a query:
 [float]
 ==== Min/Max Children
 
-coming[1.3.0]
+added[1.3.0]
 
 The `has_child` filter allows you to specify that a minimum and/or maximum
 number of children are required to match for the parent doc to be considered
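The min/max children constraint documented in these hunks is a simple range check on the per-parent count of matching child documents. A minimal Python sketch of that semantics (function name hypothetical; only illustrates the rule, not the query DSL):

```python
def parent_matches(matching_children, min_children=1, max_children=None):
    """A parent doc qualifies only if its count of matching children
    falls within [min_children, max_children]."""
    if matching_children < min_children:
        return False
    return max_children is None or matching_children <= max_children

# A parent with 2 matching children, requiring between 2 and 3:
print(parent_matches(2, min_children=2, max_children=3))  # -> True
```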
@@ -56,7 +56,7 @@ inside the `has_child` query:
 [float]
 ==== Min/Max Children
 
-coming[1.3.0]
+added[1.3.0]
 
 The `has_child` query allows you to specify that a minimum and/or maximum
 number of children are required to match for the parent doc to be considered
@@ -322,7 +322,7 @@ http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html#UNIX_LINES
 
 ==== Collect mode
 
-coming[1.3.0] Deferring calculation of child aggregations
+added[1.3.0] Deferring calculation of child aggregations
 
 For fields with many unique terms and a small number of required results it can be more efficient to delay the calculation
 of child aggregations until the top parent-level aggs have been pruned. Ordinarily, all branches of the aggregation tree
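The deferred ("breadth-first") collection described above can be sketched in a few lines of Python: count the parent terms first, prune to the top buckets, and only then run the expensive per-bucket work for the survivors. This is an illustrative model of the idea, not the aggregation framework itself (all names are hypothetical):

```python
from collections import Counter

def top_terms_breadth_first(docs, size):
    # Pass 1: cheap parent-level counting, then prune to the top `size` terms.
    counts = Counter(d["term"] for d in docs)
    top = [t for t, _ in counts.most_common(size)]
    # Pass 2: the (potentially expensive) child aggregation runs only
    # for the surviving buckets, not for every unique term.
    return {t: [d["value"] for d in docs if d["term"] == t] for t in top}
```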
@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-geobounds-aggregation]]
 === Geo Bounds Aggregation
 
-coming[1.3.0]
+added[1.3.0]
 
 A metric aggregation that computes the bounding box containing all geo_point values for a field.
 
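The bounding-box computation this aggregation performs reduces to taking min/max over latitudes and longitudes. A simplified Python sketch (ignores antimeridian wrapping; function name and output shape are illustrative, not the response format):

```python
def geo_bounds(points):
    """points: iterable of (lat, lon) pairs.
    Returns the top-left and bottom-right corners of the enclosing box."""
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    return {"top_left": {"lat": max(lats), "lon": min(lons)},
            "bottom_right": {"lat": min(lats), "lon": max(lons)}}
```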
@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-percentile-rank-aggregation]]
 === Percentile Ranks Aggregation
 
-coming[1.3.0]
+added[1.3.0]
 
 A `multi-value` metrics aggregation that calculates one or more percentile ranks
 over numeric values extracted from the aggregated documents. These values
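A percentile rank answers "what fraction of observed values fall at or below this value?". The production aggregation computes this approximately over large data sets; a minimal exact sketch in Python (function name hypothetical):

```python
def percentile_rank(values, target):
    """Percentage of observed values that are <= target."""
    below = sum(1 for v in values if v <= target)
    return 100.0 * below / len(values)

# Half of [1, 2, 3, 4] is at or below 2:
print(percentile_rank([1, 2, 3, 4], 2))  # -> 50.0
```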
@@ -1,7 +1,7 @@
 [[search-aggregations-metrics-top-hits-aggregation]]
 === Top hits Aggregation
 
-coming[1.3.0]
+added[1.3.0]
 
 A `top_hits` metric aggregator keeps track of the most relevant document being aggregated. This aggregator is intended
 to be used as a sub aggregator, so that the top matching documents can be aggregated per bucket.
|
|||
[[search-benchmark]]
|
||||
== Benchmark
|
||||
|
||||
coming[1.4.0]
|
||||
added[1.4.0]
|
||||
|
||||
.Experimental!
|
||||
[IMPORTANT]
|
||||
|
|