[DOCS] Removed outdated new/deprecated version notices

parent d5a47e597d
commit 393c28bee4
@@ -91,7 +91,6 @@ The hunspell token filter accepts four options:
 Configures the recursion level a
 stemmer can go into. Defaults to `2`. Some languages (for example czech)
 give better results when set to `1` or `0`, so you should test it out.
-(since 0.90.3)
 
 NOTE: As opposed to the snowball stemmers (which are algorithm based)
 this is a dictionary lookup based stemmer and therefore the quality of
@@ -9,8 +9,6 @@ subsequent stemmer will be indexed twice. Therefore, consider adding a
 `unique` filter with `only_on_same_position` set to `true` to drop
 unnecessary duplicates.
 
-Note: this is available from `0.90.0.Beta2` on.
-
 Here is an example:
 
 [source,js]
@@ -11,5 +11,3 @@ http://lucene.apache.org/core/4_3_1/analyzers-common/org/apache/lucene/analysis/
 or the
 http://lucene.apache.org/core/4_3_1/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizer.html[PersianNormalizer]
 documentation.
-
-*Note:* This filters are available since `0.90.2`
@@ -36,8 +36,7 @@ settings are: `ignore_case` (defaults to `false`), and `expand`
 The `tokenizer` parameter controls the tokenizers that will be used to
 tokenize the synonym, and defaults to the `whitespace` tokenizer.
 
-As of elasticsearch 0.17.9 two synonym formats are supported: Solr,
-WordNet.
+Two synonym formats are supported: Solr, WordNet.
 
 [float]
 ==== Solr synonyms
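For context on the hunk above, a minimal sketch of a synonym filter declaration with a custom tokenizer; the index name, filter name, and synonym rule are illustrative, not part of this commit:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/test' -d '{
    "settings" : {
        "analysis" : {
            "filter" : {
                "my_synonym" : {
                    "type" : "synonym",
                    "tokenizer" : "whitespace",
                    "synonyms" : [ "i-pod, i pod => ipod" ]
                }
            }
        }
    }
}'
--------------------------------------------------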
@@ -16,7 +16,7 @@ type:
 
 |`max_gram` |Maximum size in codepoints of a single n-gram |`2`.
 
-|`token_chars` |(Since `0.90.2`) Characters classes to keep in the
+|`token_chars` | Characters classes to keep in the
 tokens, Elasticsearch will split on characters that don't belong to any
 of these classes. |`[]` (Keep all characters)
 |=======================================================================
@@ -12,7 +12,7 @@ The following are settings that can be set for a `nGram` tokenizer type:
 
 |`max_gram` |Maximum size in codepoints of a single n-gram |`2`.
 
-|`token_chars` |(Since `0.90.2`) Characters classes to keep in the
+|`token_chars` |Characters classes to keep in the
 tokens, Elasticsearch will split on characters that don't belong to any
 of these classes. |`[]` (Keep all characters)
 |=======================================================================
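A sketch of an `nGram` tokenizer using the `token_chars` setting from the table above; the index and tokenizer names and the gram sizes are illustrative:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/test' -d '{
    "settings" : {
        "analysis" : {
            "tokenizer" : {
                "my_ngram" : {
                    "type" : "nGram",
                    "min_gram" : 2,
                    "max_gram" : 3,
                    "token_chars" : [ "letter", "digit" ]
                }
            }
        }
    }
}'
--------------------------------------------------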
@@ -83,7 +83,7 @@ The `all` flag can be set to return all the stats.
 [float]
 === Field data statistics
 
-From 0.90, you can get information about field data memory usage on node
+You can get information about field data memory usage on node
 level or on index level.
 
 [source,js]
@@ -119,7 +119,7 @@ There is a specific list of settings that can be updated, those include:
 `cluster.routing.allocation.exclude.*`::
     See <<modules-cluster>>.
 
-`cluster.routing.allocation.require.*` (from 0.90)::
+`cluster.routing.allocation.require.*`::
     See <<modules-cluster>>.
 
 [float]
@@ -177,10 +177,7 @@ There is a specific list of settings that can be updated, those include:
     See <<modules-indices>>
 
 `indices.recovery.max_bytes_per_sec`::
-    Since 0.90.1. See <<modules-indices>>
-
-`indices.recovery.max_size_per_sec`::
-    Deprecated since 0.90.1. See `max_bytes_per_sec` instead.
+    See <<modules-indices>>
 
 [float]
 ==== Store level throttling
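The recovery setting above is dynamically updatable; a hedged example of setting it through the cluster update settings API (the `50mb` value is illustrative):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient" : {
        "indices.recovery.max_bytes_per_sec" : "50mb"
    }
}'
--------------------------------------------------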
@@ -19,8 +19,8 @@ optional_source\n
 
 *NOTE*: the final line of data must end with a newline character `\n`.
 
-The possible actions are `index`, `create`, `delete` and since version
-`0.90.1` also `update`. `index` and `create` expect a source on the next
+The possible actions are `index`, `create`, `delete` and `update`.
+`index` and `create` expect a source on the next
 line, and have the same semantics as the `op_type` parameter to the
 standard index API (i.e. create will fail if a document with the same
 index and type exists already, whereas index will add or replace a
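A minimal bulk body covering the four actions named above; the index, type, ids, and fields are illustrative, and the body must end with a newline:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_bulk' --data-binary '
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
{ "create" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "doc" : { "field2" : "value2" } }
'
--------------------------------------------------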
@@ -82,17 +82,16 @@ extraction from _source, like `obj1.obj2`.
 [float]
 === Getting the _source directly
 
-Since version `0.90.1` there is a new rest end point that allows the
-source to be returned directly without any additional content around it.
-The get endpoint has the following structure:
-`{index}/{type}/{id}/_source`. Curl example:
+Use the `/{index}/{type}/{id}/_source` endpoint to get
+just the `_source` field of the document,
+without any additional content around it. For example:
 
 [source,js]
 --------------------------------------------------
 curl -XGET 'http://localhost:9200/twitter/tweet/1/_source'
 --------------------------------------------------
 
-Note, there is also a HEAD variant for the new _source endpoint. Curl
+Note, there is also a HEAD variant for the _source endpoint. Curl
 example:
 
 [source,js]
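Assuming the same twitter example used above, the HEAD variant would look like:

[source,js]
--------------------------------------------------
curl -XHEAD 'http://localhost:9200/twitter/tweet/1/_source'
--------------------------------------------------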
@@ -66,8 +66,7 @@ on the specific index settings).
 
 Automatic index creation can include a pattern based white/black list,
 for example, set `action.auto_create_index` to `+aaa*,-bbb*,+ccc*,-*` (+
-meaning allowed, and - meaning disallowed). Note, this feature is
-available since 0.20.
+meaning allowed, and - meaning disallowed).
 
 [float]
 === Versioning
@@ -6,7 +6,7 @@ The operation gets the document (collocated with the shard) from the
 index, runs the script (with optional script language and parameters),
 and index back the result (also allows to delete, or ignore the
 operation). It uses versioning to make sure no updates have happened
-during the "get" and "reindex". (available from `0.19` onwards).
+during the "get" and "reindex".
 
 Note, this operation still means full reindex of the document, it just
 removes some network roundtrips and reduces chances of version conflicts
@@ -92,7 +92,7 @@ ctx._source.tags.contains(tag) ? (ctx.op = \"none\") : ctx._source.tags += tag
 if (ctx._source.tags.contains(tag)) { ctx.op = \"none\" } else { ctx._source.tags += tag }
 --------------------------------------------------
 
-The update API also support passing a partial document (since 0.20),
+The update API also support passing a partial document,
 which will be merged into the existing document (simple recursive merge,
 inner merging of objects, replacing core "keys/values" and arrays). For
 example:
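Following the update endpoint used in this file, a sketch of the partial-document form (the field name and value are illustrative):

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
    "doc" : {
        "name" : "new_name"
    }
}'
--------------------------------------------------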
@@ -109,7 +109,7 @@ curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
 If both `doc` and `script` is specified, then `doc` is ignored. Best is
 to put your field pairs of the partial document in the script itself.
 
-There is also support for `upsert` (since 0.20). If the document does
+There is also support for `upsert`. If the document does
 not already exists, the content of the `upsert` element will be used to
 index the fresh doc:
 
@@ -126,7 +126,7 @@ curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
 }'
 --------------------------------------------------
 
-Last it also supports `doc_as_upsert` (since 0.90.2). So that the
+Last it also supports `doc_as_upsert`. So that the
 provided document will be inserted if the document does not already
 exist. This will reduce the amount of data that needs to be sent to
 elasticsearch.
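A sketch combining a partial `doc` with `doc_as_upsert` (field values illustrative):

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
    "doc" : {
        "name" : "new_name"
    },
    "doc_as_upsert" : true
}'
--------------------------------------------------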
@@ -164,8 +164,8 @@ including:
 so that the updated document appears in search results
 immediately.
 
-`fields`:: return the relevant fields from the document updated
-(since 0.20). Support `_source` to return the full updated
+`fields`:: return the relevant fields from the updated document.
+Support `_source` to return the full updated
 source.
 
@@ -36,7 +36,7 @@ curl -XPUT localhost:9200/test/_settings -d '{
 }'
 --------------------------------------------------
 
-From version 0.90, `index.routing.allocation.require.*` can be used to
+`index.routing.allocation.require.*` can be used to
 specify a number of rules, all of which MUST match in order for a shard
 to be allocated to a node. This is in contrast to `include` which will
 include a node if ANY rule matches.
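A hedged example of `require` rules on the same settings endpoint; the attribute names and values are illustrative:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/test/_settings' -d '{
    "index.routing.allocation.require.tag" : "value1",
    "index.routing.allocation.require.group" : "group1"
}'
--------------------------------------------------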
@@ -10,8 +10,6 @@ Configuring custom postings formats is an expert feature and most likely
 using the builtin postings formats will suite your needs as is described
 in the <<mapping-core-types,mapping section>>
 
-Codecs are available in Elasticsearch from version `0.90.0.beta1`.
-
 [float]
 === Configuring a custom postings format
 
@@ -7,7 +7,7 @@ document based access to those values. The field data cache can be
 expensive to build for a field, so its recommended to have enough memory
 to allocate it, and to keep it loaded.
 
-From version 0.90 onwards, the amount of memory used for the field
+The amount of memory used for the field
 data cache can be controlled using `indices.fielddata.cache.size`. Note:
 reloading the field data which does not fit into your cache will be expensive
 and perform poorly.
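The setting named above is a node-level value; a minimal sketch of an `elasticsearch.yml` entry, with an illustrative size:

[source,yaml]
--------------------------------------------------
indices.fielddata.cache.size: 2gb
--------------------------------------------------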
@@ -117,24 +117,6 @@ The `frequency` and `regex` filters can be combined:
 }
 --------------------------------------------------
 
-[float]
-=== Settings before v0.90
-
-[cols="<,<",options="header",]
-|=======================================================================
-|Setting |Description
-|`index.cache.field.type` |The default type for the field data cache is
-`resident` (because of the cost of rebuilding it). Other types include
-`soft`
-
-|`index.cache.field.max_size` |The max size (count, not byte size) of
-the cache (per search segment in a shard). Defaults to not set (`-1`).
-
-|`index.cache.field.expire` |A time based setting that expires filters
-after a certain time of inactivity. Defaults to `-1`. For example, can
-be set to `5m` for a 5 minute expiry.
-|=======================================================================
-
 [float]
 === Monitoring field data
 
@@ -9,8 +9,6 @@ Configuring a custom similarity is considered a expert feature and the
 builtin similarities are most likely sufficient as is described in the
 <<mapping-core-types,mapping section>>
 
-Configuring similarities is a `0.90.0.Beta1` feature.
-
 [float]
 === Configuring a similarity
 
@@ -18,38 +18,10 @@ heap space* using the "Memory" (see below) storage type. It translates
 to the fact that there is no need for extra large JVM heaps (with their
 own consequences) for storing the index in memory.
 
-[float]
-=== Store Level Compression
-
-*From version 0.90 onwards, store compression is always enabled.*
-
-For versions 0.19.5 to 0.20:
-
-In the mapping, one can configure the `_source` field to be compressed.
-The problem with it is the fact that small documents don't end up
-compressing well, as several documents compressed in a single
-compression "block" will provide a considerable better compression
-ratio. This version introduces the ability to compress stored fields
-using the `index.store.compress.stored` setting, as well as term vector
-using the `index.store.compress.tv` setting.
-
-The settings can be set on the index level, and are dynamic, allowing to
-change them using the index update settings API. elasticsearch can
-handle mixed stored / non stored cases. This allows, for example, to
-enable compression at a later stage in the index lifecycle, and optimize
-the index to make use of it (generating new segments that use
-compression).
-
-Best compression, compared to _source level compression, will mainly
-happen when indexing smaller documents (less than 64k). The price on the
-other hand is the fact that for each doc returned, a block will need to
-be decompressed (its fast though) in order to extract the document data.
-
 [float]
 === Store Level Throttling
 
-(0.19.5 and above).
-
 The way Lucene, the IR library elasticsearch uses under the covers,
 works is by creating immutable segments (up to deletes) and constantly
 merging them (the merge policy settings allow to control how those
@@ -66,7 +38,7 @@ node, the merge process won't pass the specific setting bytes per
 second. It can be set by setting `indices.store.throttle.type` to
 `merge`, and setting `indices.store.throttle.max_bytes_per_sec` to
 something like `5mb`. The node level settings can be changed dynamically
-using the cluster update settings API. Since 0.90.1 the default is set
+using the cluster update settings API. The default is set
 to `20mb` with type `merge`.
 
 If specific index level configuration is needed, regardless of the node
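A hedged example of setting the throttle values mentioned above via the cluster update settings API, reusing the `merge` type and `5mb` figure from the text:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "persistent" : {
        "indices.store.throttle.type" : "merge",
        "indices.store.throttle.max_bytes_per_sec" : "5mb"
    }
}'
--------------------------------------------------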
@@ -152,8 +152,7 @@ curl -XGET 'http://localhost:9200/alias2/_search?q=user:kimchy&routing=2,3'
 [float]
 === Add a single index alias
 
-From version `0.90.1` there is an api to add a single index alias,
-options:
+There is also an api to add a single index alias, with options:
 
 [horizontal]
 `index`:: The index to alias refers to. This is a required option.
@@ -190,8 +189,7 @@ curl -XPUT 'localhost:9200/users/_alias/user_12' -d '{
 [float]
 === Delete a single index alias
 
-From version `0.90.1` there is an api to delete a single index alias,
-options:
+The API to delete a single index alias has options:
 
 [horizontal]
 `index`:: The index the alias is in, the needs to be deleted. This is
@@ -208,7 +206,7 @@ curl -XDELETE 'localhost:9200/users/_alias/user_12'
 [float]
 === Retrieving existing aliases
 
-The get index alias api (Available since `0.90.1`) allows to filter by
+The get index alias api allows to filter by
 alias name and index name. This api redirects to the master and fetches
 the requested index aliases, if available. This api only serialises the
 found index aliases.
@@ -336,16 +334,3 @@ curl -XHEAD 'localhost:9200/_alias/2013'
 curl -XHEAD 'localhost:9200/_alias/2013_01*'
 curl -XHEAD 'localhost:9200/users/_alias/*'
 --------------------------------------------------
-
-[float]
-=== Pre 0.90.1 way of getting index aliases
-
-Aliases can be retrieved using the get aliases API, which can either
-return all indices with all aliases, or just for specific indices:
-
-[source,js]
---------------------------------------------------
-curl -XGET 'localhost:9200/test/_aliases'
-curl -XGET 'localhost:9200/test1,test2/_aliases'
-curl -XGET 'localhost:9200/_aliases'
---------------------------------------------------
@@ -1,8 +1,7 @@
 [[indices-types-exists]]
 == Types Exists
 
-Used to check if a type/types exists in an index/indices (available
-since 0.20).
+Used to check if a type/types exists in an index/indices.
 
 [source,js]
 --------------------------------------------------
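Assuming a twitter index with a tweet type, the check might look like the following, where a 200 response means the type exists and 404 that it does not:

[source,js]
--------------------------------------------------
curl -XHEAD 'localhost:9200/twitter/tweet'
--------------------------------------------------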
@@ -4,8 +4,7 @@
 Index warming allows to run registered search requests to warm up the
 index before it is available for search. With the near real time aspect
 of search, cold data (segments) will be warmed up before they become
-available for search. This feature is available from version 0.20
-onwards.
+available for search.
 
 Warmup searches typically include requests that require heavy loading of
 data, such as faceting or sorting on specific fields. The warmup APIs
@@ -22,11 +22,6 @@ using:
 }
 --------------------------------------------------
 
-In order to maintain backward compatibility, a node level setting
-`index.mapping._id.indexed` can be set to `true` to make sure that the
-id is indexed when upgrading to `0.16`, though it's recommended to not
-index the id.
-
 The `_id` mapping can also be associated with a `path` that will be used
 to extract the id from a different location in the source document. For
 example, having the following mapping:
@@ -21,30 +21,6 @@ example:
 }
 --------------------------------------------------
 
-[float]
-==== Compression
-
-*From version 0.90 onwards, all stored fields (including `_source`) are
-always compressed.*
-
-For versions before 0.90:
-
-The source field can be compressed (LZF) when stored in the index. This
-can greatly reduce the index size, as well as possibly improving
-performance (when decompression overhead is better than loading a bigger
-source from disk). The code takes special care to decompress the source
-only when needed, for example decompressing it directly into the REST
-stream of a result.
-
-In order to enable compression, the `compress` option should be set to
-`true`. By default it is set to `false`. Note, this can be changed on an
-existing index, as a mix of compressed and uncompressed sources is
-supported.
-
-Moreover, a `compress_threshold` can be set to control when the source
-will be compressed. It accepts a byte size value (for example `100b`,
-`10kb`). Note, `compress` should be set to `true`.
-
 [float]
 ==== Includes / Excludes
 
@@ -100,16 +100,12 @@ all.
 to `false` for `analyzed` fields, and to `true` for `not_analyzed`
 fields.
 
-|`omit_term_freq_and_positions` |Boolean value if term freq and
-positions should be omitted. Defaults to `false`. Deprecated since 0.20,
-see `index_options`.
-
-|`index_options` |Available since 0.20. Allows to set the indexing
+|`index_options` | Allows to set the indexing
 options, possible values are `docs` (only doc numbers are indexed),
 `freqs` (doc numbers and term frequencies), and `positions` (doc
 numbers, term frequencies and positions). Defaults to `positions` for
-`analyzed` fields, and to `docs` for `not_analyzed` fields. Since 0.90
-it is also possible to set it to `offsets` (doc numbers, term
+`analyzed` fields, and to `docs` for `not_analyzed` fields. It
+is also possible to set it to `offsets` (doc numbers, term
 frequencies, positions and offsets).
 
 |`analyzer` |The analyzer used to analyze the text contents when
@@ -128,7 +124,6 @@ defaults to `true` or to the parent `object` type setting.
 
 |`ignore_above` |The analyzer will ignore strings larger than this size.
 Useful for generic `not_analyzed` fields that should ignore long text.
-(since @0.19.9).
 
 |`position_offset_gap` |Position increment gap between field instances
 with the same field name. Defaults to 0.
@@ -212,7 +207,7 @@ enabled). If `index` is set to `no` this defaults to `false`, otherwise,
 defaults to `true` or to the parent `object` type setting.
 
 |`ignore_malformed` |Ignored a malformed number. Defaults to `false`.
-(Since @0.19.9).
 
 |=======================================================================
 
 [float]
@@ -276,7 +271,7 @@ enabled). If `index` is set to `no` this defaults to `false`, otherwise,
 defaults to `true` or to the parent `object` type setting.
 
 |`ignore_malformed` |Ignored a malformed number. Defaults to `false`.
-(Since @0.19.9).
 
 |=======================================================================
 
 [float]
@@ -402,9 +397,8 @@ to reload the fielddata using the new filters.
 
 Posting formats define how fields are written into the index and how
 fields are represented into memory. Posting formats can be defined per
-field via the `postings_format` option. Postings format are configurable
-since version `0.90.0.Beta1`. Elasticsearch has several builtin
-formats:
+field via the `postings_format` option. Postings format are configurable.
+Elasticsearch has several builtin formats:
 
 `direct`::
     A postings format that uses disk-based storage but loads
@@ -463,8 +457,7 @@ information.
 [float]
 ==== Similarity
 
-From version `0.90.Beta1` Elasticsearch includes changes from Lucene 4
-that allows you to configure a similarity (scoring algorithm) per field.
+Elasticsearch allows you to configure a similarity (scoring algorithm) per field.
 Allowing users a simpler extension beyond the usual TF/IDF algorithm. As
 part of this, new algorithms have been added including BM25. Also as
 part of the changes, it is now possible to define a Similarity per
@@ -17,11 +17,6 @@ http://www.vividsolutions.com/jts/jtshome.htm[JTS], both of which are
 optional dependencies. Consequently you must add Spatial4J v0.3 and JTS
 v1.12 to your classpath in order to use this type.
 
-Note, the implementation of geo_shape was modified in an API breaking
-way in 0.90. Implementations prior to this version had significant
-issues and users are recommended to update to the latest version of
-Elasticsearch if they wish to use the geo_shape functionality.
-
 [float]
 ==== Mapping Options
 
@@ -11,8 +11,6 @@ include::modules/http.asciidoc[]
 
 include::modules/indices.asciidoc[]
 
-include::modules/jmx.asciidoc[]
-
 include::modules/memcached.asciidoc[]
 
 include::modules/network.asciidoc[]
@@ -177,7 +177,7 @@ curl -XPUT localhost:9200/test/_settings -d '{
 }'
 --------------------------------------------------
 
-From version 0.90, `index.routing.allocation.require.*` can be used to
+`index.routing.allocation.require.*` can be used to
 specify a number of rules, all of which MUST match in order for a shard
 to be allocated to a node. This is in contrast to `include` which will
 include a node if ANY rule matches.
@@ -68,9 +68,7 @@ As part of the initial ping process a master of the cluster is either
 elected or joined to. This is done automatically. The
 `discovery.zen.ping_timeout` (which defaults to `3s`) allows to
 configure the election to handle cases of slow or congested networks
-(higher values assure less chance of failure). Note, this setting was
-changed from 0.15.1 onwards, prior it was called
-`discovery.zen.initial_ping_timeout`.
+(higher values assure less chance of failure).
 
 Nodes can be excluded from becoming a master by setting `node.master` to
 `false`. Note, once a node is a client node (`node.client` set to
@@ -56,10 +56,7 @@ The following settings can be set to manage recovery policy:
     defaults to `true`.
 
 `indices.recovery.max_bytes_per_sec`::
-    since 0.90.1, defaults to `20mb`.
-
-`indices.recovery.max_size_per_sec`::
-    deprecated from 0.90.1. Replaced by `indices.recovery.max_bytes_per_sec`.
+    defaults to `20mb`.
 
 [float]
 === Store level throttling
@@ -1,34 +0,0 @@
-[[modules-jmx]]
-== JMX
-
-[float]
-=== REMOVED AS OF v0.90
-
-Use the stats APIs instead.
-
-The JMX module exposes node information through
-http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/[JMX].
-JMX can be used by either
-http://en.wikipedia.org/wiki/JConsole[jconsole] or
-http://en.wikipedia.org/wiki/VisualVM[VisualVM].
-
-Exposed JMX data include both node level information, as well as
-instantiated index and shard on specific node. This is a work in
-progress with each version exposing more information.
-
-[float]
-=== jmx.domain
-
-The domain under which the JMX will register under can be set using
-`jmx.domain` setting. It defaults to `{elasticsearch}`.
-
-[float]
-=== jmx.create_connector
-
-An RMI connector can be started to accept JMX requests. This can be
-enabled by setting `jmx.create_connector` to `true`. An RMI connector
-does come with its own overhead, make sure you really need it.
-
-When an RMI connector is created, the `jmx.port` setting provides a port
-range setting for the ports the rmi connector can open on. By default,
-it is set to `9400-9500`.
@@ -17,8 +17,14 @@ Installing plugins can either be done manually by placing them under the
 be found under the https://github.com/elasticsearch[elasticsearch]
 organization in GitHub, starting with `elasticsearch-`.
 
-Starting from 0.90.2, installing plugins typically take the form of
-`plugin --install <org>/<user/component>/<version>`. The plugins will be
+Installing plugins typically take the following form:
+
+[source,shell]
+-----------------------------------
+plugin --install <org>/<user/component>/<version>
+-----------------------------------
+
+The plugins will be
 automatically downloaded in this case from `download.elasticsearch.org`,
 and in case they don't exist there, from maven (central and sonatype).
 
@@ -26,17 +32,16 @@ Note that when the plugin is located in maven central or sonatype
 repository, `<org>` is the artifact `groupId` and `<user/component>` is
 the `artifactId`.
 
-For prior version, the older form is
-`plugin -install <org>/<user/component>/<version>`
-
 A plugin can also be installed directly by specifying the URL for it,
 for example:
-`bin/plugin --url file://path/to/plugin --install plugin-name` or
-`bin/plugin -url file://path/to/plugin -install plugin-name` for older
-version.
-
-Starting from 0.90.2, for more information about plugins, you can run
-`bin/plugin -h`.
+
+[source,shell]
+-----------------------------------
+bin/plugin --url file://path/to/plugin --install plugin-name
+-----------------------------------
+
+You can run `bin/plugin -h`.
 
 [float]
 ==== Site Plugins
 
@@ -56,13 +61,8 @@ running:
 
 [source,js]
 --------------------------------------------------
-# From 0.90.2
 bin/plugin --install mobz/elasticsearch-head
 bin/plugin --install lukas-vlcek/bigdesk
-
-# From a prior version
-bin/plugin -install mobz/elasticsearch-head
-bin/plugin -install lukas-vlcek/bigdesk
 --------------------------------------------------
 
 Will install both of those site plugins, with `elasticsearch-head`
@@ -7,29 +7,28 @@ pools, but the important ones include:
 
 [horizontal]
 `index`::
-    For index/delete operations, defaults to `fixed` type since
-    `0.90.0`, size `# of available processors`. (previously type `cached`)
+    For index/delete operations, defaults to `fixed`,
+    size `# of available processors`.
 
 `search`::
-    For count/search operations, defaults to `fixed` type since
-    `0.90.0`, size `3x # of available processors`. (previously type
-    `cached`)
+    For count/search operations, defaults to `fixed`,
+    size `3x # of available processors`.
 
 `get`::
-    For get operations, defaults to `fixed` type since `0.90.0`,
-    size `# of available processors`. (previously type `cached`)
+    For get operations, defaults to `fixed`
+    size `# of available processors`.
 
 `bulk`::
-    For bulk operations, defaults to `fixed` type since `0.90.0`,
-    size `# of available processors`. (previously type `cached`)
+    For bulk operations, defaults to `fixed`
+    size `# of available processors`.
 
 `warmer`::
-    For segment warm-up operations, defaults to `scaling` since
-    `0.90.0` with a `5m` keep-alive. (previously type `cached`)
+    For segment warm-up operations, defaults to `scaling`
+    with a `5m` keep-alive.
 
 `refresh`::
-    For refresh operations, defaults to `scaling` since
-    `0.90.0` with a `5m` keep-alive. (previously type `cached`)
+    For refresh operations, defaults to `scaling`
+    with a `5m` keep-alive.
 
 Changing a specific thread pool can be done by setting its type and
 specific type parameters, for example, changing the `index` thread pool
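A sketch of tuning one of these pools in `elasticsearch.yml`; the size and queue values are illustrative:

[source,yaml]
--------------------------------------------------
threadpool:
    index:
        type: fixed
        size: 30
        queue_size: 1000
--------------------------------------------------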
@@ -119,19 +119,3 @@ can contain 10s-100s of coordinates and any one differing means a new
 shape, it may make sense to only using caching when you are sure that
 the shapes will remain reasonably static.
 
-[float]
-==== Compatibility with older versions
-
-Elasticsearch 0.90 changed the geo_shape implementation in a way that is
-not compatible. Prior to this version, there was a required `relation`
-field on queries and filter queries that indicated the relation of the
-query shape to the indexed shapes. Support for this was implemented in
-Elasticsearch and was poorly aligned with the underlying Lucene
-implementation, which has no notion of a relation. From 0.90, this field
-defaults to its only supported value: `intersects`. The other values of
-`contains`, `within`, `disjoint` are no longer supported. By using e.g.
-a bool filter, one can easily emulate `disjoint`. Given the imprecise
-accuracy (see
-<<mapping-geo-shape-type,geo_shape Mapping>>),
-`within` and `contains` were always somewhat problematic and
-`intersects` is generally good enough.
@@ -7,9 +7,6 @@ type. This filter return child documents which associated parents have
 matched. For the rest `has_parent` filter has the same options and works
 in the same manner as the `has_child` filter.
 
-The `has_parent` filter is available from version `0.19.10`. This is an
-experimental filter.
-
 [float]
 ==== Filter example
 
@@ -90,8 +90,6 @@ Potentially the amount of user ids specified in the terms filter can be
 a lot. In this scenario it makes sense to use the terms filter's terms
 lookup mechanism.
 
-The terms lookup mechanism is supported from version `0.90.0.Beta1`.
-
 The terms lookup mechanism supports the following options:
 
 [horizontal]
@@ -82,8 +82,6 @@ include::queries/top-children-query.asciidoc[]
 
 include::queries/wildcard-query.asciidoc[]
 
-include::queries/text-query.asciidoc[]
-
 include::queries/minimum-should-match.asciidoc[]
 
 include::queries/multi-term-rewrite.asciidoc[]
@@ -47,20 +47,3 @@ Currently Elasticsearch does not have any notion of geo shape relevancy,
 consequently the Query internally uses a `constant_score` Query which
 wraps a <<query-dsl-geo-shape-filter,geo_shape
 filter>>.
-
-[float]
-==== Compatibility with older versions
-
-Elasticsearch 0.90 changed the geo_shape implementation in a way that is
-not compatible. Prior to this version, there was a required `relation`
-field on queries and filter queries that indicated the relation of the
-query shape to the indexed shapes. Support for this was implemented in
-Elasticsearch and was poorly aligned with the underlying Lucene
-implementation, which has no notion of a relation. From 0.90, this field
-defaults to its only supported value: `intersects`. The other values of
-`contains`, `within`, `disjoint` are no longer supported. By using e.g.
-a bool filter, one can easily emulate `disjoint`. Given the imprecise
-accuracy (see
-<<mapping-geo-shape-type,geo_shape Mapping>>),
-`within` and `contains` were always somewhat problematic and
-`intersects` is generally good enough.
@@ -30,7 +30,7 @@ query the `total_hits` is always correct.
 [float]
 ==== Scoring capabilities
 
-The `has_child` also has scoring support from version `0.20.2`. The
+The `has_child` also has scoring support. The
 supported score types are `max`, `sum`, `avg` or `none`. The default is
 `none` and yields the same behaviour as in previous versions. If the
 score type is set to another value than `none`, the scores of all the
@@ -53,30 +53,6 @@ inside the `has_child` query:
 }
 --------------------------------------------------
 
-[float]
-==== Scope
-
-The `_scope` support has been removed from version `0.90.beta1`. See:
-https://github.com/elasticsearch/elasticsearch/issues/2606
-
-A `_scope` can be defined on the filter allowing to run facets on the
-same scope name that will work against the child documents. For example:
-
-[source,js]
---------------------------------------------------
-{
-    "has_child" : {
-        "_scope" : "my_scope",
-        "type" : "blog_tag",
-        "query" : {
-            "term" : {
-                "tag" : "something"
-            }
-        }
-    }
-}
---------------------------------------------------
-
 [float]
 ==== Memory Considerations
 
@@ -6,8 +6,7 @@ The `has_parent` query works the same as the
 filter, by automatically wrapping the filter with a constant_score (when
 using the default score type). It has the same syntax as the
 <<query-dsl-has-parent-filter,has_parent>>
-filter. This query is experimental and is available from version
-`0.19.10`.
+filter.
 
 [source,js]
 --------------------------------------------------
@@ -26,7 +25,7 @@ filter. This query is experimental and is available from version
 [float]
 ==== Scoring capabilities
 
-The `has_parent` also has scoring support from version `0.20.2`. The
+The `has_parent` also has scoring support. The
 supported score types are `score` or `none`. The default is `none` and
 this ignores the score from the parent document. The score is in this
 case equal to the boost on the `has_parent` query (Defaults to 1). If
@@ -50,31 +49,6 @@ matching parent document. The score type can be specified with the
 }
 --------------------------------------------------
 
-[float]
-==== Scope
-
-The `_scope` support has been removed from version `0.90.beta1`. See:
-https://github.com/elasticsearch/elasticsearch/issues/2606
-
-A `_scope` can be defined on the filter allowing to run facets on the
-same scope name that will work against the parent documents. For
-example:
-
-[source,js]
---------------------------------------------------
-{
-    "has_parent" : {
-        "_scope" : "my_scope",
-        "parent_type" : "blog",
-        "query" : {
-            "term" : {
-                "tag" : "something"
-            }
-        }
-    }
-}
---------------------------------------------------
-
 [float]
 ==== Memory Considerations
 
@@ -58,8 +58,7 @@ change in structure, `message` is the field name):
 }
 --------------------------------------------------
 
-zero_terms_query
-
+.zero_terms_query
 If the analyzer used removes all tokens in a query like a `stop` filter
 does, the default behavior is to match no documents at all. In order to
 change that the `zero_terms_query` option can be used, which accepts
@@ -78,9 +77,8 @@ change that the `zero_terms_query` option can be used, which accepts
 }
 --------------------------------------------------
 
-cutoff_frequency
-
-Since `0.90.0` match query supports a `cutoff_frequency` that allows
+.cutoff_frequency
+The match query supports a `cutoff_frequency` that allows
 specifying an absolute or relative document frequency where high
 frequent terms are moved into an optional subquery and are only scored
 if one of the low frequent (below the cutoff) terms in the case of an
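A hedged sketch of a match query using `cutoff_frequency`; the field, text, and the `0.001` value are illustrative:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_search' -d '{
    "query" : {
        "match" : {
            "message" : {
                "query" : "to be or not to be",
                "cutoff_frequency" : 0.001
            }
        }
    }
}'
--------------------------------------------------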
@@ -70,7 +70,7 @@ in the resulting boolean query should match. It can be an absolute value
 both>>.
 
 |`lenient` |If set to `true` will cause format based failures (like
-providing text to a numeric field) to be ignored. (since 0.19.4).
+providing text to a numeric field) to be ignored.
 |=======================================================================
 
 When a multi term query is being generated, one can control how it gets
@@ -128,7 +128,7 @@ search on all "city" fields:
 
 Another option is to provide the wildcard fields search in the query
 string itself (properly escaping the `*` sign), for example:
-`city.\*:something`. (since 0.19.4).
+`city.\*:something`.
 
 When running the `query_string` query against multiple fields, the
 following additional parameters are allowed:
@@ -28,5 +28,3 @@ A boost can also be associated with the query:
 }
 }
 --------------------------------------------------
-
-The `span_multi` query is supported from version `0.90.1`
@@ -1,171 +0,0 @@
-[[query-dsl-text-query]]
-=== Text Query
-
-`text` query has been deprecated (effectively renamed) to `match` query
-since `0.19.9`, please use it. `text` is still supported.
-
-A family of `text` queries that accept text, analyzes it, and constructs
-a query out of it. For example:
-
-[source,js]
---------------------------------------------------
-{
-    "text" : {
-        "message" : "this is a test"
-    }
-}
---------------------------------------------------
-
-Note, even though the name is text, it also supports exact matching
-(`term` like) on numeric values and dates.
-
-Note, `message` is the name of a field, you can substitute the name of
-any field (including `_all`) instead.
-
-[float]
-[float]
-==== Types of Text Queries
-
-[float]
-[float]
-===== boolean
-
-The default `text` query is of type `boolean`. It means that the text
-provided is analyzed and the analysis process constructs a boolean query
-from the provided text. The `operator` flag can be set to `or` or `and`
-to control the boolean clauses (defaults to `or`).
-
-The `analyzer` can be set to control which analyzer will perform the
-analysis process on the text. It default to the field explicit mapping
-definition, or the default search analyzer.
-
-`fuzziness` can be set to a value (depending on the relevant type, for
-string types it should be a value between `0.0` and `1.0`) to constructs
-fuzzy queries for each term analyzed. The `prefix_length` and
-`max_expansions` can be set in this case to control the fuzzy process.
-
-Here is an example when providing additional parameters (note the slight
-change in structure, `message` is the field name):
-
-[source,js]
---------------------------------------------------
-{
-    "text" : {
-        "message" : {
-            "query" : "this is a test",
-            "operator" : "and"
-        }
-    }
-}
---------------------------------------------------
-
-[float]
-[float]
-===== phrase
-
-The `text_phrase` query analyzes the text and creates a `phrase` query
-out of the analyzed text. For example:
-
-[source,js]
---------------------------------------------------
-{
-    "text_phrase" : {
-        "message" : "this is a test"
-    }
-}
---------------------------------------------------
-
-Since `text_phrase` is only a `type` of a `text` query, it can also be
-used in the following manner:
-
-[source,js]
---------------------------------------------------
-{
-    "text" : {
-        "message" : {
-            "query" : "this is a test",
-            "type" : "phrase"
-        }
-    }
-}
---------------------------------------------------
-
-A phrase query maintains order of the terms up to a configurable `slop`
-(which defaults to 0).
-
-The `analyzer` can be set to control which analyzer will perform the
-analysis process on the text. It default to the field explicit mapping
-definition, or the default search analyzer, for example:
-
-[source,js]
---------------------------------------------------
-{
-    "text_phrase" : {
-        "message" : {
-            "query" : "this is a test",
-            "analyzer" : "my_analyzer"
-        }
-    }
-}
---------------------------------------------------
-
-[float]
-[float]
-===== text_phrase_prefix
-
-The `text_phrase_prefix` is the same as `text_phrase`, expect it allows
-for prefix matches on the last term in the text. For example:
-
-[source,js]
---------------------------------------------------
-{
-    "text_phrase_prefix" : {
-        "message" : "this is a test"
-    }
-}
---------------------------------------------------
-
-Or:
-
-[source,js]
---------------------------------------------------
-{
-    "text" : {
-        "message" : {
-            "query" : "this is a test",
-            "type" : "phrase_prefix"
-        }
-    }
-}
---------------------------------------------------
-
-It accepts the same parameters as the phrase type. In addition, it also
-accepts a `max_expansions` parameter that can control to how many
-prefixes the last term will be expanded. It is highly recommended to set
-it to an acceptable value to control the execution time of the query.
-For example:
-
-[source,js]
---------------------------------------------------
-{
-    "text_phrase_prefix" : {
-        "message" : {
-            "query" : "this is a test",
-            "max_expansions" : 10
-        }
-    }
-}
---------------------------------------------------
-
-[float]
-[float]
-==== Comparison to query_string / field
-
-The text family of queries does not go through a "query parsing"
-process. It does not support field name prefixes, wildcard characters,
-or other "advance" features. For this reason, chances of it failing are
-very small / non existent, and it provides an excellent behavior when it
-comes to just analyze and run that text as a query behavior (which is
-usually what a text search box does). Also, the `phrase_prefix` can
-provide a great "as you type" behavior to automatically load search
-results.
@@ -13,8 +13,7 @@ and "remove" (`-`), for example: `+test*,-test3`.
 
 All multi indices API support the `ignore_indices` option. Setting it to
 `missing` will cause indices that do not exists to be ignored from the
-execution. By default, when its not set, the request will fail. Note,
-this feature is available since 0.20 version.
+execution. By default, when its not set, the request will fail.
 
 [float]
 == Routing
@@ -3,8 +3,7 @@
 
 The explain api computes a score explanation for a query and a specific
 document. This can give useful feedback whether a document matches or
-didn't match a specific query. This feature is available from version
-`0.19.9` and up.
+didn't match a specific query.
 
 [float]
 === Usage
@@ -62,8 +61,7 @@ This will yield the same result as the previous request.
 [horizontal]
 `fields`::
     Allows to control which fields to return as part of the
-    document explained (support `_source` for the full document). Note, this
-    feature is available since 0.20.
+    document explained (support `_source` for the full document).
 
 `routing`::
     Controls the routing in the case the routing was used
@@ -209,15 +209,6 @@ And, here is a sample data:
 --------------------------------------------------
 
 
-.Nested Query Facets
-[NOTE]
---
-Scoped filters and queries have been removed from version `0.90.0.Beta1`
-instead the facet / queries need be repeated as `facet_filter`. More
-information about this can be found in
-https://github.com/elasticsearch/elasticsearch/issues/2606[issue 2606]
---
-
 [float]
 ==== All Nested Matching Root Documents
 
@@ -2,8 +2,7 @@
 == Multi Search API
 
 The multi search API allows to execute several search requests within
-the same API. The endpoint for it is `_msearch` (available from `0.19`
-onwards).
+the same API. The endpoint for it is `_msearch`.
 
 The format of the request is similar to the bulk API format, and the
 structure is as follows (the structure is specifically optimized to
@@ -50,7 +50,7 @@ the index to be bigger):
 }
 --------------------------------------------------
 
-Since `0.20.2` the field name support wildcard notation, for example,
+The field name supports wildcard notation, for example,
 using `comment_*` which will cause all fields that match the expression
 to be highlighted.
 
@@ -28,8 +28,7 @@ the response.
 
 ==== Sort mode option
 
-From version `0.90.0.Beta1` Elasticsearch supports sorting by array
-fields which is also known as multi-valued fields. The `mode` option
+Elasticsearch supports sorting by array or multi-valued fields. The `mode` option
 controls what array value is picked for sorting the document it belongs
 to. The `mode` option can have the following values:
 
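A sketch of the `mode` option in a sort clause; the field name and values are illustrative:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_search' -d '{
    "sort" : [
        { "price" : { "order" : "asc", "mode" : "avg" } }
    ],
    "query" : { "match_all" : {} }
}'
--------------------------------------------------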
@@ -61,7 +60,7 @@ curl -XPOST 'localhost:9200/_search' -d '{
 
 ==== Sorting within nested objects.
 
-Also from version `0.90.0.Beta1` Elasticsearch supports sorting by
+Elasticsearch also supports sorting by
 fields that are inside one or more nested objects. The sorting by nested
 field support has the following parameters on top of the already
 existing sort options:
@@ -105,7 +104,7 @@ curl -XPOST 'localhost:9200/_search' -d '{
 }'
 --------------------------------------------------
 
-Since version `0.90.1` nested sorting is also support when sorting by
+Nested sorting is also supported when sorting by
 scripts and sorting by geo distance.
 
 ==== Missing Values
@@ -126,7 +125,7 @@ will be used for missing docs as the sort value). For example:
 }
 --------------------------------------------------
 
-Note: from version `0.90.1` if a nested inner object doesn't match with
+NOTE: If a nested inner object doesn't match with
 the `nested_filter` then a missing value is used.
 
 ==== Ignoring Unmapped Fields
@@ -2,8 +2,7 @@
 == Suggesters
 
 The suggest feature suggests similar looking terms based on a provided
-text by using a suggester. The suggest feature is available from version
-`0.90.0.Beta1`. Parts of the suggest feature are still under
+text by using a suggester. Parts of the suggest feature are still under
 development.
 
 The suggest request part is either defined alongside the query part in a
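A hedged sketch of a term suggester defined alongside the query part, as the text above describes; the suggestion name, field, and text are illustrative:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_search' -d '{
    "query" : { "match_all" : {} },
    "suggest" : {
        "my-suggestion" : {
            "text" : "the amsterdma meetpu",
            "term" : {
                "field" : "body"
            }
        }
    }
}'
--------------------------------------------------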