Merge branch 'master' into feature/query-refactoring

Conflicts:
	src/main/java/org/elasticsearch/index/query/RangeQueryParser.java
	src/main/java/org/elasticsearch/index/query/SpanTermQueryParser.java

This commit is contained in: commit 313e9c6769
@@ -50,7 +50,7 @@ POST /_flush
 === Synced Flush
 
 Elasticsearch tracks the indexing activity of each shard. Shards that have not
-received any indexing operations for 30 minutes are automatically marked as inactive. This presents
+received any indexing operations for 5 minutes are automatically marked as inactive. This presents
 an opportunity for Elasticsearch to reduce shard resources and also perform
 a special kind of flush, called `synced flush`. A synced flush performs a normal flush, then adds
 a generated unique marker (sync_id) to all shards.
@@ -117,7 +117,7 @@ which returns something similar to:
 === Synced Flush API
 
 The Synced Flush API allows an administrator to initiate a synced flush manually. This can be particularly useful for
-a planned (rolling) cluster restart where you can stop indexing and don't want to wait the default 30 minutes for
+a planned (rolling) cluster restart where you can stop indexing and don't want to wait the default 5 minutes for
 idle indices to be sync-flushed automatically.
 
 While handy, there are a couple of caveats for this API:
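As a rough illustration of why the sync_id marker matters for the rolling-restart case above, here is a toy sketch (our own code, not Elasticsearch internals): a synced flush stamps every copy of a shard with one shared marker, and on restart a replica whose marker matches the primary's can skip expensive file-by-file recovery.

```python
# Hypothetical sketch of the sync_id mechanism, not Elasticsearch source.
import uuid

def synced_flush(shard_copies):
    """Mark all copies of a shard with one shared sync_id marker."""
    sync_id = uuid.uuid4().hex
    for copy in shard_copies:
        copy["sync_id"] = sync_id
    return sync_id

def needs_file_recovery(primary, replica):
    """File-based recovery can be skipped only when both sides carry the same marker."""
    return primary.get("sync_id") is None or primary["sync_id"] != replica.get("sync_id")

primary, replica = {}, {}
synced_flush([primary, replica])
assert not needs_file_recovery(primary, replica)

stale_replica = {}  # a copy that missed the synced flush still needs full recovery
assert needs_file_recovery(primary, stale_replica)
```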
@@ -16,6 +16,25 @@ settings, you need to enable using it in elasticsearch.yml:
 node.enable_custom_paths: true
 --------------------------------------------------
 
+You will also need to disable the default security manager that Elasticsearch
+runs with. You can do this by either passing
+`-Des.security.manager.enabled=false` with the parameters while starting
+Elasticsearch, or you can disable it in elasticsearch.yml:
+
+[source,yaml]
+--------------------------------------------------
+security.manager.enabled: false
+--------------------------------------------------
+
+[WARNING]
+========================
+Disabling the security manager means that the Elasticsearch process is not
+limited to the directories and files that it can read and write. However,
+because the `index.data_path` setting is set when creating the index, the
+security manager would prevent writing or reading from the index's location, so
+it must be disabled.
+========================
+
 You can then create an index with a custom data path, where each node will use
 this path for the data:
 
@@ -88,6 +107,12 @@ settings API:
 Boolean value indicating this index uses a shared filesystem. Defaults to
 the `true` if `index.shadow_replicas` is set to true, `false` otherwise.
 
+`index.shared_filesystem.recover_on_any_node`::
+Boolean value indicating whether the primary shards for the index should be
+allowed to recover on any node in the cluster, regardless of the number of
+replicas or whether the node has previously had the shard allocated to it
+before. Defaults to `false`.
+
 === Node level settings related to shadow replicas
 
 These are non-dynamic settings that need to be configured in `elasticsearch.yml`
@@ -198,6 +198,11 @@ year.
 
 |`year_month_day`|A formatter for a four digit year, two digit month of
 year, and two digit day of month.
 
+|`epoch_second`|A formatter for the number of seconds since the epoch.
+
+|`epoch_millis`|A formatter for the number of milliseconds since
+the epoch.
+
 |=======================================================================
 
 [float]
@@ -79,7 +79,7 @@ format>> used to parse the provided timestamp value. For example:
 }
 --------------------------------------------------
 
-Note, the default format is `dateOptionalTime`. The timestamp value will
+Note, the default format is `epoch_millis||dateOptionalTime`. The timestamp value will
 first be parsed as a number and if it fails the format will be tried.
 
 [float]
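The fallback behaviour the changed line describes ("first be parsed as a number and if it fails the format will be tried") can be sketched like this; the function name and exact ISO pattern are ours, not Elasticsearch code.

```python
# Illustrative sketch of the `epoch_millis||dateOptionalTime` fallback:
# try the value as epoch milliseconds first, then fall back to an ISO-style format.
from datetime import datetime, timezone

def parse_timestamp(value):
    try:
        millis = int(value)  # epoch_millis branch
        return datetime.fromtimestamp(millis / 1000.0, tz=timezone.utc)
    except ValueError:
        # dateOptionalTime-like branch (simplified to one fixed pattern here)
        return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)

assert parse_timestamp("0").year == 1970
assert parse_timestamp("2015-06-15T10:30:00").hour == 10
```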
@@ -349,7 +349,7 @@ date type:
 Defaults to the property/field name.
 
 |`format` |The <<mapping-date-format,date
-format>>. Defaults to `dateOptionalTime`.
+format>>. Defaults to `epoch_millis||dateOptionalTime`.
 
 |`store` |Set to `true` to store actual field in the index, `false` to not
 store it. Defaults to `false` (note, the JSON document itself is stored,
@@ -42,8 +42,8 @@ and will use the matching format as its format attribute. The date
 format itself is explained
 <<mapping-date-format,here>>.
 
-The default formats are: `dateOptionalTime` (ISO) and
-`yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z`.
+The default formats are: `dateOptionalTime` (ISO),
+`yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z` and `epoch_millis`.
 
 *Note:* `dynamic_date_formats` are used *only* for dynamically added
 date fields, not for `date` fields that you specify in your mapping.
@@ -3,32 +3,48 @@
 
 [partintro]
 --
-*elasticsearch* provides a full Query DSL based on JSON to define
-queries. In general, there are basic queries such as
-<<query-dsl-term-query,term>> or
-<<query-dsl-prefix-query,prefix>>. There are
-also compound queries like the
-<<query-dsl-bool-query,bool>> query.
-
-While queries have scoring capabilities, in some contexts they will
-only be used to filter the result set, such as in the
-<<query-dsl-filtered-query,filtered>> or
-<<query-dsl-constant-score-query,constant_score>>
-queries.
-
-Think of the Query DSL as an AST of queries.
-Some queries can be used by themselves like the
-<<query-dsl-term-query,term>> query but other queries can contain
-queries (like the <<query-dsl-bool-query,bool>> query), and each
-of these composite queries can contain *any* query of the list of
-queries, resulting in the ability to build quite
-complex (and interesting) queries.
-
-Queries can be used in different APIs. For example,
-within a <<search-request-query,search query>>, or
-as an <<search-aggregations-bucket-filter-aggregation,aggregation filter>>.
-This section explains the queries that can form the AST one can use.
+Elasticsearch provides a full Query DSL based on JSON to define queries.
+Think of the Query DSL as an AST of queries, consisting of two types of
+clauses:
+
+Leaf query clauses::
+
+Leaf query clauses look for a particular value in a particular field, such as the
+<<query-dsl-match-query,`match`>>, <<query-dsl-term-query,`term`>> or
+<<query-dsl-range-query,`range`>> queries. These queries can be used
+by themselves.
+
+Compound query clauses::
+
+Compound query clauses wrap other leaf *or* compound queries and are used to combine
+multiple queries in a logical fashion (such as the
+<<query-dsl-bool-query,`bool`>> or <<query-dsl-dis-max-query,`dis_max`>> query),
+or to alter their behaviour (such as the <<query-dsl-not-query,`not`>> or
+<<query-dsl-constant-score-query,`constant_score`>> query).
+
+Query clauses behave differently depending on whether they are used in
+<<query-filter-context,query context or filter context>>.
 --
 
-include::query-dsl/index.asciidoc[]
+include::query-dsl/query_filter_context.asciidoc[]
+
+include::query-dsl/match-all-query.asciidoc[]
+
+include::query-dsl/full-text-queries.asciidoc[]
+
+include::query-dsl/term-level-queries.asciidoc[]
+
+include::query-dsl/compound-queries.asciidoc[]
+
+include::query-dsl/joining-queries.asciidoc[]
+
+include::query-dsl/geo-queries.asciidoc[]
+
+include::query-dsl/special-queries.asciidoc[]
+
+include::query-dsl/span-queries.asciidoc[]
+
+include::query-dsl/minimum-should-match.asciidoc[]
+
+include::query-dsl/multi-term-rewrite.asciidoc[]
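The leaf-versus-compound distinction introduced in this hunk is easy to see in the JSON itself; a minimal sketch (field names and values invented for illustration):

```python
# Leaf clauses name a field and a value; a compound `bool` clause wraps them.
import json

leaf_match = {"match": {"title": "search"}}
leaf_range = {"range": {"age": {"gte": 10}}}

compound = {"bool": {"must": [leaf_match], "filter": [leaf_range]}}

# Round-trip through JSON to show this is plain data, an AST of clauses.
body = json.loads(json.dumps(compound))
assert "match" in body["bool"]["must"][0]
assert body["bool"]["filter"][0]["range"]["age"]["gte"] == 10
```

Note how the `filter` clause places its leaf query in filter context, while `must` keeps its leaf query in scoring query context.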
@@ -1,5 +1,5 @@
 [[query-dsl-and-query]]
-== And Query
+=== And Query
 
 deprecated[2.0.0, Use the `bool` query instead]
 
@@ -1,5 +1,5 @@
 [[query-dsl-bool-query]]
-== Bool Query
+=== Bool Query
 
 A query that matches documents matching boolean combinations of other
 queries. The bool query maps to Lucene `BooleanQuery`. It is built using
@@ -1,5 +1,5 @@
 [[query-dsl-boosting-query]]
-== Boosting Query
+=== Boosting Query
 
 The `boosting` query can be used to effectively demote results that
 match a given query. Unlike the "NOT" clause in bool query, this still
@@ -1,12 +1,12 @@
 [[query-dsl-common-terms-query]]
-== Common Terms Query
+=== Common Terms Query
 
 The `common` terms query is a modern alternative to stopwords which
 improves the precision and recall of search results (by taking stopwords
 into account), without sacrificing performance.
 
 [float]
-=== The problem
+==== The problem
 
 Every term in a query has a cost. A search for `"The brown fox"`
 requires three term queries, one for each of `"the"`, `"brown"` and
|
@ -25,7 +25,7 @@ and `"not happy"`) and we lose recall (eg text like `"The The"` or
|
||||||
`"To be or not to be"` would simply not exist in the index).
|
`"To be or not to be"` would simply not exist in the index).
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
=== The solution
|
==== The solution
|
||||||
|
|
||||||
The `common` terms query divides the query terms into two groups: more
|
The `common` terms query divides the query terms into two groups: more
|
||||||
important (ie _low frequency_ terms) and less important (ie _high
|
important (ie _low frequency_ terms) and less important (ie _high
|
||||||
|
@@ -63,7 +63,7 @@ site, common terms like `"clip"` or `"video"` will automatically behave
 as stopwords without the need to maintain a manual list.
 
 [float]
-=== Examples
+==== Examples
 
 In this example, words that have a document frequency greater than 0.1%
 (eg `"this"` and `"is"`) will be treated as _common terms_.
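The grouping step that the `common` terms query performs can be sketched in a few lines of toy code (our own, not Lucene's implementation): terms whose document frequency exceeds a cutoff are treated as high-frequency "stopword-like" terms.

```python
# Split query terms into low- and high-frequency groups by a cutoff frequency,
# mirroring the 0.1% example above. doc_freq maps term -> document count.
def split_terms(terms, doc_freq, total_docs, cutoff=0.001):
    low, high = [], []
    for term in terms:
        group = high if doc_freq.get(term, 0) / total_docs > cutoff else low
        group.append(term)
    return low, high

doc_freq = {"this": 500, "is": 480, "bonsai": 2, "cool": 9}
low, high = split_terms(["this", "is", "bonsai", "cool"], doc_freq, total_docs=100_000)
assert low == ["bonsai", "cool"]   # important terms: documents must match these
assert high == ["this", "is"]      # common terms: only used to refine scoring
```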
@@ -0,0 +1,69 @@
+[[compound-queries]]
+== Compound queries
+
+Compound queries wrap other compound or leaf queries, either to combine their
+results and scores, to change their behaviour, or to switch from query to
+filter context.
+
+The queries in this group are:
+
+<<query-dsl-constant-score-query,`constant_score` query>>::
+
+A query which wraps another query, but executes it in filter context. All
+matching documents are given the same ``constant'' `_score`.
+
+<<query-dsl-bool-query,`bool` query>>::
+
+The default query for combining multiple leaf or compound query clauses, as
+`must`, `should`, `must_not`, or `filter` clauses. The `must` and `should`
+clauses have their scores combined -- the more matching clauses, the better --
+while the `must_not` and `filter` clauses are executed in filter context.
+
+<<query-dsl-dis-max-query,`dis_max` query>>::
+
+A query which accepts multiple queries, and returns any documents which match
+any of the query clauses. While the `bool` query combines the scores from all
+matching queries, the `dis_max` query uses the score of the single best-
+matching query clause.
+
+<<query-dsl-function-score-query,`function_score` query>>::
+
+Modify the scores returned by the main query with functions to take into
+account factors like popularity, recency, distance, or custom algorithms
+implemented with scripting.
+
+<<query-dsl-boosting-query,`boosting` query>>::
+
+Return documents which match a `positive` query, but reduce the score of
+documents which also match a `negative` query.
+
+<<query-dsl-indices-query,`indices` query>>::
+
+Execute one query for the specified indices, and another for other indices.
+
+<<query-dsl-and-query,`and`>>, <<query-dsl-or-query,`or`>>, <<query-dsl-not-query,`not`>>::
+
+Synonyms for the `bool` query.
+
+<<query-dsl-filtered-query,`filtered` query>>::
+
+Combine a query clause in query context with another in filter context. deprecated[2.0.0,Use the `bool` query instead]
+
+<<query-dsl-limit-query,`limit` query>>::
+
+Limits the number of documents examined per shard. deprecated[1.6.0]
+
+
+include::constant-score-query.asciidoc[]
+include::bool-query.asciidoc[]
+include::dis-max-query.asciidoc[]
+include::function-score-query.asciidoc[]
+include::boosting-query.asciidoc[]
+include::indices-query.asciidoc[]
+include::and-query.asciidoc[]
+include::not-query.asciidoc[]
+include::or-query.asciidoc[]
+include::filtered-query.asciidoc[]
+include::limit-query.asciidoc[]
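The `bool` versus `dis_max` scoring contrast in this new page reduces to a one-line difference, shown in this toy sketch (our own code; `tie_breaker` is the real `dis_max` parameter):

```python
# `bool` sums the scores of matching clauses; `dis_max` keeps only the best,
# optionally adding a fraction of the remaining scores via tie_breaker.
def bool_score(clause_scores):
    return sum(clause_scores)

def dis_max_score(clause_scores, tie_breaker=0.0):
    best = max(clause_scores)
    return best + tie_breaker * (sum(clause_scores) - best)

scores = [0.9, 0.4, 0.3]
assert abs(bool_score(scores) - 1.6) < 1e-9
assert dis_max_score(scores) == 0.9
assert abs(dis_max_score(scores, tie_breaker=0.5) - 1.25) < 1e-9
```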
@@ -1,5 +1,5 @@
 [[query-dsl-constant-score-query]]
-== Constant Score Query
+=== Constant Score Query
 
 A query that wraps another query and simply returns a
 constant score equal to the query boost for every document in the
@@ -1,5 +1,5 @@
 [[query-dsl-dis-max-query]]
-== Dis Max Query
+=== Dis Max Query
 
 A query that generates the union of documents produced by its
 subqueries, and that scores each document with the maximum score for
@@ -1,5 +1,5 @@
 [[query-dsl-exists-query]]
-== Exists Query
+=== Exists Query
 
 Returns documents that have at least one non-`null` value in the original field:
 
@@ -42,7 +42,7 @@ These documents would *not* match the above query:
 <3> The `user` field is missing completely.
 
 [float]
-==== `null_value` mapping
+===== `null_value` mapping
 
 If the field mapping includes the `null_value` setting (see <<mapping-core-types>>)
 then explicit `null` values are replaced with the specified `null_value`. For
@@ -1,5 +1,5 @@
 [[query-dsl-filtered-query]]
-== Filtered Query
+=== Filtered Query
 
 deprecated[2.0.0, Use the `bool` query instead with a `must` clause for the query and a `filter` clause for the filter]
 
@@ -47,7 +47,7 @@ curl -XGET localhost:9200/_search -d '
 <1> The `filtered` query is passed as the value of the `query`
 parameter in the search request.
 
-=== Filtering without a query
+==== Filtering without a query
 
 If a `query` is not specified, it defaults to the
 <<query-dsl-match-all-query,`match_all` query>>. This means that the
@@ -71,7 +71,7 @@ curl -XGET localhost:9200/_search -d '
 <1> No `query` has been specified, so this request applies just the filter,
 returning all documents created since yesterday.
 
-==== Multiple filters
+===== Multiple filters
 
 Multiple filters can be applied by wrapping them in a
 <<query-dsl-bool-query,`bool` query>>, for example:
@@ -95,7 +95,7 @@ Multiple filters can be applied by wrapping them in a
 }
 --------------------------------------------------
 
-==== Filter strategy
+===== Filter strategy
 
 You can control how the filter and query are executed with the `strategy`
 parameter:
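The rewrite that the `filtered` deprecation notice recommends (a `bool` query with `must` and `filter` clauses) can be sketched mechanically; this is our own toy converter, with invented field names, following the "defaults to `match_all`" rule described above.

```python
# Convert a deprecated `filtered` query body into the equivalent `bool` body.
def filtered_to_bool(filtered):
    inner = filtered["filtered"]
    return {"bool": {
        "must": [inner.get("query", {"match_all": {}})],  # default per the docs
        "filter": [inner["filter"]],
    }}

old = {"filtered": {"query": {"match": {"text": "quick"}},
                    "filter": {"range": {"created": {"gte": "now-1d"}}}}}
new = filtered_to_bool(old)
assert new["bool"]["must"][0] == {"match": {"text": "quick"}}
assert new["bool"]["filter"][0]["range"]["created"]["gte"] == "now-1d"
```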
@@ -0,0 +1,44 @@
+[[full-text-queries]]
+== Full text queries
+
+The high-level full text queries are usually used for running full text
+queries on full text fields like the body of an email. They understand how the
+field being queried is <<analysis,analyzed>> and will apply each field's
+`analyzer` (or `search_analyzer`) to the query string before executing.
+
+The queries in this group are:
+
+<<query-dsl-match-query,`match` query>>::
+
+The standard query for performing full text queries, including fuzzy matching
+and phrase or proximity queries.
+
+<<query-dsl-multi-match-query,`multi_match` query>>::
+
+The multi-field version of the `match` query.
+
+<<query-dsl-common-terms-query,`common_terms` query>>::
+
+A more specialized query which gives more preference to uncommon words.
+
+<<query-dsl-query-string-query,`query_string` query>>::
+
+Supports the compact Lucene <<query-string-syntax,query string syntax>>,
+allowing you to specify AND|OR|NOT conditions and multi-field search
+within a single query string. For expert users only.
+
+<<query-dsl-simple-query-string-query,`simple_query_string`>>::
+
+A simpler, more robust version of the `query_string` syntax suitable
+for exposing directly to users.
+
+include::match-query.asciidoc[]
+
+include::multi-match-query.asciidoc[]
+
+include::common-terms-query.asciidoc[]
+
+include::query-string-query.asciidoc[]
+
+include::simple-query-string-query.asciidoc[]
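The key sentence in this new page is that the field's analyzer is applied to the query string before executing. A toy analyzer (lowercase plus whitespace split, our own simplification) makes the consequence visible: query and document are compared term by term after both pass through the same analysis.

```python
# Apply the same trivial analyzer to both the stored field and the query string.
def analyze(text):
    return [t for t in text.lower().split() if t]

def match(doc_field, query_string):
    doc_terms = set(analyze(doc_field))
    # Like the `match` query's default OR behaviour: any analyzed term may match.
    return any(term in doc_terms for term in analyze(query_string))

assert match("Quick Brown Fox", "quick FOX")
assert not match("Quick Brown Fox", "slow turtle")
```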
@@ -1,15 +1,13 @@
 [[query-dsl-function-score-query]]
-== Function Score Query
+=== Function Score Query
 
 The `function_score` allows you to modify the score of documents that are
 retrieved by a query. This can be useful if, for example, a score
 function is computationally expensive and it is sufficient to compute
 the score on a filtered set of documents.
 
-=== Using function score
-
 To use `function_score`, the user has to define a query and one or
-several functions, that compute a new score for each document returned
+more functions, that compute a new score for each document returned
 by the query.
 
 `function_score` can be used with only one function like this:
@@ -89,13 +87,11 @@ query. The parameter `boost_mode` defines how:
 `min`:: min of query score and function score
 
 By default, modifying the score does not change which documents match. To exclude
 documents that do not meet a certain score threshold the `min_score` parameter can be set to the desired score threshold.
 
-==== Score functions
-
 The `function_score` query provides several types of score functions.
 
-===== Script score
+==== Script score
 
 The `script_score` function allows you to wrap another query and customize
 the scoring of it optionally with a computation derived from other numeric
@@ -135,7 +131,7 @@ Note that unlike the `custom_score` query, the
 score of the query is multiplied with the result of the script scoring. If
 you wish to inhibit this, set `"boost_mode": "replace"`
 
-===== Weight
+==== Weight
 
 The `weight` score allows you to multiply the score by the provided
 `weight`. This can sometimes be desired since boost value set on
@@ -147,7 +143,7 @@ not.
 "weight" : number
 --------------------------------------------------
 
-===== Random
+==== Random
 
 The `random_score` generates scores using a hash of the `_uid` field,
 with a `seed` for variation. If `seed` is not specified, the current
@@ -163,7 +159,7 @@ be a memory intensive operation since the values are unique.
 }
 --------------------------------------------------
 
-===== Field Value factor
+==== Field Value factor
 
 The `field_value_factor` function allows you to use a field from a document to
 influence the score. It's similar to using the `script_score` function, however,
@@ -207,7 +203,7 @@ is an illegal operation, and an exception will be thrown. Be sure to limit the
 values of the field with a range filter to avoid this, or use `log1p` and
 `ln1p`.
 
-===== Decay functions
+==== Decay functions
 
 Decay functions score a document with a function that decays depending
 on the distance of a numeric field value of the document from a user
@@ -254,13 +250,13 @@ The `offset` and `decay` parameters are optional.
 
 [horizontal]
 `origin`::
 The point of origin used for calculating distance. Must be given as a
 number for numeric field, date for date fields and geo point for geo fields.
 Required for geo and numeric field. For date fields the default is `now`. Date
 math (for example `now-1h`) is supported for origin.
 
 `scale`::
 Required for all types. Defines the distance from origin at which the computed
 score will equal `decay` parameter. For geo fields: Can be defined as number+unit (1km, 12m,...).
 Default unit is meters. For date fields: Can to be defined as a number+unit ("1h", "10d",...).
 Default unit is milliseconds. For numeric field: Any number.
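The contract stated for `origin`, `scale`, and `decay` (the multiplier is 1 at the origin and equals `decay` at `scale` distance away) can be checked numerically. This sketch uses the standard Gaussian and linear decay formulas; it is our own illustration, not Elasticsearch source code.

```python
import math

def gauss_decay(value, origin, scale, offset=0.0, decay=0.5):
    # Choose sigma so that the multiplier equals `decay` at distance `scale`.
    sigma_sq = -scale ** 2 / (2.0 * math.log(decay))
    dist = max(0.0, abs(value - origin) - offset)
    return math.exp(-dist ** 2 / (2.0 * sigma_sq))

def linear_decay(value, origin, scale, offset=0.0, decay=0.5):
    dist = max(0.0, abs(value - origin) - offset)
    return max(0.0, 1.0 - (dist / scale) * (1.0 - decay))

# At the origin the multiplier is 1; at `scale` away it equals `decay`.
assert gauss_decay(10, origin=10, scale=5) == 1.0
assert abs(gauss_decay(15, origin=10, scale=5) - 0.5) < 1e-9
assert abs(linear_decay(15, origin=10, scale=5) - 0.5) < 1e-9
```

Unlike `gauss`, the `linear` multiplier reaches exactly zero at a finite distance, which is why the two surface plots referenced below look different far from the origin.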
@@ -360,7 +356,7 @@ Example:
 
 
 
-==== Detailed example
+===== Detailed example
 
 Suppose you are searching for a hotel in a certain town. Your budget is
 limited. Also, you would like the hotel to be close to the town center,
@@ -480,7 +476,7 @@ image::https://f.cloud.github.com/assets/4320215/768161/082975c0-e899-11e2-86f7-
 
 image::https://f.cloud.github.com/assets/4320215/768162/0b606884-e899-11e2-907b-aefc77eefef6.png[width="700px"]
 
-===== Linear' decay, keyword `linear`
+===== Linear decay, keyword `linear`
 
 When choosing `linear` as the decay function in the above example, the
 contour and surface plot of the multiplier looks like this:
@@ -1,10 +1,10 @@
 [[query-dsl-fuzzy-query]]
-== Fuzzy Query
+=== Fuzzy Query
 
 The fuzzy query uses similarity based on Levenshtein edit distance for
 `string` fields, and a `+/-` margin on numeric and date fields.
 
-=== String fields
+==== String fields
 
 The `fuzzy` query generates all possible matching terms that are within the
 maximum edit distance specified in `fuzziness` and then checks the term
@@ -38,7 +38,7 @@ Or with more advanced settings:
 --------------------------------------------------
 
 [float]
-==== Parameters
+===== Parameters
 
 [horizontal]
 `fuzziness`::
@ -62,7 +62,7 @@ are both set to `0`. This could cause every term in the index to be examined!
|
||||||
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
=== Numeric and date fields
|
==== Numeric and date fields
|
||||||
|
|
||||||
Performs a <<query-dsl-range-query>> ``around'' the value using the
|
Performs a <<query-dsl-range-query>> ``around'' the value using the
|
||||||
`fuzziness` value as a `+/-` range, where:
|
`fuzziness` value as a `+/-` range, where:
|
||||||
|
|
|
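The `+/-` behaviour on numeric fields described above can be sketched with a minimal request body (the `price` field and its values are illustrative assumptions, not part of this diff):

[source,js]
--------------------------------------------------
{
    "fuzzy" : {
        "price" : {
            "value" : 12,
            "fuzziness" : 2
        }
    }
}
--------------------------------------------------

With a `fuzziness` of `2`, this query matches documents whose `price` lies in the range `10` to `14`.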
@@ -1,5 +1,5 @@
 [[query-dsl-geo-bounding-box-query]]
-== Geo Bounding Box Query
+=== Geo Bounding Box Query
 
 A query allowing to filter hits based on a point location using a
 bounding box. Assuming the following indexed document:
@@ -45,13 +45,13 @@ Then the following simple query can be executed with a
 --------------------------------------------------
 
 [float]
-=== Accepted Formats
+==== Accepted Formats
 
 In much the same way the geo_point type can accept different
 representation of the geo point, the filter can accept it as well:
 
 [float]
-==== Lat Lon As Properties
+===== Lat Lon As Properties
 
 [source,js]
 --------------------------------------------------
@@ -79,7 +79,7 @@ representation of the geo point, the filter can accept it as well:
 --------------------------------------------------
 
 [float]
-==== Lat Lon As Array
+===== Lat Lon As Array
 
 Format in `[lon, lat]`, note, the order of lon/lat here in order to
 conform with http://geojson.org/[GeoJSON].
@@ -104,7 +104,7 @@ conform with http://geojson.org/[GeoJSON].
 --------------------------------------------------
 
 [float]
-==== Lat Lon As String
+===== Lat Lon As String
 
 Format in `lat,lon`.
 
@@ -128,7 +128,7 @@ Format in `lat,lon`.
 --------------------------------------------------
 
 [float]
-==== Geohash
+===== Geohash
 
 [source,js]
 --------------------------------------------------
@@ -150,7 +150,7 @@ Format in `lat,lon`.
 --------------------------------------------------
 
 [float]
-=== Vertices
+==== Vertices
 
 The vertices of the bounding box can either be set by `top_left` and
 `bottom_right` or by `top_right` and `bottom_left` parameters. More
@@ -182,20 +182,20 @@ values separately.
 
 
 [float]
-=== geo_point Type
+==== geo_point Type
 
 The filter *requires* the `geo_point` type to be set on the relevant
 field.
 
 [float]
-=== Multi Location Per Document
+==== Multi Location Per Document
 
 The filter can work with multiple locations / points per document. Once
 a single location / point matches the filter, the document will be
 included in the filter
 
 [float]
-=== Type
+==== Type
 
 The type of the bounding box execution by default is set to `memory`,
 which means in memory checks if the doc falls within the bounding box
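For reference, a minimal sketch of the query this page describes, assuming a `pin.location` field mapped as `geo_point` (the field name and coordinates are illustrative):

[source,js]
--------------------------------------------------
{
    "geo_bounding_box" : {
        "pin.location" : {
            "top_left" : {
                "lat" : 40.73,
                "lon" : -74.1
            },
            "bottom_right" : {
                "lat" : 40.01,
                "lon" : -71.12
            }
        }
    }
}
--------------------------------------------------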
@@ -1,5 +1,5 @@
 [[query-dsl-geo-distance-query]]
-== Geo Distance Query
+=== Geo Distance Query
 
 Filters documents that include only hits that exists within a specific
 distance from a geo point. Assuming the following indexed json:
@@ -40,13 +40,13 @@ filter:
 --------------------------------------------------
 
 [float]
-=== Accepted Formats
+==== Accepted Formats
 
 In much the same way the `geo_point` type can accept different
 representation of the geo point, the filter can accept it as well:
 
 [float]
-==== Lat Lon As Properties
+===== Lat Lon As Properties
 
 [source,js]
 --------------------------------------------------
@@ -69,7 +69,7 @@ representation of the geo point, the filter can accept it as well:
 --------------------------------------------------
 
 [float]
-==== Lat Lon As Array
+===== Lat Lon As Array
 
 Format in `[lon, lat]`, note, the order of lon/lat here in order to
 conform with http://geojson.org/[GeoJSON].
@@ -92,7 +92,7 @@ conform with http://geojson.org/[GeoJSON].
 --------------------------------------------------
 
 [float]
-==== Lat Lon As String
+===== Lat Lon As String
 
 Format in `lat,lon`.
 
@@ -114,7 +114,7 @@ Format in `lat,lon`.
 --------------------------------------------------
 
 [float]
-==== Geohash
+===== Geohash
 
 [source,js]
 --------------------------------------------------
@@ -134,7 +134,7 @@ Format in `lat,lon`.
 --------------------------------------------------
 
 [float]
-=== Options
+==== Options
 
 The following are options allowed on the filter:
 
@@ -160,13 +160,13 @@ The following are options allowed on the filter:
 
 
 [float]
-=== geo_point Type
+==== geo_point Type
 
 The filter *requires* the `geo_point` type to be set on the relevant
 field.
 
 [float]
-=== Multi Location Per Document
+==== Multi Location Per Document
 
 The `geo_distance` filter can work with multiple locations / points per
 document. Once a single location / point matches the filter, the
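A minimal `geo_distance` sketch, again assuming an illustrative `pin.location` field mapped as `geo_point`:

[source,js]
--------------------------------------------------
{
    "geo_distance" : {
        "distance" : "200km",
        "pin.location" : {
            "lat" : 40,
            "lon" : -70
        }
    }
}
--------------------------------------------------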
@@ -1,5 +1,5 @@
 [[query-dsl-geo-distance-range-query]]
-== Geo Distance Range Query
+=== Geo Distance Range Query
 
 Filters documents that exists within a range from a specific point:
 
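A short sketch of the range form, assuming the same illustrative `pin.location` field; `from` and `to` bound the distance from the central point:

[source,js]
--------------------------------------------------
{
    "geo_distance_range" : {
        "from" : "200km",
        "to" : "400km",
        "pin.location" : {
            "lat" : 40,
            "lon" : -70
        }
    }
}
--------------------------------------------------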
@@ -1,5 +1,5 @@
 [[query-dsl-geo-polygon-query]]
-== Geo Polygon Query
+=== Geo Polygon Query
 
 A query allowing to include hits that only fall within a polygon of
 points. Here is an example:
@@ -27,10 +27,10 @@ points. Here is an example:
 --------------------------------------------------
 
 [float]
-=== Allowed Formats
+==== Allowed Formats
 
 [float]
-==== Lat Long as Array
+===== Lat Long as Array
 
 Format in `[lon, lat]`, note, the order of lon/lat here in order to
 conform with http://geojson.org/[GeoJSON].
@@ -58,7 +58,7 @@ conform with http://geojson.org/[GeoJSON].
 --------------------------------------------------
 
 [float]
-==== Lat Lon as String
+===== Lat Lon as String
 
 Format in `lat,lon`.
 
@@ -85,7 +85,7 @@ Format in `lat,lon`.
 --------------------------------------------------
 
 [float]
-==== Geohash
+===== Geohash
 
 [source,js]
 --------------------------------------------------
@@ -110,7 +110,7 @@ Format in `lat,lon`.
 --------------------------------------------------
 
 [float]
-=== geo_point Type
+==== geo_point Type
 
 The filter *requires* the
 <<mapping-geo-point-type,geo_point>> type to be
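A minimal polygon sketch, assuming an illustrative `person.location` field mapped as `geo_point`; the polygon is defined by its list of points:

[source,js]
--------------------------------------------------
{
    "geo_polygon" : {
        "person.location" : {
            "points" : [
                { "lat" : 40, "lon" : -70 },
                { "lat" : 30, "lon" : -80 },
                { "lat" : 20, "lon" : -90 }
            ]
        }
    }
}
--------------------------------------------------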
@ -0,0 +1,50 @@
|
||||||
|
[[geo-queries]]
|
||||||
|
== Geo queries
|
||||||
|
|
||||||
|
Elasticsearch supports two types of geo data:
|
||||||
|
<<mapping-geo-point-type,`geo_point`>> fields which support lat/lon pairs, and
|
||||||
|
<<mapping-geo-shape-type,`geo_shape`>> fields, which support points,
|
||||||
|
lines, circles, polygons, multi-polygons etc.
|
||||||
|
|
||||||
|
The queries in this group are:
|
||||||
|
|
||||||
|
<<query-dsl-geo-shape-query,`geo_shape`>> query::
|
||||||
|
|
||||||
|
Find document with geo-shapes which either intersect, are contained by, or
|
||||||
|
do not interesect with the specified geo-shape.
|
||||||
|
|
||||||
|
<<query-dsl-geo-bounding-box-query,`geo_bounding_box`>> query::
|
||||||
|
|
||||||
|
Finds documents with geo-points that fall into the specified rectangle.
|
||||||
|
|
||||||
|
<<query-dsl-geo-distance-query,`geo_distance`>> query::
|
||||||
|
|
||||||
|
Finds document with geo-points within the specified distance of a central
|
||||||
|
point.
|
||||||
|
|
||||||
|
<<query-dsl-geo-distance-range-query,`geo_distance_range`>> query::
|
||||||
|
|
||||||
|
Like the `geo_point` query, but the range starts at a specified distance
|
||||||
|
from the central point.
|
||||||
|
|
||||||
|
<<query-dsl-geo-polygon-query,`geo_polygon`>> query::
|
||||||
|
|
||||||
|
Find documents with geo-points within the specified polygon.
|
||||||
|
|
||||||
|
<<query-dsl-geohash-cell-query,`geohash_cell`>> query::
|
||||||
|
|
||||||
|
Find geo-points whose geohash intersects with the geohash of the specified
|
||||||
|
point.
|
||||||
|
|
||||||
|
|
||||||
|
include::geo-shape-query.asciidoc[]
|
||||||
|
|
||||||
|
include::geo-bounding-box-query.asciidoc[]
|
||||||
|
|
||||||
|
include::geo-distance-query.asciidoc[]
|
||||||
|
|
||||||
|
include::geo-distance-range-query.asciidoc[]
|
||||||
|
|
||||||
|
include::geo-polygon-query.asciidoc[]
|
||||||
|
|
||||||
|
include::geohash-cell-query.asciidoc[]
|
|
@@ -1,26 +1,21 @@
 [[query-dsl-geo-shape-query]]
-== GeoShape Filter
+=== GeoShape Query
 
 Filter documents indexed using the `geo_shape` type.
 
-Requires the <<mapping-geo-shape-type,geo_shape
-Mapping>>.
+Requires the <<mapping-geo-shape-type,geo_shape Mapping>>.
 
 The `geo_shape` query uses the same grid square representation as the
 geo_shape mapping to find documents that have a shape that intersects
 with the query shape. It will also use the same PrefixTree configuration
 as defined for the field mapping.
 
-[float]
-==== Filter Format
-
-The Filter supports two ways of defining the Filter shape, either by
+The query supports two ways of defining the query shape, either by
 providing a whole shape definition, or by referencing the name of a shape
 pre-indexed in another index. Both formats are defined below with
 examples.
 
-[float]
-===== Provided Shape Definition
+==== Inline Shape Definition
 
 Similar to the `geo_shape` type, the `geo_shape` Filter uses
 http://www.geojson.org[GeoJSON] to represent shapes.
@@ -64,8 +59,7 @@ The following query will find the point using the Elasticsearch's
 }
 --------------------------------------------------
 
-[float]
-===== Pre-Indexed Shape
+==== Pre-Indexed Shape
 
 The Filter also supports using a shape which has already been indexed in
 another index and/or index type. This is particularly useful for when
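An inline shape definition can be sketched as follows, assuming an illustrative `location` field mapped as `geo_shape`; the `envelope` type uses GeoJSON-style `[lon, lat]` coordinates for the upper-left and lower-right corners:

[source,js]
--------------------------------------------------
{
    "geo_shape" : {
        "location" : {
            "shape" : {
                "type" : "envelope",
                "coordinates" : [ [ 13.0, 53.0 ], [ 14.0, 52.0 ] ]
            }
        }
    }
}
--------------------------------------------------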
@@ -1,5 +1,5 @@
 [[query-dsl-geohash-cell-query]]
-== Geohash Cell Query
+=== Geohash Cell Query
 
 The `geohash_cell` query provides access to a hierarchy of geohashes.
 By defining a geohash cell, only <<mapping-geo-point-type,geopoints>>
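A minimal sketch of the geohash cell query, assuming an illustrative `pin` field mapped as `geo_point` with geohash prefixes indexed; `precision` selects the cell size and `neighbors` also matches the surrounding cells:

[source,js]
--------------------------------------------------
{
    "geohash_cell" : {
        "pin" : {
            "lat" : 13.4080,
            "lon" : 52.5186
        },
        "precision" : 3,
        "neighbors" : true
    }
}
--------------------------------------------------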
@@ -1,5 +1,5 @@
 [[query-dsl-has-child-query]]
-== Has Child Query
+=== Has Child Query
 
 The `has_child` filter accepts a query and the child type to run against, and
 results in parent documents that have child docs matching the query. Here is
@@ -20,7 +20,7 @@ an example:
 --------------------------------------------------
 
 [float]
-=== Scoring capabilities
+==== Scoring capabilities
 
 The `has_child` also has scoring support. The
 supported score types are `min`, `max`, `sum`, `avg` or `none`. The default is
@@ -46,7 +46,7 @@ inside the `has_child` query:
 --------------------------------------------------
 
 [float]
-=== Min/Max Children
+==== Min/Max Children
 
 The `has_child` query allows you to specify that a minimum and/or maximum
 number of children are required to match for the parent doc to be considered
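The scoring and min/max children options described above can be combined in one sketch; the `blog_tag` child type and `tag` field are illustrative assumptions, and `score_mode` is assumed to be the parameter that selects the score type:

[source,js]
--------------------------------------------------
{
    "has_child" : {
        "type" : "blog_tag",
        "score_mode" : "sum",
        "min_children" : 2,
        "max_children" : 10,
        "query" : {
            "term" : { "tag" : "something" }
        }
    }
}
--------------------------------------------------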
@@ -1,5 +1,5 @@
 [[query-dsl-has-parent-query]]
-== Has Parent Query
+=== Has Parent Query
 
 The `has_parent` query accepts a query and a parent type. The query is
 executed in the parent document space, which is specified by the parent
@@ -22,7 +22,7 @@ in the same manner as the `has_child` query.
 --------------------------------------------------
 
 [float]
-=== Scoring capabilities
+==== Scoring capabilities
 
 The `has_parent` also has scoring support. The
 supported score types are `score` or `none`. The default is `none` and
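A short sketch of a scoring `has_parent` query; the `blog` parent type and `tag` field are illustrative, and `score_mode` is assumed to be the parameter that selects the score type:

[source,js]
--------------------------------------------------
{
    "has_parent" : {
        "parent_type" : "blog",
        "score_mode" : "score",
        "query" : {
            "term" : { "tag" : "something" }
        }
    }
}
--------------------------------------------------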
@@ -1,5 +1,5 @@
 [[query-dsl-ids-query]]
-== Ids Query
+=== Ids Query
 
 Filters documents that only have the provided ids. Note, this query
 uses the <<mapping-uid-field,_uid>> field.
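A minimal sketch of the ids query (the `my_type` type name and id values are illustrative; `type` is optional and can also be an array of types):

[source,js]
--------------------------------------------------
{
    "ids" : {
        "type" : "my_type",
        "values" : ["1", "4", "100"]
    }
}
--------------------------------------------------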
@ -1,3 +1,5 @@
|
||||||
|
include::query_filter_context.asciidoc[]
|
||||||
|
|
||||||
include::match-query.asciidoc[]
|
include::match-query.asciidoc[]
|
||||||
|
|
||||||
include::multi-match-query.asciidoc[]
|
include::multi-match-query.asciidoc[]
|
||||||
|
|
|
@@ -1,5 +1,5 @@
 [[query-dsl-indices-query]]
-== Indices Query
+=== Indices Query
 
 The `indices` query can be used when executed across multiple indices,
 allowing to have a query that executes only when executed on an index
@@ -29,9 +29,9 @@ documents), and `all` (to match all). Defaults to `all`.
 `query` is mandatory, as well as `indices` (or `index`).
 
 [TIP]
-===================================================================
+====================================================================
 The fields order is important: if the `indices` are provided before `query`
 or `no_match_query`, the related queries get parsed only against the indices
 that they are going to be executed on. This is useful to avoid parsing queries
 when it is not necessary and prevent potential mapping errors.
-===================================================================
+====================================================================
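A minimal sketch of the indices query (index names, fields, and values are illustrative). Note that `indices` is listed before `query` and `no_match_query`, following the field-ordering advice in the TIP above:

[source,js]
--------------------------------------------------
{
    "indices" : {
        "indices" : ["index1", "index2"],
        "query" : {
            "term" : { "tag" : "wow" }
        },
        "no_match_query" : {
            "term" : { "tag" : "kow" }
        }
    }
}
--------------------------------------------------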
@ -0,0 +1,32 @@
|
||||||
|
[[joining-queries]]
|
||||||
|
== Joining queries
|
||||||
|
|
||||||
|
Performing full SQL-style joins in a distributed system like Elasticsearch is
|
||||||
|
prohibitively expensive. Instead, Elasticsearch offers two forms of join
|
||||||
|
which are designed to scale horizontally.
|
||||||
|
|
||||||
|
<<query-dsl-nested-query,`nested` query>>::
|
||||||
|
|
||||||
|
Documents may contains fields of type <<mapping-nested-type,`nested`>>. These
|
||||||
|
fields are used to index arrays of objects, where each object can be queried
|
||||||
|
(with the `nested` query) as an independent document.
|
||||||
|
|
||||||
|
<<query-dsl-has-child-query,`has_child`>> and <<query-dsl-has-parent-query,`has_parent`>> queries::
|
||||||
|
|
||||||
|
A <<mapping-parent-field,parent-child relationship>> can exist between two
|
||||||
|
document types within a single index. The `has_child` query returns parent
|
||||||
|
documents whose child documents match the specified query, while the
|
||||||
|
`has_parent` query returns child documents whose parent document matches the
|
||||||
|
specified query.
|
||||||
|
|
||||||
|
Also see the <<query-dsl-terms-lookup,terms-lookup mechanism>> in the `terms`
|
||||||
|
query, which allows you to build a `terms` query from values contained in
|
||||||
|
another document.
|
||||||
|
|
||||||
|
include::nested-query.asciidoc[]
|
||||||
|
|
||||||
|
include::has-child-query.asciidoc[]
|
||||||
|
|
||||||
|
include::has-parent-query.asciidoc[]
|
||||||
|
|
||||||
|
|
|
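The `nested` query mentioned in the overview above can be sketched as follows, assuming an illustrative `obj1` nested field with `name` and `count` properties; each object in the array is matched as its own document:

[source,js]
--------------------------------------------------
{
    "nested" : {
        "path" : "obj1",
        "score_mode" : "avg",
        "query" : {
            "bool" : {
                "must" : [
                    { "match" : { "obj1.name" : "blue" } },
                    { "range" : { "obj1.count" : { "gt" : 5 } } }
                ]
            }
        }
    }
}
--------------------------------------------------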
@@ -1,5 +1,5 @@
 [[query-dsl-limit-query]]
-== Limit Query
+=== Limit Query
 
 deprecated[1.6.0, Use <<search-request-body,terminate_after>> instead]
 
@@ -1,20 +1,17 @@
 [[query-dsl-match-all-query]]
 == Match All Query
 
-A query that matches all documents. Maps to Lucene `MatchAllDocsQuery`.
+The most simple query, which matches all documents, giving them all a `_score`
+of `1.0`.
 
 [source,js]
 --------------------------------------------------
-{
-    "match_all" : { }
-}
+{ "match_all": {} }
 --------------------------------------------------
 
-Which can also have boost associated with it:
+The `_score` can be changed with the `boost` parameter:
 
 [source,js]
 --------------------------------------------------
-{
-    "match_all" : { "boost" : 1.2 }
-}
+{ "match_all": { "boost" : 1.2 }}
 --------------------------------------------------
 
@@ -1,5 +1,5 @@
 [[query-dsl-match-query]]
-== Match Query
+=== Match Query
 
 A family of `match` queries that accept text/numerics/dates, analyzes
 it, and constructs a query out of it. For example:
@@ -16,10 +16,8 @@ it, and constructs a query out of it. For example:
 Note, `message` is the name of a field, you can substitute the name of
 any field (including `_all`) instead.
 
-[float]
-=== Types of Match Queries
+There are three types of `match` query: `boolean`, `phrase`, and `phrase_prefix`:
 
-[float]
 [[query-dsl-match-query-boolean]]
 ==== boolean
 
@@ -40,7 +38,6 @@ data-type mismatches, such as trying to query a numeric field with a text
 query string. Defaults to `false`.
 
 [[query-dsl-match-query-fuzziness]]
-[float]
 ===== Fuzziness
 
 `fuzziness` allows _fuzzy matching_ based on the type of field being queried.
@@ -69,7 +66,6 @@ change in structure, `message` is the field name):
 --------------------------------------------------
 
 [[query-dsl-match-query-zero]]
-[float]
 ===== Zero terms query
 If the analyzer used removes all tokens in a query like a `stop` filter
 does, the default behavior is to match no documents at all. In order to
@@ -90,7 +86,6 @@ change that the `zero_terms_query` option can be used, which accepts
 --------------------------------------------------
 
 [[query-dsl-match-query-cutoff]]
-[float]
 ===== Cutoff frequency
 
 The match query supports a `cutoff_frequency` that allows
@@ -132,7 +127,6 @@ that when trying it out on test indexes with low document numbers you
 should follow the advice in {defguide}/relevance-is-broken.html[Relevance is broken].
 
 [[query-dsl-match-query-phrase]]
-[float]
 ==== phrase
 
 The `match_phrase` query analyzes the text and creates a `phrase` query
@@ -181,9 +175,8 @@ definition, or the default search analyzer, for example:
 }
 --------------------------------------------------
 
-[float]
 [[query-dsl-match-query-phrase-prefix]]
-===== match_phrase_prefix
+==== match_phrase_prefix
 
 The `match_phrase_prefix` is the same as `match_phrase`, except that it
 allows for prefix matches on the last term in the text. For example:
@@ -1,5 +1,5 @@
 [[query-dsl-missing-query]]
-== Missing Query
+=== Missing Query
 
 Returns documents that have only `null` values or no value in the original field:
 
@@ -42,7 +42,7 @@ These documents would *not* match the above filter:
 <3> This field has one non-`null` value.
 
 [float]
-=== `null_value` mapping
+==== `null_value` mapping
 
 If the field mapping includes a `null_value` (see <<mapping-core-types>>) then explicit `null` values
 are replaced with the specified `null_value`. For instance, if the `user` field were mapped
@@ -75,7 +75,7 @@ no values in the `user` field and thus would match the `missing` filter:
 --------------------------------------------------
 
 [float]
-==== `existence` and `null_value` parameters
+===== `existence` and `null_value` parameters
 
 When the field being queried has a `null_value` mapping, then the behaviour of
 the `missing` filter can be altered with the `existence` and `null_value`
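The `existence` and `null_value` parameters described above can be combined in one sketch (the `user` field follows the page's own running example; the `constant_score` wrapper is an illustrative way to apply the filter):

[source,js]
--------------------------------------------------
{
    "constant_score" : {
        "filter" : {
            "missing" : {
                "field" : "user",
                "existence" : true,
                "null_value" : true
            }
        }
    }
}
--------------------------------------------------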
@ -1,5 +1,5 @@
|
||||||
[[query-dsl-mlt-query]]
|
[[query-dsl-mlt-query]]
|
||||||
== More Like This Query
|
=== More Like This Query
|
||||||
|
|
||||||
 The More Like This Query (MLT Query) finds documents that are "like" a given
 set of documents. In order to do so, MLT selects a set of representative terms
@@ -87,7 +87,7 @@ present in the index, the syntax is similar to <<docs-termvectors-artificial-doc
 }
 --------------------------------------------------

-=== How it Works
+==== How it Works

 Suppose we wanted to find all documents similar to a given input document.
 Obviously, the input document itself should be its best match for that type of
@@ -139,14 +139,14 @@ curl -s -XPUT 'http://localhost:9200/imdb/' -d '{
 }
 --------------------------------------------------

-=== Parameters
+==== Parameters

 The only required parameter is `like`, all other parameters have sensible
 defaults. There are three types of parameters: one to specify the document
 input, the other one for term selection and for query formation.

 [float]
-=== Document Input Parameters
+==== Document Input Parameters

 [horizontal]
 `like`:: coming[2.0]
@@ -179,7 +179,7 @@ A list of documents following the same syntax as the <<docs-multi-get,Multi GET

 [float]
 [[mlt-query-term-selection]]
-=== Term Selection Parameters
+==== Term Selection Parameters

 [horizontal]
 `max_query_terms`::
@@ -219,7 +219,7 @@ The analyzer that is used to analyze the free form text. Defaults to the
 analyzer associated with the first field in `fields`.

 [float]
-=== Query Formation Parameters
+==== Query Formation Parameters

 [horizontal]
 `minimum_should_match`::
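To make the parameter discussion above concrete, a minimal `more_like_this` request might look like the following sketch (the `title` and `description` field names and the example text are illustrative, not taken from this diff):

[source,json]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like" : "Once upon a time",
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------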
@@ -1,5 +1,5 @@
 [[query-dsl-multi-match-query]]
-== Multi Match Query
+=== Multi Match Query

 The `multi_match` query builds on the <<query-dsl-match-query,`match` query>>
 to allow multi-field queries:
@@ -17,7 +17,7 @@ to allow multi-field queries:
 <2> The fields to be queried.

 [float]
-=== `fields` and per-field boosting
+==== `fields` and per-field boosting

 Fields can be specified with wildcards, eg:

@@ -47,7 +47,7 @@ Individual fields can be boosted with the caret (`^`) notation:

 [[multi-match-types]]
 [float]
-=== Types of `multi_match` query:
+==== Types of `multi_match` query:

 The way the `multi_match` query is executed internally depends on the `type`
 parameter, which can be set to:
@@ -70,7 +70,7 @@ parameter, which can be set to:
 combines the `_score` from each field. See <<type-phrase>>.

 [[type-best-fields]]
-=== `best_fields`
+==== `best_fields`

 The `best_fields` type is most useful when you are searching for multiple
 words best found in the same field. For instance ``brown fox'' in a single
@@ -121,7 +121,7 @@ and `cutoff_frequency`, as explained in <<query-dsl-match-query, match query>>.
 [IMPORTANT]
 [[operator-min]]
 .`operator` and `minimum_should_match`
-==================================================
+===================================================

 The `best_fields` and `most_fields` types are _field-centric_ -- they generate
 a `match` query *per field*. This means that the `operator` and
@@ -153,10 +153,10 @@ to match.

 See <<type-cross-fields>> for a better solution.

-==================================================
+===================================================

 [[type-most-fields]]
-=== `most_fields`
+==== `most_fields`

 The `most_fields` type is most useful when querying multiple fields that
 contain the same text analyzed in different ways. For instance, the main
@@ -203,7 +203,7 @@ and `cutoff_frequency`, as explained in <<query-dsl-match-query,match query>>, b
 *see <<operator-min>>*.

 [[type-phrase]]
-=== `phrase` and `phrase_prefix`
+==== `phrase` and `phrase_prefix`

 The `phrase` and `phrase_prefix` types behave just like <<type-best-fields>>,
 but they use a `match_phrase` or `match_phrase_prefix` query instead of a
@@ -240,7 +240,7 @@ in <<query-dsl-match-query>>. Type `phrase_prefix` additionally accepts
 `max_expansions`.

 [[type-cross-fields]]
-=== `cross_fields`
+==== `cross_fields`

 The `cross_fields` type is particularly useful with structured documents where
 multiple fields *should* match. For instance, when querying the `first_name`
@@ -317,7 +317,7 @@ Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`,
 `zero_terms_query` and `cutoff_frequency`, as explained in
 <<query-dsl-match-query, match query>>.

-==== `cross_field` and analysis
+===== `cross_field` and analysis

 The `cross_field` type can only work in term-centric mode on fields that have
 the same analyzer. Fields with the same analyzer are grouped together as in
@@ -411,7 +411,7 @@ which will be executed as:
 blended("will", fields: [first, first.edge, last.edge, last])
 blended("smith", fields: [first, first.edge, last.edge, last])

-==== `tie_breaker`
+===== `tie_breaker`

 By default, each per-term `blended` query will use the best score returned by
 any field in a group, then these scores are added together to give the final
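As a concrete sketch of the `best_fields` type and `tie_breaker` parameter discussed above (the `subject` and `message` field names are illustrative):

[source,json]
--------------------------------------------------
{
  "multi_match" : {
    "query":       "brown fox",
    "type":        "best_fields",
    "fields":      [ "subject", "message" ],
    "tie_breaker": 0.3
  }
}
--------------------------------------------------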
@@ -1,5 +1,5 @@
 [[query-dsl-nested-query]]
-== Nested Query
+=== Nested Query

 Nested query allows to query nested objects / docs (see
 <<mapping-nested-type,nested mapping>>). The

@@ -1,5 +1,5 @@
 [[query-dsl-not-query]]
-== Not Query
+=== Not Query

 A query that filters out matched documents using a query. For example:

@@ -1,5 +1,5 @@
 [[query-dsl-or-query]]
-== Or Query
+=== Or Query

 deprecated[2.0.0, Use the `bool` query instead]

@@ -1,5 +1,5 @@
 [[query-dsl-prefix-query]]
-== Prefix Query
+=== Prefix Query

 Matches documents that have fields containing terms with a specified
 prefix (*not analyzed*). The prefix query maps to Lucene `PrefixQuery`.
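For reference, the prefix query described above takes a field name and an un-analyzed prefix, for example (the `user` field is illustrative):

[source,json]
--------------------------------------------------
{
    "prefix" : { "user" : "ki" }
}
--------------------------------------------------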
@@ -1,5 +1,5 @@
 [[query-dsl-query-string-query]]
-== Query String Query
+=== Query String Query

 A query that uses a query parser in order to parse its content. Here is
 an example:
@@ -89,7 +89,7 @@ rewritten using the
 parameter.

 [float]
-=== Default Field
+==== Default Field

 When not explicitly specifying the field to search on in the query
 string syntax, the `index.query.default_field` will be used to derive
@@ -99,7 +99,7 @@ So, if `_all` field is disabled, it might make sense to change it to set
 a different default field.

 [float]
-=== Multi Field
+==== Multi Field

 The `query_string` query can also run against multiple fields. Fields can be
 provided via the `"fields"` parameter (example below).
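A sketch of the multi-field form mentioned above (the `content` and `name` field names are illustrative):

[source,json]
--------------------------------------------------
{
    "query_string" : {
        "fields" : ["content", "name"],
        "query" : "this AND that"
    }
}
--------------------------------------------------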
@@ -1,6 +1,6 @@
 [[query-string-syntax]]

-=== Query string syntax
+==== Query string syntax

 The query string ``mini-language'' is used by the
 <<query-dsl-query-string-query>> and by the
@@ -14,7 +14,7 @@ phrase, in the same order.
 Operators allow you to customize the search -- the available options are
 explained below.

-==== Field names
+===== Field names

 As mentioned in <<query-dsl-query-string-query>>, the `default_field` is searched for the
 search terms, but it is possible to specify other fields in the query syntax:
@@ -46,7 +46,7 @@ search terms, but it is possible to specify other fields in the query syntax:

     _exists_:title

-==== Wildcards
+===== Wildcards

 Wildcard searches can be run on individual terms, using `?` to replace
 a single character, and `*` to replace zero or more characters:
@@ -58,12 +58,12 @@ perform very badly -- just think how many terms need to be queried to
 match the query string `"a* b* c*"`.

 [WARNING]
-======
+=======
 Allowing a wildcard at the beginning of a word (eg `"*ing"`) is particularly
 heavy, because all terms in the index need to be examined, just in case
 they match. Leading wildcards can be disabled by setting
 `allow_leading_wildcard` to `false`.
-======
+=======

 Wildcarded terms are not analyzed by default -- they are lowercased
 (`lowercase_expanded_terms` defaults to `true`) but no further analysis
@@ -72,7 +72,7 @@ is missing some of its letters. However, by setting `analyze_wildcard` to
 `true`, an attempt will be made to analyze wildcarded words before searching
 the term list for matching terms.

-==== Regular expressions
+===== Regular expressions

 Regular expression patterns can be embedded in the query string by
 wrapping them in forward-slashes (`"/"`):
@@ -82,7 +82,7 @@ wrapping them in forward-slashes (`"/"`):
 The supported regular expression syntax is explained in <<regexp-syntax>>.

 [WARNING]
-======
+=======
 The `allow_leading_wildcard` parameter does not have any control over
 regular expressions. A query string such as the following would force
 Elasticsearch to visit every term in the index:
@@ -90,9 +90,9 @@ Elasticsearch to visit every term in the index:
     /.*n/

 Use with caution!
-======
+=======

-==== Fuzziness
+===== Fuzziness

 We can search for terms that are
 similar to, but not exactly like our search terms, using the ``fuzzy''
@@ -112,7 +112,7 @@ sufficient to catch 80% of all human misspellings. It can be specified as:

     quikc~1

-==== Proximity searches
+===== Proximity searches

 While a phrase query (eg `"john smith"`) expects all of the terms in exactly
 the same order, a proximity query allows the specified words to be further
@@ -127,7 +127,7 @@ query string, the more relevant that document is considered to be. When
 compared to the above example query, the phrase `"quick fox"` would be
 considered more relevant than `"quick brown fox"`.

-==== Ranges
+===== Ranges

 Ranges can be specified for date, numeric or string fields. Inclusive ranges
 are specified with square brackets `[min TO max]` and exclusive ranges with
@@ -168,20 +168,20 @@ Ranges with one side unbounded can use the following syntax:
     age:<=10

 [NOTE]
-===================================================================
+====================================================================
 To combine an upper and lower bound with the simplified syntax, you
 would need to join two clauses with an `AND` operator:

     age:(>=10 AND <20)
     age:(+>=10 +<20)

-===================================================================
+====================================================================

 The parsing of ranges in query strings can be complex and error prone. It is
 much more reliable to use an explicit <<query-dsl-range-query,`range` query>>.


-==== Boosting
+===== Boosting

 Use the _boost_ operator `^` to make one term more relevant than another.
 For instance, if we want to find all documents about foxes, but we are
@@ -196,7 +196,7 @@ Boosts can also be applied to phrases or to groups:

     "john smith"^2 (foo bar)^4

-==== Boolean operators
+===== Boolean operators

 By default, all terms are optional, as long as one term matches. A search
 for `foo bar baz` will find any document that contains one or more of
@@ -256,7 +256,7 @@ would look like this:

 ****

-==== Grouping
+===== Grouping

 Multiple terms or clauses can be grouped together with parentheses, to form
 sub-queries:
@@ -268,7 +268,7 @@ of a sub-query:

     status:(active OR pending) title:(full text search)^2

-==== Reserved characters
+===== Reserved characters

 If you need to use any of the characters which function as operators in your
 query itself (and not as operators), then you should escape them with
@@ -290,7 +290,7 @@ index is actually `"wifi"`. Escaping the space will protect it from
 being touched by the query string parser: `"wi\ fi"`.
 ****

-==== Empty Query
+===== Empty Query

 If the query string is empty or only contains whitespaces the query will
 yield an empty result set.
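Several of the constructs covered above can be combined in a single query string, for example (field names illustrative):

    status:active AND title:(quick OR brown) AND date:[2012-01-01 TO 2012-12-31]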
@@ -0,0 +1,77 @@
+[[query-filter-context]]
+== Query and filter context
+
+The behaviour of a query clause depends on whether it is used in _query context_ or
+in _filter context_:
+
+Query context::
++
+--
+A query clause used in query context answers the question ``__How well does this
+document match this query clause?__'' Besides deciding whether or not the
+document matches, the query clause also calculates a `_score` representing how
+well the document matches, relative to other documents.
+
+Query context is in effect whenever a query clause is passed to a `query` parameter,
+such as the `query` parameter in the <<search-request-query,`search`>> API.
+--
+
+Filter context::
++
+--
+In _filter_ context, a query clause answers the question ``__Does this document
+match this query clause?__'' The answer is a simple Yes or No -- no scores are
+calculated. Filter context is mostly used for filtering structured data, e.g.
+
+* __Does this +timestamp+ fall into the range 2015 to 2016?__
+* __Is the +status+ field set to ++"published"++__?
+
+Frequently used filters will be cached automatically by Elasticsearch, to
+speed up performance.
+
+Filter context is in effect whenever a query clause is passed to a `filter`
+parameter, such as the `filter` or `must_not` parameters in the
+<<query-dsl-bool-query,`bool`>> query, the `filter` parameter in the
+<<query-dsl-constant-score-query,`constant_score`>> query, or the
+<<search-aggregations-bucket-filter-aggregation,`filter`>> aggregation.
+--
+
+Below is an example of query clauses being used in query and filter context
+in the `search` API. This query will match documents where all of the following
+conditions are met:
+
+* The `title` field contains the word `search`.
+* The `content` field contains the word `elasticsearch`.
+* The `status` field contains the exact word `published`.
+* The `publish_date` field contains a date from 1 Jan 2015 onwards.
+
+[source,json]
+------------------------------------
+GET _search
+{
+  "query": { <1>
+    "bool": { <2>
+      "must": [
+        { "match": { "title": "Search" }}, <2>
+        { "match": { "content": "Elasticsearch" }} <2>
+      ],
+      "filter": [ <3>
+        { "term": { "status": "published" }}, <4>
+        { "range": { "publish_date": { "gte": "2015-01-01" }}} <4>
+      ]
+    }
+  }
+}
+------------------------------------
+<1> The `query` parameter indicates query context.
+<2> The `bool` and two `match` clauses are used in query context,
+    which means that they are used to score how well each document
+    matches.
+<3> The `filter` parameter indicates filter context.
+<4> The `term` and `range` clauses are used in filter context.
+    They will filter out documents which do not match, but they will
+    not affect the score for matching documents.
+
+TIP: Use query clauses in query context for conditions which should affect the
+score of matching documents (i.e. how well does the document match), and use
+all other query clauses in filter context.
@@ -1,5 +1,5 @@
 [[query-dsl-range-query]]
-== Range Query
+=== Range Query

 Matches documents with fields that have terms within a certain range.
 The type of the Lucene query depends on the field type, for `string`
@@ -30,7 +30,7 @@ The `range` query accepts the following parameters:
 `boost`:: Sets the boost value of the query, defaults to `1.0`

 [float]
-=== Date options
+==== Date options

 When applied on `date` fields the `range` filter accepts also a `time_zone` parameter.
 The `time_zone` parameter will be applied to your input lower and upper bounds and will
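A sketch of a date range with the `time_zone` parameter described above (the `born` field is illustrative):

[source,json]
--------------------------------------------------
{
    "range" : {
        "born" : {
            "gte": "2012-01-01",
            "lte": "2012-12-31",
            "time_zone": "+01:00"
        }
    }
}
--------------------------------------------------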
@@ -1,5 +1,5 @@
 [[query-dsl-regexp-query]]
-== Regexp Query
+=== Regexp Query

 The `regexp` query allows you to use regular expression term queries.
 See <<regexp-syntax>> for details of the supported regular expression language.

@@ -1,17 +1,17 @@
 [[regexp-syntax]]
-=== Regular expression syntax
+==== Regular expression syntax

 Regular expression queries are supported by the `regexp` and the `query_string`
 queries. The Lucene regular expression engine
 is not Perl-compatible but supports a smaller range of operators.

 [NOTE]
-====
+=====
 We will not attempt to explain regular expressions, but
 just explain the supported operators.
-====
+=====

-==== Standard operators
+===== Standard operators

 Anchoring::
 +
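For reference, a minimal `regexp` query looks like this (the `name.first` field and pattern are illustrative):

[source,json]
--------------------------------------------------
{
    "regexp": {
        "name.first": "s.*y"
    }
}
--------------------------------------------------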
@@ -1,5 +1,5 @@
 [[query-dsl-script-query]]
-== Script Query
+=== Script Query

 A query allowing to define
 <<modules-scripting,scripts>> as filters. For
@@ -20,7 +20,7 @@ example:
 ----------------------------------------------

 [float]
-=== Custom Parameters
+==== Custom Parameters

 Scripts are compiled and cached for faster execution. If the same script
 can be used, just with different parameters provided, it is preferable
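The cached-script-with-parameters pattern described above looks roughly like this (the `num1` field and `param1` name are illustrative):

[source,json]
--------------------------------------------------
{
    "filtered" : {
        "filter" : {
            "script" : {
                "script" : "doc['num1'].value > param1",
                "params" : {
                    "param1" : 5
                }
            }
        }
    }
}
--------------------------------------------------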
@@ -1,5 +1,5 @@
 [[query-dsl-simple-query-string-query]]
-== Simple Query String Query
+=== Simple Query String Query

 A query that uses the SimpleQueryParser to parse its context. Unlike the
 regular `query_string` query, the `simple_query_string` query will never
@@ -57,7 +57,7 @@ Defaults to `ROOT`.
 |=======================================================================

 [float]
-==== Simple Query String Syntax
+===== Simple Query String Syntax
 The `simple_query_string` supports the following special characters:

 * `+` signifies AND operation
@@ -73,7 +73,7 @@ In order to search for any of these special characters, they will need to
 be escaped with `\`.

 [float]
-=== Default Field
+==== Default Field
 When not explicitly specifying the field to search on in the query
 string syntax, the `index.query.default_field` will be used to derive
 which field to search on. It defaults to `_all` field.
@@ -82,7 +82,7 @@ So, if `_all` field is disabled, it might make sense to change it to set
 a different default field.

 [float]
-=== Multi Field
+==== Multi Field
 The fields parameter can also include pattern based field names,
 allowing to automatically expand to the relevant fields (dynamically
 introduced fields included). For example:
@@ -98,7 +98,7 @@ introduced fields included). For example:
 --------------------------------------------------

 [float]
-=== Flags
+==== Flags
 `simple_query_string` support multiple flags to specify which parsing features
 should be enabled. It is specified as a `|`-delimited string with the
 `flags` parameter:
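A sketch combining the special characters listed above (the `body` field and `snowball` analyzer are illustrative):

[source,json]
--------------------------------------------------
{
    "simple_query_string" : {
        "query": "\"fried eggs\" +(eggplant | potato) -frittata",
        "analyzer": "snowball",
        "fields": ["body^5", "_all"],
        "default_operator": "and"
    }
}
--------------------------------------------------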
@@ -1,5 +1,5 @@
 [[query-dsl-span-containing-query]]
-== Span Containing Query
+=== Span Containing Query

 Returns matches which enclose another span query. The span containing
 query maps to Lucene `SpanContainingQuery`. Here is an example:

@@ -1,5 +1,5 @@
 [[query-dsl-span-first-query]]
-== Span First Query
+=== Span First Query

 Matches spans near the beginning of a field. The span first query maps
 to Lucene `SpanFirstQuery`. Here is an example:

@@ -1,5 +1,5 @@
 [[query-dsl-span-multi-term-query]]
-== Span Multi Term Query
+=== Span Multi Term Query

 The `span_multi` query allows you to wrap a `multi term query` (one of wildcard,
 fuzzy, prefix, term, range or regexp query) as a `span query`, so

@@ -1,5 +1,5 @@
 [[query-dsl-span-near-query]]
-== Span Near Query
+=== Span Near Query

 Matches spans which are near one another. One can specify _slop_, the
 maximum number of intervening unmatched positions, as well as whether

@@ -1,5 +1,5 @@
 [[query-dsl-span-not-query]]
-== Span Not Query
+=== Span Not Query

 Removes matches which overlap with another span query. The span not
 query maps to Lucene `SpanNotQuery`. Here is an example:

@@ -1,5 +1,5 @@
 [[query-dsl-span-or-query]]
-== Span Or Query
+=== Span Or Query

 Matches the union of its span clauses. The span or query maps to Lucene
 `SpanOrQuery`. Here is an example:
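For reference, a `span_near` query combining several `span_term` clauses with a slop looks like this (the field and values are illustrative):

[source,json]
--------------------------------------------------
{
    "span_near" : {
        "clauses" : [
            { "span_term" : { "field" : "value1" } },
            { "span_term" : { "field" : "value2" } },
            { "span_term" : { "field" : "value3" } }
        ],
        "slop" : 12,
        "in_order" : false
    }
}
--------------------------------------------------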
@@ -0,0 +1,65 @@
+[[span-queries]]
+== Span queries
+
+Span queries are low-level positional queries which provide expert control
+over the order and proximity of the specified terms. These are typically used
+to implement very specific queries on legal documents or patents.
+
+Span queries cannot be mixed with non-span queries (with the exception of the `span_multi` query).
+
+The queries in this group are:
+
+<<query-dsl-span-term-query,`span_term` query>>::
+
+The equivalent of the <<query-dsl-term-query,`term` query>> but for use with
+other span queries.
+
+<<query-dsl-span-multi-term-query,`span_multi` query>>::
+
+Wraps a <<query-dsl-term-query,`term`>>, <<query-dsl-range-query,`range`>>,
+<<query-dsl-prefix-query,`prefix`>>, <<query-dsl-wildcard-query,`wildcard`>>,
+<<query-dsl-regexp-query,`regexp`>>, or <<query-dsl-fuzzy-query,`fuzzy`>> query.
+
+<<query-dsl-span-first-query,`span_first` query>>::
+
+Accepts another span query whose matches must appear within the first N
+positions of the field.
+
+<<query-dsl-span-near-query,`span_near` query>>::
+
+Accepts multiple span queries whose matches must be within the specified distance of each other, and possibly in the same order.
+
+<<query-dsl-span-or-query,`span_or` query>>::
+
+Combines multiple span queries -- returns documents which match any of the
+specified queries.
+
+<<query-dsl-span-not-query,`span_not` query>>::
+
+Wraps another span query, and excludes any documents which match that query.
+
+<<query-dsl-span-containing-query,`span_containing` query>>::
+
+Accepts a list of span queries, but only returns those spans which also match a second span query.
+
+<<query-dsl-span-within-query,`span_within` query>>::
+
+The result from a single span query is returned as long as its span falls
+within the spans returned by a list of other span queries.
+
+
+include::span-term-query.asciidoc[]
+
+include::span-multi-term-query.asciidoc[]
+
+include::span-first-query.asciidoc[]
+
+include::span-near-query.asciidoc[]
+
+include::span-or-query.asciidoc[]
+
+include::span-not-query.asciidoc[]
+
+include::span-containing-query.asciidoc[]
+
+include::span-within-query.asciidoc[]
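The `span_near` semantics described above (one position per clause, in order, with at most `slop` intervening positions) can be illustrated with a toy matcher. This is purely a sketch to show the slop arithmetic; it is not part of this commit and bears no relation to Lucene's actual `SpanNearQuery` implementation.

```java
// Toy illustration of span_near in-order matching: each clause contributes
// a set of token positions; the query matches if positions can be chosen
// in order with the total number of intervening positions <= slop.
public class SpanNearSketch {

    // positions: one sorted array of candidate term positions per clause
    public static boolean matchesInOrder(int[][] positions, int slop) {
        return search(positions, 0, -1, 0, slop);
    }

    private static boolean search(int[][] positions, int clause, int prev, int used, int slop) {
        if (clause == positions.length) {
            return true; // every clause placed within the slop budget
        }
        for (int p : positions[clause]) {
            if (p <= prev) {
                continue; // in-order matching: later clauses need later positions
            }
            int gap = (prev < 0) ? 0 : p - prev - 1; // intervening unmatched positions
            if (used + gap <= slop && search(positions, clause + 1, p, used + gap, slop)) {
                return true;
            }
        }
        return false;
    }
}
```

For example, matching `span_term` clauses for "quick" (position 0) and "fox" (position 2) succeeds with `slop: 1` because exactly one position ("brown") intervenes, but fails with `slop: 0`.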
@@ -1,5 +1,5 @@
 [[query-dsl-span-term-query]]
-== Span Term Query
+=== Span Term Query
 
 Matches spans containing a term. The span term query maps to Lucene
 `SpanTermQuery`. Here is an example:
@@ -1,5 +1,5 @@
 [[query-dsl-span-within-query]]
-== Span Within Query
+=== Span Within Query
 
 Returns matches which are enclosed inside another span query. The span within
 query maps to Lucene `SpanWithinQuery`. Here is an example:
@@ -0,0 +1,29 @@
+[[specialized-queries]]
+
+== Specialized queries
+
+This group contains queries which do not fit into the other groups:
+
+<<query-dsl-mlt-query,`more_like_this` query>>::
+
+This query finds documents which are similar to the specified text, document,
+or collection of documents.
+
+<<query-dsl-template-query,`template` query>>::
+
+The `template` query accepts a Mustache template (either inline, indexed, or
+from a file), and a map of parameters, and combines the two to generate the
+final query to execute.
+
+<<query-dsl-script-query,`script` query>>::
+
+This query allows a script to act as a filter. Also see the
+<<query-dsl-function-score-query,`function_score` query>>.
+
+
+include::mlt-query.asciidoc[]
+
+include::template-query.asciidoc[]
+
+include::script-query.asciidoc[]
+
@@ -1,5 +1,5 @@
 [[query-dsl-template-query]]
-== Template Query
+=== Template Query
 
 A query that accepts a query template and a map of key/value pairs to fill in
 template parameters. Templating is based on Mustache. For simple token substitution all you provide
@@ -56,7 +56,7 @@ GET /_search
 <1> New line characters (`\n`) should be escaped as `\\n` or removed,
 and quotes (`"`) should be escaped as `\\"`.
 
-=== Stored templates
+==== Stored templates
 
 You can register a template by storing it in the `config/scripts` directory, in a file using the `.mustache` extension.
 In order to execute the stored template, reference it by name in the `file`
@@ -0,0 +1,93 @@
+[[term-level-queries]]
+== Term level queries
+
+While the <<full-text-queries,full text queries>> will analyze the query
+string before executing, the _term-level queries_ operate on the exact terms
+that are stored in the inverted index.
+
+These queries are usually used for structured data like numbers, dates, and
+enums, rather than full text fields. Alternatively, they allow you to craft
+low-level queries, foregoing the analysis process.
+
+The queries in this group are:
+
+<<query-dsl-term-query,`term` query>>::
+
+Find documents which contain the exact term specified in the field
+specified.
+
+<<query-dsl-terms-query,`terms` query>>::
+
+Find documents which contain any of the exact terms specified in the field
+specified.
+
+<<query-dsl-range-query,`range` query>>::
+
+Find documents where the field specified contains values (dates, numbers,
+or strings) in the range specified.
+
+<<query-dsl-exists-query,`exists` query>>::
+
+Find documents where the field specified contains any non-null value.
+
+<<query-dsl-missing-query,`missing` query>>::
+
+Find documents where the field specified is missing or contains only
+`null` values.
+
+<<query-dsl-prefix-query,`prefix` query>>::
+
+Find documents where the field specified contains terms which begin with
+the exact prefix specified.
+
+<<query-dsl-wildcard-query,`wildcard` query>>::
+
+Find documents where the field specified contains terms which match the
+pattern specified, where the pattern supports single character wildcards
+(`?`) and multi-character wildcards (`*`).
+
+<<query-dsl-regexp-query,`regexp` query>>::
+
+Find documents where the field specified contains terms which match the
+<<regexp-syntax,regular expression>> specified.
+
+<<query-dsl-fuzzy-query,`fuzzy` query>>::
+
+Find documents where the field specified contains terms which are fuzzily
+similar to the specified term. Fuzziness is measured as a
+http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance[Levenshtein edit distance]
+of 1 or 2.
+
+<<query-dsl-type-query,`type` query>>::
+
+Find documents of the specified type.
+
+<<query-dsl-ids-query,`ids` query>>::
+
+Find documents with the specified type and IDs.
+
+
+include::term-query.asciidoc[]
+
+include::terms-query.asciidoc[]
+
+include::range-query.asciidoc[]
+
+include::exists-query.asciidoc[]
+
+include::missing-query.asciidoc[]
+
+include::prefix-query.asciidoc[]
+
+include::wildcard-query.asciidoc[]
+
+include::regexp-query.asciidoc[]
+
+include::fuzzy-query.asciidoc[]
+
+include::type-query.asciidoc[]
+
+include::ids-query.asciidoc[]
+
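The `fuzzy` query description above defines fuzziness as a Damerau-Levenshtein edit distance of 1 or 2. For illustration only, here is a textbook optimal-string-alignment implementation of that distance; Elasticsearch itself uses Lucene's automaton-based matching, not this algorithm.

```java
// Optimal string alignment variant of Damerau-Levenshtein distance:
// insertions, deletions, substitutions, and adjacent transpositions
// each cost 1.
public class EditDistance {

    public static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i; // delete all of a's prefix
        for (int j = 0; j <= b.length(); j++) d[0][j] = j; // insert all of b's prefix
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
                if (i > 1 && j > 1 && a.charAt(i - 1) == b.charAt(j - 2)
                        && a.charAt(i - 2) == b.charAt(j - 1)) {
                    d[i][j] = Math.min(d[i][j], d[i - 2][j - 2] + 1); // transposition
                }
            }
        }
        return d[a.length()][b.length()];
    }
}
```

Under this measure, "quikc" is within distance 1 of "quick" (a single transposition), so it would fall inside the fuzziness limits the documentation describes.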
@@ -1,5 +1,5 @@
 [[query-dsl-term-query]]
-== Term Query
+=== Term Query
 
 The `term` query finds documents that contain the *exact* term specified
 in the inverted index. For instance:
@@ -1,5 +1,5 @@
 [[query-dsl-terms-query]]
-== Terms Query
+=== Terms Query
 
 Filters documents that have fields that match any of the provided terms
 (*not analyzed*). For example:
@@ -19,7 +19,8 @@ The `terms` query is also aliased with `in` as the filter name for
 simpler usage.
 
 [float]
-==== Terms lookup mechanism
+[[query-dsl-terms-lookup]]
+===== Terms lookup mechanism
 
 When it's needed to specify a `terms` filter with a lot of terms it can
 be beneficial to fetch those term values from a document in an index. A
@@ -31,21 +32,21 @@ lookup mechanism.
 The terms lookup mechanism supports the following options:
 
 [horizontal]
 `index`::
     The index to fetch the term values from. Defaults to the
     current index.
 
 `type`::
     The type to fetch the term values from.
 
 `id`::
     The id of the document to fetch the term values from.
 
 `path`::
     The field specified as path to fetch the actual values for the
     `terms` filter.
 
 `routing`::
     A custom routing value to be used when retrieving the
     external terms doc.
 
@@ -61,7 +62,7 @@ terms filter will prefer to execute the get request on a local node if
 possible, reducing the need for networking.
 
 [float]
-==== Terms lookup twitter example
+===== Terms lookup twitter example
 
 [source,js]
 --------------------------------------------------
@@ -1,5 +1,5 @@
 [[query-dsl-type-query]]
-== Type Query
+=== Type Query
 
 Filters documents matching the provided document / mapping type.
 
@@ -1,5 +1,5 @@
 [[query-dsl-wildcard-query]]
-== Wildcard Query
+=== Wildcard Query
 
 Matches documents that have fields matching a wildcard expression (*not
 analyzed*). Supported wildcards are `*`, which matches any character
@@ -260,7 +260,7 @@ public class MapperQueryParser extends QueryParser {
             }
         }
         if (query == null) {
-            query = super.getFieldQuery(currentMapper.names().indexName(), queryText, quoted);
+            query = super.getFieldQuery(currentMapper.fieldType().names().indexName(), queryText, quoted);
         }
         return query;
     }
@@ -372,7 +372,7 @@ public class MapperQueryParser extends QueryParser {
         Query rangeQuery;
         if (currentMapper instanceof DateFieldMapper && settings.timeZone() != null) {
             DateFieldMapper dateFieldMapper = (DateFieldMapper) this.currentMapper;
-            rangeQuery = dateFieldMapper.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null, parseContext);
+            rangeQuery = dateFieldMapper.fieldType().rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null, parseContext);
         } else {
             rangeQuery = currentMapper.rangeQuery(part1, part2, startInclusive, endInclusive, parseContext);
         }
@@ -508,7 +508,7 @@ public class MapperQueryParser extends QueryParser {
             query = currentMapper.prefixQuery(termStr, multiTermRewriteMethod, parseContext);
         }
         if (query == null) {
-            query = getPossiblyAnalyzedPrefixQuery(currentMapper.names().indexName(), termStr);
+            query = getPossiblyAnalyzedPrefixQuery(currentMapper.fieldType().names().indexName(), termStr);
         }
         return query;
     }
@@ -644,7 +644,7 @@ public class MapperQueryParser extends QueryParser {
             if (!forcedAnalyzer) {
                 setAnalyzer(parseContext.getSearchAnalyzer(currentMapper));
             }
-            indexedNameField = currentMapper.names().indexName();
+            indexedNameField = currentMapper.fieldType().names().indexName();
             return getPossiblyAnalyzedWildcardQuery(indexedNameField, termStr);
         }
         return getPossiblyAnalyzedWildcardQuery(indexedNameField, termStr);
@@ -32,6 +32,11 @@ public class TimestampParsingException extends ElasticsearchException {
         this.timestamp = timestamp;
     }
 
+    public TimestampParsingException(String timestamp, Throwable cause) {
+        super("failed to parse timestamp [" + timestamp + "]", cause);
+        this.timestamp = timestamp;
+    }
+
     public String timestamp() {
         return timestamp;
     }
@@ -113,8 +113,8 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
                 if (fieldMapper.isNumeric()) {
                     throw new IllegalArgumentException("Can't process field [" + request.field() + "], Analysis requests are not supported on numeric fields");
                 }
-                analyzer = fieldMapper.indexAnalyzer();
-                field = fieldMapper.names().indexName();
+                analyzer = fieldMapper.fieldType().indexAnalyzer();
+                field = fieldMapper.fieldType().names().indexName();
 
             }
         }
@@ -179,7 +179,7 @@ public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomO
         for (String field : request.fields()) {
             if (Regex.isMatchAllPattern(field)) {
                 for (FieldMapper fieldMapper : allFieldMappers) {
-                    addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
+                    addFieldMapper(fieldMapper.fieldType().names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
                 }
             } else if (Regex.isSimpleMatchPattern(field)) {
                 // go through the field mappers 3 times, to make sure we give preference to the resolve order: full name, index name, name.
@@ -187,22 +187,22 @@ public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomO
                 Collection<FieldMapper> remainingFieldMappers = Lists.newLinkedList(allFieldMappers);
                 for (Iterator<FieldMapper> it = remainingFieldMappers.iterator(); it.hasNext(); ) {
                     final FieldMapper fieldMapper = it.next();
-                    if (Regex.simpleMatch(field, fieldMapper.names().fullName())) {
-                        addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
+                    if (Regex.simpleMatch(field, fieldMapper.fieldType().names().fullName())) {
+                        addFieldMapper(fieldMapper.fieldType().names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
                         it.remove();
                     }
                 }
                 for (Iterator<FieldMapper> it = remainingFieldMappers.iterator(); it.hasNext(); ) {
                     final FieldMapper fieldMapper = it.next();
-                    if (Regex.simpleMatch(field, fieldMapper.names().indexName())) {
-                        addFieldMapper(fieldMapper.names().indexName(), fieldMapper, fieldMappings, request.includeDefaults());
+                    if (Regex.simpleMatch(field, fieldMapper.fieldType().names().indexName())) {
+                        addFieldMapper(fieldMapper.fieldType().names().indexName(), fieldMapper, fieldMappings, request.includeDefaults());
                         it.remove();
                     }
                 }
                 for (Iterator<FieldMapper> it = remainingFieldMappers.iterator(); it.hasNext(); ) {
                     final FieldMapper fieldMapper = it.next();
-                    if (Regex.simpleMatch(field, fieldMapper.names().shortName())) {
-                        addFieldMapper(fieldMapper.names().shortName(), fieldMapper, fieldMappings, request.includeDefaults());
+                    if (Regex.simpleMatch(field, fieldMapper.fieldType().names().shortName())) {
+                        addFieldMapper(fieldMapper.fieldType().names().shortName(), fieldMapper, fieldMappings, request.includeDefaults());
                         it.remove();
                     }
                 }
@@ -229,7 +229,7 @@ public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomO
             builder.startObject();
             fieldMapper.toXContent(builder, includeDefaults ? includeDefaultsParams : ToXContent.EMPTY_PARAMS);
             builder.endObject();
-            fieldMappings.put(field, new FieldMappingMetaData(fieldMapper.names().fullName(), builder.bytes()));
+            fieldMappings.put(field, new FieldMappingMetaData(fieldMapper.fieldType().names().fullName(), builder.bytes()));
         } catch (IOException e) {
             throw new ElasticsearchException("failed to serialize XContent of field [" + field + "]", e);
         }
@@ -332,7 +332,7 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite
                     } else {
                         throw new IllegalArgumentException("Action/metadata line [" + line + "] contains an unknown parameter [" + currentFieldName + "]");
                     }
-                } else {
+                } else if (token != XContentParser.Token.VALUE_NULL) {
                     throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected a simple value for field [" + currentFieldName + "] but found [" + token + "]");
                 }
             }
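The `BulkRequest` change above makes the action/metadata parser tolerate explicit `null` values instead of rejecting the whole line. The behavioural difference can be sketched with a simplified stand-in that works on a plain map rather than an XContent token stream; the field names here are illustrative and this is not the actual parser.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the new leniency: a metadata field whose value is null is
// skipped (analogous to token == VALUE_NULL), while any other non-simple
// value still triggers a parse error.
public class ActionMetadataSketch {

    public static Map<String, String> parse(Map<String, Object> line) {
        Map<String, String> meta = new HashMap<>();
        for (Map.Entry<String, Object> e : line.entrySet()) {
            Object v = e.getValue();
            if (v == null) {
                continue; // previously this fell into the "malformed line" branch
            }
            if (!(v instanceof String)) {
                throw new IllegalArgumentException(
                        "expected a simple value for field [" + e.getKey() + "]");
            }
            meta.put(e.getKey(), (String) v);
        }
        return meta;
    }
}
```

With this relaxation, a bulk action line such as `{"index": {"_index": "idx", "_routing": null}}` parses successfully, with the null routing simply ignored.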
@@ -161,19 +161,11 @@ public class MappingMetaData extends AbstractDiffable<MappingMetaData> {
     public static class Timestamp {
 
         public static String parseStringTimestamp(String timestampAsString, FormatDateTimeFormatter dateTimeFormatter) throws TimestampParsingException {
-            long ts;
             try {
-                // if we manage to parse it, its a millisecond timestamp, just return the string as is
-                ts = Long.parseLong(timestampAsString);
-                return timestampAsString;
-            } catch (NumberFormatException e) {
-                try {
-                    ts = dateTimeFormatter.parser().parseMillis(timestampAsString);
-                } catch (RuntimeException e1) {
-                    throw new TimestampParsingException(timestampAsString);
-                }
+                return Long.toString(dateTimeFormatter.parser().parseMillis(timestampAsString));
+            } catch (RuntimeException e) {
+                throw new TimestampParsingException(timestampAsString, e);
             }
-            return Long.toString(ts);
         }
 
 
@@ -289,7 +281,7 @@ public class MappingMetaData extends AbstractDiffable<MappingMetaData> {
         this.id = new Id(docMapper.idFieldMapper().path());
         this.routing = new Routing(docMapper.routingFieldMapper().required(), docMapper.routingFieldMapper().path());
         this.timestamp = new Timestamp(docMapper.timestampFieldMapper().enabled(), docMapper.timestampFieldMapper().path(),
-                docMapper.timestampFieldMapper().dateTimeFormatter().format(), docMapper.timestampFieldMapper().defaultTimestamp(),
+                docMapper.timestampFieldMapper().fieldType().dateTimeFormatter().format(), docMapper.timestampFieldMapper().defaultTimestamp(),
                 docMapper.timestampFieldMapper().ignoreMissing());
         this.hasParentField = docMapper.parentFieldMapper().active();
     }
@@ -728,7 +728,7 @@ public abstract class ShapeBuilder implements ToXContent {
         Distance radius = null;
         CoordinateNode node = null;
         GeometryCollectionBuilder geometryCollections = null;
-        Orientation requestedOrientation = (shapeMapper == null) ? Orientation.RIGHT : shapeMapper.orientation();
+        Orientation requestedOrientation = (shapeMapper == null) ? Orientation.RIGHT : shapeMapper.fieldType().orientation();
 
         XContentParser.Token token;
         while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
@@ -19,14 +19,14 @@
 
 package org.elasticsearch.common.joda;
 
-import org.apache.commons.lang3.StringUtils;
 import org.elasticsearch.ElasticsearchParseException;
 import org.joda.time.DateTimeZone;
 import org.joda.time.MutableDateTime;
 import org.joda.time.format.DateTimeFormatter;
 
 import java.util.concurrent.Callable;
-import java.util.concurrent.TimeUnit;
+
+import static com.google.common.base.Preconditions.checkNotNull;
 
 /**
  * A parser for date/time formatted text with optional date math.
@@ -38,13 +38,10 @@ import java.util.concurrent.TimeUnit;
 public class DateMathParser {
 
     private final FormatDateTimeFormatter dateTimeFormatter;
-    private final TimeUnit timeUnit;
 
-    public DateMathParser(FormatDateTimeFormatter dateTimeFormatter, TimeUnit timeUnit) {
-        if (dateTimeFormatter == null) throw new NullPointerException();
-        if (timeUnit == null) throw new NullPointerException();
+    public DateMathParser(FormatDateTimeFormatter dateTimeFormatter) {
+        checkNotNull(dateTimeFormatter);
         this.dateTimeFormatter = dateTimeFormatter;
-        this.timeUnit = timeUnit;
     }
 
     public long parse(String text, Callable<Long> now) {
@@ -195,17 +192,6 @@ public class DateMathParser {
     }
 
     private long parseDateTime(String value, DateTimeZone timeZone) {
-
-        // first check for timestamp
-        if (value.length() > 4 && StringUtils.isNumeric(value)) {
-            try {
-                long time = Long.parseLong(value);
-                return timeUnit.toMillis(time);
-            } catch (NumberFormatException e) {
-                throw new ElasticsearchParseException("failed to parse date field [" + value + "] as timestamp", e);
-            }
-        }
-
         DateTimeFormatter parser = dateTimeFormatter.parser();
         if (timeZone != null) {
             parser = parser.withZone(timeZone);
@ -27,6 +27,7 @@ import org.joda.time.field.ScaledDurationField;
|
||||||
import org.joda.time.format.*;
|
import org.joda.time.format.*;
|
||||||
|
|
||||||
import java.util.Locale;
|
import java.util.Locale;
|
||||||
|
import java.util.regex.Pattern;
|
||||||
|
|
||||||
/**
|
/**
|
||||||
*
|
*
|
||||||
|
@ -133,6 +134,10 @@ public class Joda {
|
||||||
formatter = ISODateTimeFormat.yearMonth();
|
formatter = ISODateTimeFormat.yearMonth();
|
||||||
} else if ("yearMonthDay".equals(input) || "year_month_day".equals(input)) {
|
} else if ("yearMonthDay".equals(input) || "year_month_day".equals(input)) {
|
||||||
formatter = ISODateTimeFormat.yearMonthDay();
|
formatter = ISODateTimeFormat.yearMonthDay();
|
||||||
|
} else if ("epoch_second".equals(input)) {
|
||||||
|
formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(false)).toFormatter();
|
||||||
|
} else if ("epoch_millis".equals(input)) {
|
||||||
|
formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(true)).toFormatter();
|
||||||
} else if (Strings.hasLength(input) && input.contains("||")) {
|
} else if (Strings.hasLength(input) && input.contains("||")) {
|
||||||
String[] formats = Strings.delimitedListToStringArray(input, "||");
|
String[] formats = Strings.delimitedListToStringArray(input, "||");
|
||||||
DateTimeParser[] parsers = new DateTimeParser[formats.length];
|
DateTimeParser[] parsers = new DateTimeParser[formats.length];
@@ -192,4 +197,50 @@ public class Joda {
             return new OffsetDateTimeField(new DividedDateTimeField(new OffsetDateTimeField(chronology.monthOfYear(), -1), QuarterOfYear, 3), 1);
         }
     };
+
+    public static class EpochTimeParser implements DateTimeParser {
+
+        private static final Pattern MILLI_SECOND_PRECISION_PATTERN = Pattern.compile("^\\d{1,13}$");
+        private static final Pattern SECOND_PRECISION_PATTERN = Pattern.compile("^\\d{1,10}$");
+
+        private final boolean hasMilliSecondPrecision;
+        private final Pattern pattern;
+
+        public EpochTimeParser(boolean hasMilliSecondPrecision) {
+            this.hasMilliSecondPrecision = hasMilliSecondPrecision;
+            this.pattern = hasMilliSecondPrecision ? MILLI_SECOND_PRECISION_PATTERN : SECOND_PRECISION_PATTERN;
+        }
+
+        @Override
+        public int estimateParsedLength() {
+            return hasMilliSecondPrecision ? 13 : 10;
+        }
+
+        @Override
+        public int parseInto(DateTimeParserBucket bucket, String text, int position) {
+            if (text.length() > estimateParsedLength() ||
+                    // timestamps have to have UTC timezone
+                    bucket.getZone() != DateTimeZone.UTC ||
+                    pattern.matcher(text).matches() == false) {
+                return -1;
+            }
+
+            int factor = hasMilliSecondPrecision ? 1 : 1000;
+            try {
+                long millis = Long.valueOf(text) * factor;
+                DateTime dt = new DateTime(millis, DateTimeZone.UTC);
+                bucket.saveField(DateTimeFieldType.year(), dt.getYear());
+                bucket.saveField(DateTimeFieldType.monthOfYear(), dt.getMonthOfYear());
+                bucket.saveField(DateTimeFieldType.dayOfMonth(), dt.getDayOfMonth());
+                bucket.saveField(DateTimeFieldType.hourOfDay(), dt.getHourOfDay());
+                bucket.saveField(DateTimeFieldType.minuteOfHour(), dt.getMinuteOfHour());
+                bucket.saveField(DateTimeFieldType.secondOfMinute(), dt.getSecondOfMinute());
+                bucket.saveField(DateTimeFieldType.millisOfSecond(), dt.getMillisOfSecond());
+                bucket.setZone(DateTimeZone.UTC);
+            } catch (Exception e) {
+                return -1;
+            }
+            return text.length();
+        }
+    };
 }
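The `EpochTimeParser` added above distinguishes epoch-seconds from epoch-milliseconds input by digit count (up to 10 vs. up to 13 digits) and multiplies seconds by 1000 before building a `DateTime`. A minimal standalone sketch of that conversion logic, using hypothetical names (`EpochSketch`, `toMillis`) that are not part of this change:

```java
import java.util.regex.Pattern;

// Sketch of the precision logic in EpochTimeParser: a value with up to 10
// digits is treated as epoch seconds, up to 13 digits as epoch milliseconds.
class EpochSketch {
    private static final Pattern SECONDS = Pattern.compile("^\\d{1,10}$");
    private static final Pattern MILLIS = Pattern.compile("^\\d{1,13}$");

    static long toMillis(String text, boolean hasMilliSecondPrecision) {
        Pattern p = hasMilliSecondPrecision ? MILLIS : SECONDS;
        if (!p.matcher(text).matches()) {
            // the real parser signals failure by returning -1 from parseInto
            throw new IllegalArgumentException("not an epoch timestamp: " + text);
        }
        // seconds-precision input must be scaled up to milliseconds
        int factor = hasMilliSecondPrecision ? 1 : 1000;
        return Long.parseLong(text) * factor;
    }

    public static void main(String[] args) {
        System.out.println(toMillis("1420070400", false));   // epoch seconds, scaled by 1000
        System.out.println(toMillis("1420070400000", true)); // epoch millis, unchanged
    }
}
```

Both calls above resolve to the same instant; only the input precision differs.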
@@ -20,7 +20,7 @@
 package org.elasticsearch.index.fielddata;
 
 import org.elasticsearch.common.settings.Settings;
-import org.elasticsearch.index.mapper.FieldMapper.Loading;
+import org.elasticsearch.index.mapper.MappedFieldType.Loading;
 
 /**
  */

@@ -32,6 +32,7 @@ import org.elasticsearch.index.Index;
 import org.elasticsearch.index.IndexComponent;
 import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.settings.IndexSettings;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;

@@ -77,7 +78,7 @@ public interface IndexFieldData<FD extends AtomicFieldData> extends IndexCompone
     /**
     * The field name.
     */
-    FieldMapper.Names getFieldNames();
+    MappedFieldType.Names getFieldNames();
 
     /**
     * The field data type.

@@ -23,6 +23,7 @@ import org.apache.lucene.index.LeafReaderContext;
 import org.apache.lucene.index.IndexReader;
 import org.apache.lucene.util.Accountable;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 
 /**
  * A simple field data cache abstraction on the *index* level.

@@ -47,9 +48,9 @@ public interface IndexFieldDataCache {
 
     interface Listener {
 
-        void onLoad(FieldMapper.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage);
+        void onLoad(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage);
 
-        void onUnload(FieldMapper.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes);
+        void onUnload(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes);
     }
 
     class None implements IndexFieldDataCache {

@@ -32,6 +32,7 @@ import org.elasticsearch.index.AbstractIndexComponent;
 import org.elasticsearch.index.Index;
 import org.elasticsearch.index.fielddata.plain.*;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.core.BooleanFieldMapper;
 import org.elasticsearch.index.mapper.internal.IndexFieldMapper;
 import org.elasticsearch.index.mapper.internal.ParentFieldMapper;

@@ -46,6 +47,8 @@ import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentMap;
 
+import static org.elasticsearch.index.mapper.MappedFieldType.Names;
+
 /**
 */
 public class IndexFieldDataService extends AbstractIndexComponent {

@@ -226,12 +229,12 @@ public class IndexFieldDataService extends AbstractIndexComponent {
 
     @SuppressWarnings("unchecked")
     public <IFD extends IndexFieldData<?>> IFD getForField(FieldMapper mapper) {
-        final FieldMapper.Names fieldNames = mapper.names();
-        final FieldDataType type = mapper.fieldDataType();
+        final Names fieldNames = mapper.fieldType().names();
+        final FieldDataType type = mapper.fieldType().fieldDataType();
         if (type == null) {
             throw new IllegalArgumentException("found no fielddata type for field [" + fieldNames.fullName() + "]");
         }
-        final boolean docValues = mapper.hasDocValues();
+        final boolean docValues = mapper.fieldType().hasDocValues();
         final String key = fieldNames.indexName();
         IndexFieldData<?> fieldData = loadedFieldData.get(key);
         if (fieldData == null) {
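The change repeated throughout these hunks moves per-field metadata (names, field data type, doc values) off `FieldMapper` and onto a `MappedFieldType` that the mapper delegates to, so call sites go through `mapper.fieldType()`. A simplified, hypothetical miniature of that delegation shape (not the actual Elasticsearch classes):

```java
// Hypothetical miniature of the refactoring: callers that used to ask the
// mapper directly (mapper.names(), mapper.hasDocValues()) now go through
// mapper.fieldType(), which owns the per-field metadata.
class MappedFieldType {
    private final String name;
    private final boolean docValues;

    MappedFieldType(String name, boolean docValues) {
        this.name = name;
        this.docValues = docValues;
    }

    String name() { return name; }
    boolean hasDocValues() { return docValues; }
}

class FieldMapper {
    private final MappedFieldType fieldType;

    FieldMapper(MappedFieldType fieldType) { this.fieldType = fieldType; }

    // single accessor replaces the old direct metadata getters
    MappedFieldType fieldType() { return fieldType; }
}
```

Under this shape, a call site changes from `mapper.names()` to `mapper.fieldType().name()`, which is the mechanical rewrite the remaining hunks apply.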
@ -26,7 +26,7 @@ import org.elasticsearch.common.metrics.CounterMetric;
|
||||||
import org.elasticsearch.common.regex.Regex;
|
import org.elasticsearch.common.regex.Regex;
|
||||||
import org.elasticsearch.common.settings.Settings;
|
import org.elasticsearch.common.settings.Settings;
|
||||||
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
|
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
import org.elasticsearch.index.settings.IndexSettings;
|
import org.elasticsearch.index.settings.IndexSettings;
|
||||||
import org.elasticsearch.index.shard.AbstractIndexShardComponent;
|
import org.elasticsearch.index.shard.AbstractIndexShardComponent;
|
||||||
import org.elasticsearch.index.shard.ShardId;
|
import org.elasticsearch.index.shard.ShardId;
|
||||||
|
@ -62,7 +62,7 @@ public class ShardFieldData extends AbstractIndexShardComponent implements Index
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void onLoad(FieldMapper.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage) {
|
public void onLoad(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage) {
|
||||||
totalMetric.inc(ramUsage.ramBytesUsed());
|
totalMetric.inc(ramUsage.ramBytesUsed());
|
||||||
String keyFieldName = fieldNames.indexName();
|
String keyFieldName = fieldNames.indexName();
|
||||||
CounterMetric total = perFieldTotals.get(keyFieldName);
|
CounterMetric total = perFieldTotals.get(keyFieldName);
|
||||||
|
@ -79,7 +79,7 @@ public class ShardFieldData extends AbstractIndexShardComponent implements Index
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public void onUnload(FieldMapper.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes) {
|
public void onUnload(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes) {
|
||||||
if (wasEvicted) {
|
if (wasEvicted) {
|
||||||
evictionsMetric.inc();
|
evictionsMetric.inc();
|
||||||
}
|
}
|
||||||
|
|
|
@ -31,6 +31,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
||||||
import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData;
|
import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
|
||||||
import java.util.Collection;
|
import java.util.Collection;
|
||||||
|
@ -41,11 +42,11 @@ import java.util.Collections;
|
||||||
*/
|
*/
|
||||||
public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponent implements IndexOrdinalsFieldData, Accountable {
|
public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponent implements IndexOrdinalsFieldData, Accountable {
|
||||||
|
|
||||||
private final FieldMapper.Names fieldNames;
|
private final MappedFieldType.Names fieldNames;
|
||||||
private final FieldDataType fieldDataType;
|
private final FieldDataType fieldDataType;
|
||||||
private final long memorySizeInBytes;
|
private final long memorySizeInBytes;
|
||||||
|
|
||||||
protected GlobalOrdinalsIndexFieldData(Index index, Settings settings, FieldMapper.Names fieldNames, FieldDataType fieldDataType, long memorySizeInBytes) {
|
protected GlobalOrdinalsIndexFieldData(Index index, Settings settings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType, long memorySizeInBytes) {
|
||||||
super(index, settings);
|
super(index, settings);
|
||||||
this.fieldNames = fieldNames;
|
this.fieldNames = fieldNames;
|
||||||
this.fieldDataType = fieldDataType;
|
this.fieldDataType = fieldDataType;
|
||||||
|
@ -68,7 +69,7 @@ public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponen
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public FieldMapper.Names getFieldNames() {
|
public MappedFieldType.Names getFieldNames() {
|
||||||
return fieldNames;
|
return fieldNames;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -28,6 +28,7 @@ import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData;
|
||||||
import org.elasticsearch.index.fielddata.FieldDataType;
|
import org.elasticsearch.index.fielddata.FieldDataType;
|
||||||
import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData;
|
import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
|
|
||||||
import java.util.Collection;
|
import java.util.Collection;
|
||||||
|
|
||||||
|
@ -38,7 +39,7 @@ final class InternalGlobalOrdinalsIndexFieldData extends GlobalOrdinalsIndexFiel
|
||||||
|
|
||||||
private final Atomic[] atomicReaders;
|
private final Atomic[] atomicReaders;
|
||||||
|
|
||||||
InternalGlobalOrdinalsIndexFieldData(Index index, Settings settings, FieldMapper.Names fieldNames, FieldDataType fieldDataType, AtomicOrdinalsFieldData[] segmentAfd, OrdinalMap ordinalMap, long memorySizeInBytes) {
|
InternalGlobalOrdinalsIndexFieldData(Index index, Settings settings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType, AtomicOrdinalsFieldData[] segmentAfd, OrdinalMap ordinalMap, long memorySizeInBytes) {
|
||||||
super(index, settings, fieldNames, fieldDataType, memorySizeInBytes);
|
super(index, settings, fieldNames, fieldDataType, memorySizeInBytes);
|
||||||
this.atomicReaders = new Atomic[segmentAfd.length];
|
this.atomicReaders = new Atomic[segmentAfd.length];
|
||||||
for (int i = 0; i < segmentAfd.length; i++) {
|
for (int i = 0; i < segmentAfd.length; i++) {
|
||||||
|
|
|
@ -30,6 +30,7 @@ import org.elasticsearch.index.AbstractIndexComponent;
|
||||||
import org.elasticsearch.index.Index;
|
import org.elasticsearch.index.Index;
|
||||||
import org.elasticsearch.index.fielddata.*;
|
import org.elasticsearch.index.fielddata.*;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
import org.elasticsearch.index.settings.IndexSettings;
|
import org.elasticsearch.index.settings.IndexSettings;
|
||||||
|
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
|
@ -38,11 +39,11 @@ import java.io.IOException;
|
||||||
*/
|
*/
|
||||||
public abstract class AbstractIndexFieldData<FD extends AtomicFieldData> extends AbstractIndexComponent implements IndexFieldData<FD> {
|
public abstract class AbstractIndexFieldData<FD extends AtomicFieldData> extends AbstractIndexComponent implements IndexFieldData<FD> {
|
||||||
|
|
||||||
private final FieldMapper.Names fieldNames;
|
private final MappedFieldType.Names fieldNames;
|
||||||
protected final FieldDataType fieldDataType;
|
protected final FieldDataType fieldDataType;
|
||||||
protected final IndexFieldDataCache cache;
|
protected final IndexFieldDataCache cache;
|
||||||
|
|
||||||
public AbstractIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames, FieldDataType fieldDataType, IndexFieldDataCache cache) {
|
public AbstractIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType, IndexFieldDataCache cache) {
|
||||||
super(index, indexSettings);
|
super(index, indexSettings);
|
||||||
this.fieldNames = fieldNames;
|
this.fieldNames = fieldNames;
|
||||||
this.fieldDataType = fieldDataType;
|
this.fieldDataType = fieldDataType;
|
||||||
|
@ -50,7 +51,7 @@ public abstract class AbstractIndexFieldData<FD extends AtomicFieldData> extends
|
||||||
}
|
}
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public FieldMapper.Names getFieldNames() {
|
public MappedFieldType.Names getFieldNames() {
|
||||||
return this.fieldNames;
|
return this.fieldNames;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -28,7 +28,7 @@ import org.elasticsearch.common.settings.Settings;
|
||||||
import org.elasticsearch.index.Index;
|
import org.elasticsearch.index.Index;
|
||||||
import org.elasticsearch.index.fielddata.*;
|
import org.elasticsearch.index.fielddata.*;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
|
|
|
@ -29,7 +29,7 @@ import org.elasticsearch.index.fielddata.*;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
||||||
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
|
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
|
||||||
import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder;
|
import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
|
||||||
|
|
|
@ -25,7 +25,7 @@ import org.elasticsearch.index.fielddata.FieldDataType;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData;
|
import org.elasticsearch.index.fielddata.IndexFieldData;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
||||||
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
|
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
|
||||||
public class BinaryDVIndexFieldData extends DocValuesIndexFieldData implements IndexFieldData<BinaryDVAtomicFieldData> {
|
public class BinaryDVIndexFieldData extends DocValuesIndexFieldData implements IndexFieldData<BinaryDVAtomicFieldData> {
|
||||||
|
|
|
@ -39,7 +39,7 @@ import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;
|
||||||
import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorSource;
|
import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorSource;
|
||||||
import org.elasticsearch.index.fielddata.fieldcomparator.FloatValuesComparatorSource;
|
import org.elasticsearch.index.fielddata.fieldcomparator.FloatValuesComparatorSource;
|
||||||
import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource;
|
import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
|
||||||
import java.io.IOException;
|
import java.io.IOException;
|
||||||
|
|
|
@ -29,7 +29,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
|
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.index.mapper.MapperService;
|
import org.elasticsearch.index.mapper.MapperService;
|
||||||
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
@ -67,8 +67,8 @@ public class BytesBinaryDVIndexFieldData extends DocValuesIndexFieldData impleme
|
||||||
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
|
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
|
||||||
CircuitBreakerService breakerService, MapperService mapperService) {
|
CircuitBreakerService breakerService, MapperService mapperService) {
|
||||||
// Ignore breaker
|
// Ignore breaker
|
||||||
final Names fieldNames = mapper.names();
|
final Names fieldNames = mapper.fieldType().names();
|
||||||
return new BytesBinaryDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
|
return new BytesBinaryDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
|
@ -25,7 +25,7 @@ import org.elasticsearch.index.Index;
|
||||||
import org.elasticsearch.index.fielddata.*;
|
import org.elasticsearch.index.fielddata.*;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.index.mapper.MapperService;
|
import org.elasticsearch.index.mapper.MapperService;
|
||||||
import org.elasticsearch.index.settings.IndexSettings;
|
import org.elasticsearch.index.settings.IndexSettings;
|
||||||
import org.elasticsearch.search.MultiValueMode;
|
import org.elasticsearch.search.MultiValueMode;
|
||||||
|
@ -42,7 +42,7 @@ public final class DisabledIndexFieldData extends AbstractIndexFieldData<AtomicF
|
||||||
public IndexFieldData<AtomicFieldData> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
|
public IndexFieldData<AtomicFieldData> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
|
||||||
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
|
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
|
||||||
// Ignore Circuit Breaker
|
// Ignore Circuit Breaker
|
||||||
return new DisabledIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache);
|
return new DisabledIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -31,7 +31,8 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
|
||||||
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
|
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
|
||||||
import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;
|
import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper.Names;
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
|
import org.elasticsearch.index.mapper.MappedFieldType.Names;
|
||||||
import org.elasticsearch.index.mapper.MapperService;
|
import org.elasticsearch.index.mapper.MapperService;
|
||||||
import org.elasticsearch.index.mapper.internal.IdFieldMapper;
|
import org.elasticsearch.index.mapper.internal.IdFieldMapper;
|
||||||
import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;
|
import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;
|
||||||
|
@ -93,8 +94,8 @@ public abstract class DocValuesIndexFieldData {
|
||||||
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
|
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
|
||||||
CircuitBreakerService breakerService, MapperService mapperService) {
|
CircuitBreakerService breakerService, MapperService mapperService) {
|
||||||
// Ignore Circuit Breaker
|
// Ignore Circuit Breaker
|
||||||
final FieldMapper.Names fieldNames = mapper.names();
|
final Names fieldNames = mapper.fieldType().names();
|
||||||
final Settings fdSettings = mapper.fieldDataType().getSettings();
|
final Settings fdSettings = mapper.fieldType().fieldDataType().getSettings();
|
||||||
final Map<String, Settings> filter = fdSettings.getGroups("filter");
|
final Map<String, Settings> filter = fdSettings.getGroups("filter");
|
||||||
if (filter != null && !filter.isEmpty()) {
|
if (filter != null && !filter.isEmpty()) {
|
||||||
throw new IllegalArgumentException("Doc values field data doesn't support filters [" + fieldNames.fullName() + "]");
|
throw new IllegalArgumentException("Doc values field data doesn't support filters [" + fieldNames.fullName() + "]");
|
||||||
|
@ -102,19 +103,19 @@ public abstract class DocValuesIndexFieldData {
|
||||||
|
|
||||||
if (BINARY_INDEX_FIELD_NAMES.contains(fieldNames.indexName())) {
|
if (BINARY_INDEX_FIELD_NAMES.contains(fieldNames.indexName())) {
|
||||||
assert numericType == null;
|
assert numericType == null;
|
||||||
return new BinaryDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
|
return new BinaryDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
|
||||||
} else if (NUMERIC_INDEX_FIELD_NAMES.contains(fieldNames.indexName())) {
|
} else if (NUMERIC_INDEX_FIELD_NAMES.contains(fieldNames.indexName())) {
|
||||||
assert !numericType.isFloatingPoint();
|
assert !numericType.isFloatingPoint();
|
||||||
return new NumericDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
|
return new NumericDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
|
||||||
} else if (numericType != null) {
|
} else if (numericType != null) {
|
||||||
if (Version.indexCreated(indexSettings).onOrAfter(Version.V_1_4_0_Beta1)) {
|
if (Version.indexCreated(indexSettings).onOrAfter(Version.V_1_4_0_Beta1)) {
|
||||||
return new SortedNumericDVIndexFieldData(index, fieldNames, numericType, mapper.fieldDataType());
|
return new SortedNumericDVIndexFieldData(index, fieldNames, numericType, mapper.fieldType().fieldDataType());
|
||||||
} else {
|
} else {
|
||||||
// prior to ES 1.4: multi-valued numerics were boxed inside a byte[] as BINARY
|
// prior to ES 1.4: multi-valued numerics were boxed inside a byte[] as BINARY
|
||||||
return new BinaryDVNumericIndexFieldData(index, fieldNames, numericType, mapper.fieldDataType());
|
return new BinaryDVNumericIndexFieldData(index, fieldNames, numericType, mapper.fieldType().fieldDataType());
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
return new SortedSetDVOrdinalsIndexFieldData(index, cache, indexSettings, fieldNames, breakerService, mapper.fieldDataType());
|
return new SortedSetDVOrdinalsIndexFieldData(index, cache, indexSettings, fieldNames, breakerService, mapper.fieldType().fieldDataType());
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -53,6 +53,7 @@ import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorS
|
||||||
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
|
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
|
||||||
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
|
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
import org.elasticsearch.index.mapper.MapperService;
|
import org.elasticsearch.index.mapper.MapperService;
|
||||||
import org.elasticsearch.index.settings.IndexSettings;
|
import org.elasticsearch.index.settings.IndexSettings;
|
||||||
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
||||||
|
@ -74,11 +75,11 @@ public class DoubleArrayIndexFieldData extends AbstractIndexFieldData<AtomicNume
|
||||||
@Override
|
@Override
|
||||||
public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
|
public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
|
||||||
CircuitBreakerService breakerService, MapperService mapperService) {
|
CircuitBreakerService breakerService, MapperService mapperService) {
|
||||||
return new DoubleArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
|
return new DoubleArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
public DoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
|
public DoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
|
||||||
FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
|
FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
|
||||||
super(index, indexSettings, fieldNames, fieldDataType, cache);
|
super(index, indexSettings, fieldNames, fieldDataType, cache);
|
||||||
this.breakerService = breakerService;
|
this.breakerService = breakerService;
|
||||||
|
|
|
@ -33,6 +33,7 @@ import org.elasticsearch.index.fielddata.*;
|
||||||
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
|
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
|
||||||
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
|
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
|
||||||
import org.elasticsearch.index.mapper.FieldMapper;
|
import org.elasticsearch.index.mapper.FieldMapper;
|
||||||
|
import org.elasticsearch.index.mapper.MappedFieldType;
|
||||||
import org.elasticsearch.index.mapper.MapperService;
|
import org.elasticsearch.index.mapper.MapperService;
|
||||||
import org.elasticsearch.index.settings.IndexSettings;
|
import org.elasticsearch.index.settings.IndexSettings;
|
||||||
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
import org.elasticsearch.indices.breaker.CircuitBreakerService;
|
||||||
|
@ -48,11 +49,11 @@ public class FSTBytesIndexFieldData extends AbstractIndexOrdinalsFieldData {
|
||||||
@Override
|
@Override
|
||||||
public IndexOrdinalsFieldData build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
|
public IndexOrdinalsFieldData build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
|
||||||
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
|
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
|
||||||
return new FSTBytesIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
|
return new FSTBytesIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
FSTBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames, FieldDataType fieldDataType,
|
FSTBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType,
|
||||||
IndexFieldDataCache cache, CircuitBreakerService breakerService) {
|
IndexFieldDataCache cache, CircuitBreakerService breakerService) {
|
||||||
super(index, indexSettings, fieldNames, fieldDataType, cache, breakerService);
|
super(index, indexSettings, fieldNames, fieldDataType, cache, breakerService);
|
||||||
this.breakerService = breakerService;
|
this.breakerService = breakerService;
|
||||||
|
|
|
@@ -52,6 +52,7 @@ import org.elasticsearch.index.fielddata.fieldcomparator.FloatValuesComparatorSo
 import org.elasticsearch.index.fielddata.ordinals.Ordinals;
 import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.settings.IndexSettings;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;
@@ -73,11 +74,11 @@ public class FloatArrayIndexFieldData extends AbstractIndexFieldData<AtomicNumer
     @Override
     public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
                                    CircuitBreakerService breakerService, MapperService mapperService) {
-        return new FloatArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
+        return new FloatArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
     }
 }

-    public FloatArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
+    public FloatArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
                                     FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
         super(index, indexSettings, fieldNames, fieldDataType, cache);
         this.breakerService = breakerService;
@@ -27,7 +27,8 @@ import org.elasticsearch.index.Index;
 import org.elasticsearch.index.fielddata.*;
 import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
 import org.elasticsearch.index.mapper.FieldMapper;
-import org.elasticsearch.index.mapper.FieldMapper.Names;
+import org.elasticsearch.index.mapper.MappedFieldType;
+import org.elasticsearch.index.mapper.MappedFieldType.Names;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;
 import org.elasticsearch.search.MultiValueMode;
@@ -65,8 +66,8 @@ public class GeoPointBinaryDVIndexFieldData extends DocValuesIndexFieldData impl
     public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
                                    CircuitBreakerService breakerService, MapperService mapperService) {
         // Ignore breaker
-        final FieldMapper.Names fieldNames = mapper.names();
-        return new GeoPointBinaryDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
+        final Names fieldNames = mapper.fieldType().names();
+        return new GeoPointBinaryDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
     }

 }
@@ -36,6 +36,7 @@ import org.elasticsearch.index.fielddata.*;
 import org.elasticsearch.index.fielddata.ordinals.Ordinals;
 import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.settings.IndexSettings;
@@ -54,7 +55,7 @@ public class GeoPointCompressedIndexFieldData extends AbstractIndexGeoPointField
     @Override
     public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
                                    CircuitBreakerService breakerService, MapperService mapperService) {
-        FieldDataType type = mapper.fieldDataType();
+        FieldDataType type = mapper.fieldType().fieldDataType();
         final String precisionAsString = type.getSettings().get(PRECISION_KEY);
         final Distance precision;
         if (precisionAsString != null) {
@@ -62,13 +63,13 @@ public class GeoPointCompressedIndexFieldData extends AbstractIndexGeoPointField
         } else {
             precision = DEFAULT_PRECISION_VALUE;
         }
-        return new GeoPointCompressedIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, precision, breakerService);
+        return new GeoPointCompressedIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, precision, breakerService);
     }
 }

 private final GeoPointFieldMapper.Encoding encoding;

-    public GeoPointCompressedIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
+    public GeoPointCompressedIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
                                             FieldDataType fieldDataType, IndexFieldDataCache cache, Distance precision,
                                             CircuitBreakerService breakerService) {
         super(index, indexSettings, fieldNames, fieldDataType, cache);
@@ -33,6 +33,7 @@ import org.elasticsearch.index.fielddata.*;
 import org.elasticsearch.index.fielddata.ordinals.Ordinals;
 import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.settings.IndexSettings;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;
@@ -48,11 +49,11 @@ public class GeoPointDoubleArrayIndexFieldData extends AbstractIndexGeoPointFiel
     @Override
     public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
                                    CircuitBreakerService breakerService, MapperService mapperService) {
-        return new GeoPointDoubleArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
+        return new GeoPointDoubleArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
     }
 }

-    public GeoPointDoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
+    public GeoPointDoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
                                              FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
         super(index, indexSettings, fieldNames, fieldDataType, cache);
         this.breakerService = breakerService;
@@ -34,6 +34,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
 import org.elasticsearch.index.fielddata.IndexFieldDataCache;
 import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;

@@ -47,7 +48,7 @@ public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData
     @Override
     public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
                                    CircuitBreakerService breakerService, MapperService mapperService) {
-        return new IndexIndexFieldData(index, mapper.names());
+        return new IndexIndexFieldData(index, mapper.fieldType().names());
     }

 }
@@ -101,7 +102,7 @@ public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData

     private final AtomicOrdinalsFieldData atomicFieldData;

-    private IndexIndexFieldData(Index index, FieldMapper.Names names) {
+    private IndexIndexFieldData(Index index, MappedFieldType.Names names) {
         super(index, Settings.EMPTY, names, new FieldDataType("string"), null, null);
         atomicFieldData = new IndexAtomicFieldData(index().name());
     }
@@ -31,7 +31,7 @@ import org.elasticsearch.index.fielddata.FieldDataType;
 import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
 import org.elasticsearch.index.fielddata.IndexNumericFieldData;
 import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource;
-import org.elasticsearch.index.mapper.FieldMapper.Names;
+import org.elasticsearch.index.mapper.MappedFieldType.Names;
 import org.elasticsearch.search.MultiValueMode;

 import java.io.IOException;
@@ -57,6 +57,7 @@ import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSou
 import org.elasticsearch.index.fielddata.ordinals.Ordinals;
 import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.settings.IndexSettings;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;
@@ -86,14 +87,14 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData<AtomicNume
     @Override
     public IndexFieldData<AtomicNumericFieldData> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
                                                         IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
-        return new PackedArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, numericType, breakerService);
+        return new PackedArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, numericType, breakerService);
     }
 }

     private final NumericType numericType;
     private final CircuitBreakerService breakerService;

-    public PackedArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
+    public PackedArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
                                      FieldDataType fieldDataType, IndexFieldDataCache cache, NumericType numericType,
                                      CircuitBreakerService breakerService) {
         super(index, indexSettings, fieldNames, fieldDataType, cache);
@@ -33,6 +33,7 @@ import org.elasticsearch.index.fielddata.*;
 import org.elasticsearch.index.fielddata.ordinals.Ordinals;
 import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
 import org.elasticsearch.index.mapper.FieldMapper;
+import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.MapperService;
 import org.elasticsearch.index.settings.IndexSettings;
 import org.elasticsearch.indices.breaker.CircuitBreakerService;
@@ -49,11 +50,11 @@ public class PagedBytesIndexFieldData extends AbstractIndexOrdinalsFieldData
     @Override
     public IndexOrdinalsFieldData build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
                                         IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
-        return new PagedBytesIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
+        return new PagedBytesIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
     }
 }

-    public PagedBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
+    public PagedBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
                                     FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
         super(index, indexSettings, fieldNames, fieldDataType, cache, breakerService);
     }
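Every hunk above applies the same mechanical change: field metadata that the builders previously read straight off the mapper (`mapper.names()`, `mapper.fieldDataType()`) is now reached through the mapper's field type (`mapper.fieldType().names()`, `mapper.fieldType().fieldDataType()`). A minimal standalone sketch of that indirection, using simplified stand-in classes rather than the real Elasticsearch types:

```java
// Simplified stand-ins to illustrate the accessor change only; the real
// MappedFieldType carries far more state (analyzers, boost, doc values, ...).
class Names {
    final String indexName;
    Names(String indexName) { this.indexName = indexName; }
}

class MappedFieldType {
    private final Names names;
    private final String fieldDataType;
    MappedFieldType(Names names, String fieldDataType) {
        this.names = names;
        this.fieldDataType = fieldDataType;
    }
    Names names() { return names; }
    String fieldDataType() { return fieldDataType; }
}

class FieldMapper {
    private final MappedFieldType fieldType;
    FieldMapper(MappedFieldType fieldType) { this.fieldType = fieldType; }
    // After the refactoring, names() and fieldDataType() live on the field
    // type; callers go through fieldType() instead of the mapper itself.
    MappedFieldType fieldType() { return fieldType; }
}

public class Demo {
    public static void main(String[] args) {
        FieldMapper mapper = new FieldMapper(
                new MappedFieldType(new Names("my_field"), "string"));
        // Old call sites: mapper.names(), mapper.fieldDataType()
        // New call sites route through fieldType():
        System.out.println(mapper.fieldType().names().indexName);
        System.out.println(mapper.fieldType().fieldDataType());
    }
}
```

Pulling the metadata onto one shared object means every `build(...)` implementation changes identically, which is why the diff repeats the same one-line substitution across each fielddata class.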
Some files were not shown because too many files have changed in this diff.