Merge branch 'master' into feature/query-refactoring

Conflicts:
	src/main/java/org/elasticsearch/index/query/RangeQueryParser.java
	src/main/java/org/elasticsearch/index/query/SpanTermQueryParser.java
Christoph Büscher 2015-06-04 14:44:52 +02:00
commit 313e9c6769
225 changed files with 4776 additions and 3066 deletions

View File

@ -50,7 +50,7 @@ POST /_flush
=== Synced Flush
Elasticsearch tracks the indexing activity of each shard. Shards that have not
-received any indexing operations for 30 minutes are automatically marked as inactive. This presents
+received any indexing operations for 5 minutes are automatically marked as inactive. This presents
an opportunity for Elasticsearch to reduce shard resources and also perform
a special kind of flush, called `synced flush`. A synced flush performs a normal flush, then adds
a generated unique marker (sync_id) to all shards.
@ -117,7 +117,7 @@ which returns something similar to:
=== Synced Flush API
The Synced Flush API allows an administrator to initiate a synced flush manually. This can be particularly useful for
-a planned (rolling) cluster restart where you can stop indexing and don't want to wait the default 30 minutes for
+a planned (rolling) cluster restart where you can stop indexing and don't want to wait the default 5 minutes for
idle indices to be sync-flushed automatically.
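For example, a synced flush for a single index could be requested like this (a sketch; `twitter` is a placeholder index name):

[source,js]
--------------------------------------------------
POST /twitter/_flush/synced
--------------------------------------------------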
While handy, there are a couple of caveats for this API:

View File

@ -16,6 +16,25 @@ settings, you need to enable using it in elasticsearch.yml:
node.enable_custom_paths: true
--------------------------------------------------
You will also need to disable the default security manager that Elasticsearch
runs with. You can do this by either passing
`-Des.security.manager.enabled=false` with the parameters while starting
Elasticsearch, or you can disable it in elasticsearch.yml:
[source,yaml]
--------------------------------------------------
security.manager.enabled: false
--------------------------------------------------
[WARNING]
========================
Disabling the security manager means that the Elasticsearch process is not
limited to the directories and files that it can read and write. However,
because the `index.data_path` setting is set when creating the index, the
security manager would prevent writing or reading from the index's location, so
it must be disabled.
========================
You can then create an index with a custom data path, where each node will use
this path for the data:
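For instance, something along these lines (a sketch; the index name and path are hypothetical):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '
{
    "index" : {
        "data_path" : "/opt/data/my_index"
    }
}'
--------------------------------------------------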
@ -88,6 +107,12 @@ settings API:
Boolean value indicating this index uses a shared filesystem. Defaults to
`true` if `index.shadow_replicas` is set to true, `false` otherwise.
`index.shared_filesystem.recover_on_any_node`::
Boolean value indicating whether the primary shards for the index should be
allowed to recover on any node in the cluster, regardless of the number of
replicas or whether the node has previously had the shard allocated to it
before. Defaults to `false`.
=== Node level settings related to shadow replicas
These are non-dynamic settings that need to be configured in `elasticsearch.yml`

View File

@ -198,6 +198,11 @@ year.
|`year_month_day`|A formatter for a four digit year, two digit month of
year, and two digit day of month.
|`epoch_second`|A formatter for the number of seconds since the epoch.
|`epoch_millis`|A formatter for the number of milliseconds since
the epoch.
|=======================================================================
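As an illustration, a date field could be mapped to accept millisecond timestamps with the new format (a sketch; the field name is hypothetical):

[source,js]
--------------------------------------------------
{
    "properties" : {
        "created" : {
            "type" : "date",
            "format" : "epoch_millis"
        }
    }
}
--------------------------------------------------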
[float]

View File

@ -79,7 +79,7 @@ format>> used to parse the provided timestamp value. For example:
}
--------------------------------------------------
-Note, the default format is `dateOptionalTime`. The timestamp value will
+Note, the default format is `epoch_millis||dateOptionalTime`. The timestamp value will
first be parsed as a number and if it fails the format will be tried.
[float]

View File

@ -349,7 +349,7 @@ date type:
Defaults to the property/field name.
|`format` |The <<mapping-date-format,date
-format>>. Defaults to `dateOptionalTime`.
+format>>. Defaults to `epoch_millis||dateOptionalTime`.
|`store` |Set to `true` to store actual field in the index, `false` to not
store it. Defaults to `false` (note, the JSON document itself is stored,

View File

@ -42,8 +42,8 @@ and will use the matching format as its format attribute. The date
format itself is explained
<<mapping-date-format,here>>.
-The default formats are: `dateOptionalTime` (ISO) and
-`yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z`.
+The default formats are: `dateOptionalTime` (ISO),
+`yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z` and `epoch_millis`.
*Note:* `dynamic_date_formats` are used *only* for dynamically added
date fields, not for `date` fields that you specify in your mapping.
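As an illustration, custom dynamic date formats might be configured like this (a sketch; the type name is hypothetical):

[source,js]
--------------------------------------------------
{
    "my_type" : {
        "dynamic_date_formats" : [ "yyyy-MM-dd", "epoch_millis" ]
    }
}
--------------------------------------------------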

View File

@ -3,32 +3,48 @@
[partintro]
--
-*elasticsearch* provides a full Query DSL based on JSON to define
-queries. In general, there are basic queries such as
-<<query-dsl-term-query,term>> or
-<<query-dsl-prefix-query,prefix>>. There are
-also compound queries like the
-<<query-dsl-bool-query,bool>> query.
-While queries have scoring capabilities, in some contexts they will
-only be used to filter the result set, such as in the
-<<query-dsl-filtered-query,filtered>> or
-<<query-dsl-constant-score-query,constant_score>>
-queries.
-Think of the Query DSL as an AST of queries.
-Some queries can be used by themselves like the
-<<query-dsl-term-query,term>> query but other queries can contain
-queries (like the <<query-dsl-bool-query,bool>> query), and each
-of these composite queries can contain *any* query of the list of
-queries, resulting in the ability to build quite
-complex (and interesting) queries.
-Queries can be used in different APIs. For example,
-within a <<search-request-query,search query>>, or
-as an <<search-aggregations-bucket-filter-aggregation,aggregation filter>>.
-This section explains the queries that can form the AST one can use.
+Elasticsearch provides a full Query DSL based on JSON to define queries.
+Think of the Query DSL as an AST of queries, consisting of two types of
+clauses:
+Leaf query clauses::
+Leaf query clauses look for a particular value in a particular field, such as the
+<<query-dsl-match-query,`match`>>, <<query-dsl-term-query,`term`>> or
+<<query-dsl-range-query,`range`>> queries. These queries can be used
+by themselves.
+Compound query clauses::
+Compound query clauses wrap other leaf *or* compound queries and are used to combine
+multiple queries in a logical fashion (such as the
+<<query-dsl-bool-query,`bool`>> or <<query-dsl-dis-max-query,`dis_max`>> query),
+or to alter their behaviour (such as the <<query-dsl-not-query,`not`>> or
+<<query-dsl-constant-score-query,`constant_score`>> query).
+Query clauses behave differently depending on whether they are used in
+<<query-filter-context,query context or filter context>>.
--
-include::query-dsl/index.asciidoc[]
+include::query-dsl/query_filter_context.asciidoc[]
+include::query-dsl/match-all-query.asciidoc[]
+include::query-dsl/full-text-queries.asciidoc[]
+include::query-dsl/term-level-queries.asciidoc[]
+include::query-dsl/compound-queries.asciidoc[]
+include::query-dsl/joining-queries.asciidoc[]
+include::query-dsl/geo-queries.asciidoc[]
+include::query-dsl/special-queries.asciidoc[]
+include::query-dsl/span-queries.asciidoc[]
+include::query-dsl/minimum-should-match.asciidoc[]
+include::query-dsl/multi-term-rewrite.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-and-query]]
-== And Query
+=== And Query
deprecated[2.0.0, Use the `bool` query instead]

View File

@ -1,5 +1,5 @@
[[query-dsl-bool-query]]
-== Bool Query
+=== Bool Query
A query that matches documents matching boolean combinations of other
queries. The bool query maps to Lucene `BooleanQuery`. It is built using

View File

@ -1,5 +1,5 @@
[[query-dsl-boosting-query]]
-== Boosting Query
+=== Boosting Query
The `boosting` query can be used to effectively demote results that
match a given query. Unlike the "NOT" clause in bool query, this still

View File

@ -1,12 +1,12 @@
[[query-dsl-common-terms-query]]
-== Common Terms Query
+=== Common Terms Query
The `common` terms query is a modern alternative to stopwords which
improves the precision and recall of search results (by taking stopwords
into account), without sacrificing performance.
[float]
-=== The problem
+==== The problem
Every term in a query has a cost. A search for `"The brown fox"`
requires three term queries, one for each of `"the"`, `"brown"` and
@ -25,7 +25,7 @@ and `"not happy"`) and we lose recall (eg text like `"The The"` or
`"To be or not to be"` would simply not exist in the index).
[float]
-=== The solution
+==== The solution
The `common` terms query divides the query terms into two groups: more
important (ie _low frequency_ terms) and less important (ie _high
@ -63,7 +63,7 @@ site, common terms like `"clip"` or `"video"` will automatically behave
as stopwords without the need to maintain a manual list.
[float]
-=== Examples
+==== Examples
In this example, words that have a document frequency greater than 0.1%
(eg `"this"` and `"is"`) will be treated as _common terms_.

View File

@ -0,0 +1,69 @@
[[compound-queries]]
== Compound queries
Compound queries wrap other compound or leaf queries, either to combine their
results and scores, to change their behaviour, or to switch from query to
filter context.
The queries in this group are:
<<query-dsl-constant-score-query,`constant_score` query>>::
A query which wraps another query, but executes it in filter context. All
matching documents are given the same ``constant'' `_score`.
<<query-dsl-bool-query,`bool` query>>::
The default query for combining multiple leaf or compound query clauses, as
`must`, `should`, `must_not`, or `filter` clauses. The `must` and `should`
clauses have their scores combined -- the more matching clauses, the better --
while the `must_not` and `filter` clauses are executed in filter context. A
minimal sketch is shown after this list.
<<query-dsl-dis-max-query,`dis_max` query>>::
A query which accepts multiple queries, and returns any documents which match
any of the query clauses. While the `bool` query combines the scores from all
matching queries, the `dis_max` query uses the score of the single
best-matching query clause.
<<query-dsl-function-score-query,`function_score` query>>::
Modify the scores returned by the main query with functions to take into
account factors like popularity, recency, distance, or custom algorithms
implemented with scripting.
<<query-dsl-boosting-query,`boosting` query>>::
Return documents which match a `positive` query, but reduce the score of
documents which also match a `negative` query.
<<query-dsl-indices-query,`indices` query>>::
Execute one query for the specified indices, and another for other indices.
<<query-dsl-and-query,`and`>>, <<query-dsl-or-query,`or`>>, <<query-dsl-not-query,`not`>>::
Synonyms for the `bool` query.
<<query-dsl-filtered-query,`filtered` query>>::
Combine a query clause in query context with another in filter context. deprecated[2.0.0,Use the `bool` query instead]
<<query-dsl-limit-query,`limit` query>>::
Limits the number of documents examined per shard. deprecated[1.6.0]
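As referenced in the `bool` entry above, a minimal sketch combining the four clause types might look like this (field names and values are hypothetical):

[source,js]
--------------------------------------------------
{
    "bool" : {
        "must" :     { "match" : { "title" : "search" }},
        "must_not" : { "match" : { "title" : "spam" }},
        "should" :   { "match" : { "title" : "quick" }},
        "filter" :   { "term" : { "status" : "published" }}
    }
}
--------------------------------------------------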
include::constant-score-query.asciidoc[]
include::bool-query.asciidoc[]
include::dis-max-query.asciidoc[]
include::function-score-query.asciidoc[]
include::boosting-query.asciidoc[]
include::indices-query.asciidoc[]
include::and-query.asciidoc[]
include::not-query.asciidoc[]
include::or-query.asciidoc[]
include::filtered-query.asciidoc[]
include::limit-query.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-constant-score-query]]
-== Constant Score Query
+=== Constant Score Query
A query that wraps another query and simply returns a
constant score equal to the query boost for every document in the

View File

@ -1,5 +1,5 @@
[[query-dsl-dis-max-query]]
-== Dis Max Query
+=== Dis Max Query
A query that generates the union of documents produced by its
subqueries, and that scores each document with the maximum score for

View File

@ -1,5 +1,5 @@
[[query-dsl-exists-query]]
-== Exists Query
+=== Exists Query
Returns documents that have at least one non-`null` value in the original field:
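A minimal sketch, assuming a `user` field:

[source,js]
--------------------------------------------------
{
    "exists" : { "field" : "user" }
}
--------------------------------------------------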
@ -42,7 +42,7 @@ These documents would *not* match the above query:
<3> The `user` field is missing completely.
[float]
-==== `null_value` mapping
+===== `null_value` mapping
If the field mapping includes the `null_value` setting (see <<mapping-core-types>>)
then explicit `null` values are replaced with the specified `null_value`. For

View File

@ -1,5 +1,5 @@
[[query-dsl-filtered-query]]
-== Filtered Query
+=== Filtered Query
deprecated[2.0.0, Use the `bool` query instead with a `must` clause for the query and a `filter` clause for the filter]
@ -47,7 +47,7 @@ curl -XGET localhost:9200/_search -d '
<1> The `filtered` query is passed as the value of the `query`
parameter in the search request.
-=== Filtering without a query
+==== Filtering without a query
If a `query` is not specified, it defaults to the
<<query-dsl-match-all-query,`match_all` query>>. This means that the
@ -71,7 +71,7 @@ curl -XGET localhost:9200/_search -d '
<1> No `query` has been specified, so this request applies just the filter,
returning all documents created since yesterday.
-==== Multiple filters
+===== Multiple filters
Multiple filters can be applied by wrapping them in a
<<query-dsl-bool-query,`bool` query>>, for example:
@ -95,7 +95,7 @@ Multiple filters can be applied by wrapping them in a
}
--------------------------------------------------
-==== Filter strategy
+===== Filter strategy
You can control how the filter and query are executed with the `strategy`
parameter:

View File

@ -0,0 +1,44 @@
[[full-text-queries]]
== Full text queries
The high-level full text queries are usually used for running full text
queries on full text fields like the body of an email. They understand how the
field being queried is <<analysis,analyzed>> and will apply each field's
`analyzer` (or `search_analyzer`) to the query string before executing.
The queries in this group are:
<<query-dsl-match-query,`match` query>>::
The standard query for performing full text queries, including fuzzy matching
and phrase or proximity queries.
<<query-dsl-multi-match-query,`multi_match` query>>::
The multi-field version of the `match` query.
<<query-dsl-common-terms-query,`common_terms` query>>::
A more specialized query which gives more preference to uncommon words.
<<query-dsl-query-string-query,`query_string` query>>::
Supports the compact Lucene <<query-string-syntax,query string syntax>>,
allowing you to specify AND|OR|NOT conditions and multi-field search
within a single query string. For expert users only.
<<query-dsl-simple-query-string-query,`simple_query_string`>>::
A simpler, more robust version of the `query_string` syntax suitable
for exposing directly to users.
include::match-query.asciidoc[]
include::multi-match-query.asciidoc[]
include::common-terms-query.asciidoc[]
include::query-string-query.asciidoc[]
include::simple-query-string-query.asciidoc[]

View File

@ -1,15 +1,13 @@
[[query-dsl-function-score-query]]
-== Function Score Query
+=== Function Score Query
The `function_score` allows you to modify the score of documents that are
retrieved by a query. This can be useful if, for example, a score
function is computationally expensive and it is sufficient to compute
the score on a filtered set of documents.
-=== Using function score
To use `function_score`, the user has to define a query and one or
-several functions, that compute a new score for each document returned
+more functions, that compute a new score for each document returned
by the query.
`function_score` can be used with only one function like this:
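For instance, a sketch with a single function (`random_score` stands in for any of the functions described below):

[source,js]
--------------------------------------------------
{
    "function_score" : {
        "query" : { "match_all" : {} },
        "random_score" : {},
        "boost_mode" : "multiply"
    }
}
--------------------------------------------------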
@ -89,13 +87,11 @@ query. The parameter `boost_mode` defines how:
`min`:: min of query score and function score
By default, modifying the score does not change which documents match. To exclude
documents that do not meet a certain score threshold the `min_score` parameter can be set to the desired score threshold.
-==== Score functions
The `function_score` query provides several types of score functions.
-===== Script score
+==== Script score
The `script_score` function allows you to wrap another query and customize
the scoring of it optionally with a computation derived from other numeric
@ -135,7 +131,7 @@ Note that unlike the `custom_score` query, the
score of the query is multiplied with the result of the script scoring. If
you wish to inhibit this, set `"boost_mode": "replace"`
-===== Weight
+==== Weight
The `weight` score allows you to multiply the score by the provided
`weight`. This can sometimes be desired since boost value set on
@ -147,7 +143,7 @@ not.
"weight" : number
--------------------------------------------------
-===== Random
+==== Random
The `random_score` generates scores using a hash of the `_uid` field,
with a `seed` for variation. If `seed` is not specified, the current
@ -163,7 +159,7 @@ be a memory intensive operation since the values are unique.
}
--------------------------------------------------
-===== Field Value factor
+==== Field Value factor
The `field_value_factor` function allows you to use a field from a document to
influence the score. It's similar to using the `script_score` function, however,
@ -207,7 +203,7 @@ is an illegal operation, and an exception will be thrown. Be sure to limit the
values of the field with a range filter to avoid this, or use `log1p` and
`ln1p`.
-===== Decay functions
+==== Decay functions
Decay functions score a document with a function that decays depending
on the distance of a numeric field value of the document from a user
@ -254,13 +250,13 @@ The `offset` and `decay` parameters are optional.
[horizontal]
`origin`::
The point of origin used for calculating distance. Must be given as a
number for numeric field, date for date fields and geo point for geo fields.
Required for geo and numeric field. For date fields the default is `now`. Date
math (for example `now-1h`) is supported for origin.
`scale`::
Required for all types. Defines the distance from origin at which the computed
score will equal `decay` parameter. For geo fields: Can be defined as number+unit (1km, 12m,...).
Default unit is meters. For date fields: Can to be defined as a number+unit ("1h", "10d",...).
Default unit is milliseconds. For numeric field: Any number.
@ -360,7 +356,7 @@ Example:
-==== Detailed example
+===== Detailed example
Suppose you are searching for a hotel in a certain town. Your budget is
limited. Also, you would like the hotel to be close to the town center,
@ -480,7 +476,7 @@ image::https://f.cloud.github.com/assets/4320215/768161/082975c0-e899-11e2-86f7-
image::https://f.cloud.github.com/assets/4320215/768162/0b606884-e899-11e2-907b-aefc77eefef6.png[width="700px"]
-===== Linear' decay, keyword `linear`
+===== Linear decay, keyword `linear`
When choosing `linear` as the decay function in the above example, the
contour and surface plot of the multiplier looks like this:

View File

@ -1,10 +1,10 @@
[[query-dsl-fuzzy-query]]
-== Fuzzy Query
+=== Fuzzy Query
The fuzzy query uses similarity based on Levenshtein edit distance for
`string` fields, and a `+/-` margin on numeric and date fields.
-=== String fields
+==== String fields
The `fuzzy` query generates all possible matching terms that are within the
maximum edit distance specified in `fuzziness` and then checks the term
@ -38,7 +38,7 @@ Or with more advanced settings:
--------------------------------------------------
[float]
-==== Parameters
+===== Parameters
[horizontal]
`fuzziness`::
@ -62,7 +62,7 @@ are both set to `0`. This could cause every term in the index to be examined!
[float]
-=== Numeric and date fields
+==== Numeric and date fields
Performs a <<query-dsl-range-query>> ``around'' the value using the
`fuzziness` value as a `+/-` range, where:

View File

@ -1,5 +1,5 @@
[[query-dsl-geo-bounding-box-query]]
-== Geo Bounding Box Query
+=== Geo Bounding Box Query
A query allowing to filter hits based on a point location using a
bounding box. Assuming the following indexed document:
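A sketch of such a document (the `pin.location` field name and coordinates are illustrative):

[source,js]
--------------------------------------------------
{
    "pin" : {
        "location" : {
            "lat" : 40.12,
            "lon" : -71.34
        }
    }
}
--------------------------------------------------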
@ -45,13 +45,13 @@ Then the following simple query can be executed with a
--------------------------------------------------
[float]
-=== Accepted Formats
+==== Accepted Formats
In much the same way the geo_point type can accept different
representation of the geo point, the filter can accept it as well:
[float]
-==== Lat Lon As Properties
+===== Lat Lon As Properties
[source,js]
--------------------------------------------------
@ -79,7 +79,7 @@ representation of the geo point, the filter can accept it as well:
--------------------------------------------------
[float]
-==== Lat Lon As Array
+===== Lat Lon As Array
Format in `[lon, lat]`, note, the order of lon/lat here in order to
conform with http://geojson.org/[GeoJSON].
@ -104,7 +104,7 @@ conform with http://geojson.org/[GeoJSON].
--------------------------------------------------
[float]
-==== Lat Lon As String
+===== Lat Lon As String
Format in `lat,lon`.
@ -128,7 +128,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
-==== Geohash
+===== Geohash
[source,js]
--------------------------------------------------
@ -150,7 +150,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
-=== Vertices
+==== Vertices
The vertices of the bounding box can either be set by `top_left` and
`bottom_right` or by `top_right` and `bottom_left` parameters. More
@ -182,20 +182,20 @@ values separately.
[float]
-=== geo_point Type
+==== geo_point Type
The filter *requires* the `geo_point` type to be set on the relevant
field.
[float]
-=== Multi Location Per Document
+==== Multi Location Per Document
The filter can work with multiple locations / points per document. Once
a single location / point matches the filter, the document will be
included in the filter.
[float]
-=== Type
+==== Type
The type of the bounding box execution by default is set to `memory`,
which means in memory checks if the doc falls within the bounding box

View File

@ -1,5 +1,5 @@
[[query-dsl-geo-distance-query]]
-== Geo Distance Query
+=== Geo Distance Query
Filters documents that include only hits that exist within a specific
distance from a geo point. Assuming the following indexed json:
@ -40,13 +40,13 @@ filter:
--------------------------------------------------
[float]
-=== Accepted Formats
+==== Accepted Formats
In much the same way the `geo_point` type can accept different
representation of the geo point, the filter can accept it as well:
[float]
-==== Lat Lon As Properties
+===== Lat Lon As Properties
[source,js]
--------------------------------------------------
@ -69,7 +69,7 @@ representation of the geo point, the filter can accept it as well:
--------------------------------------------------
[float]
-==== Lat Lon As Array
+===== Lat Lon As Array
Format in `[lon, lat]`, note, the order of lon/lat here in order to
conform with http://geojson.org/[GeoJSON].
@ -92,7 +92,7 @@ conform with http://geojson.org/[GeoJSON].
--------------------------------------------------
[float]
-==== Lat Lon As String
+===== Lat Lon As String
Format in `lat,lon`.
@ -114,7 +114,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
-==== Geohash
+===== Geohash
[source,js]
--------------------------------------------------
@ -134,7 +134,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
-=== Options
+==== Options
The following are options allowed on the filter:
@ -160,13 +160,13 @@ The following are options allowed on the filter:
[float]
-=== geo_point Type
+==== geo_point Type
The filter *requires* the `geo_point` type to be set on the relevant
field.
[float]
-=== Multi Location Per Document
+==== Multi Location Per Document
The `geo_distance` filter can work with multiple locations / points per
document. Once a single location / point matches the filter, the

View File

@ -1,5 +1,5 @@
[[query-dsl-geo-distance-range-query]]
-== Geo Distance Range Query
+=== Geo Distance Range Query
Filters documents that exist within a range from a specific point:
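A sketch of the query (field name, distances, and coordinates are illustrative):

[source,js]
--------------------------------------------------
{
    "geo_distance_range" : {
        "from" : "200km",
        "to" : "400km",
        "pin.location" : {
            "lat" : 40.12,
            "lon" : -71.34
        }
    }
}
--------------------------------------------------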

View File

@ -1,5 +1,5 @@
[[query-dsl-geo-polygon-query]]
-== Geo Polygon Query
+=== Geo Polygon Query
A query allowing to include hits that only fall within a polygon of
points. Here is an example:
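A minimal sketch (field name and coordinates are illustrative):

[source,js]
--------------------------------------------------
{
    "geo_polygon" : {
        "person.location" : {
            "points" : [
                { "lat" : 40, "lon" : -70 },
                { "lat" : 30, "lon" : -80 },
                { "lat" : 20, "lon" : -90 }
            ]
        }
    }
}
--------------------------------------------------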
@ -27,10 +27,10 @@ points. Here is an example:
--------------------------------------------------
[float]
-=== Allowed Formats
+==== Allowed Formats
[float]
-==== Lat Long as Array
+===== Lat Long as Array
Format in `[lon, lat]`, note, the order of lon/lat here in order to
conform with http://geojson.org/[GeoJSON].
@ -58,7 +58,7 @@ conform with http://geojson.org/[GeoJSON].
--------------------------------------------------
[float]
-==== Lat Lon as String
+===== Lat Lon as String
Format in `lat,lon`.
@ -85,7 +85,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
-==== Geohash
+===== Geohash
[source,js]
--------------------------------------------------
@ -110,7 +110,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
-=== geo_point Type
+==== geo_point Type
The filter *requires* the
<<mapping-geo-point-type,geo_point>> type to be

View File

@ -0,0 +1,50 @@
[[geo-queries]]
== Geo queries
Elasticsearch supports two types of geo data:
<<mapping-geo-point-type,`geo_point`>> fields which support lat/lon pairs, and
<<mapping-geo-shape-type,`geo_shape`>> fields, which support points,
lines, circles, polygons, multi-polygons etc.
The queries in this group are:
<<query-dsl-geo-shape-query,`geo_shape`>> query::
Find documents with geo-shapes which either intersect, are contained by, or
do not intersect with the specified geo-shape.
<<query-dsl-geo-bounding-box-query,`geo_bounding_box`>> query::
Finds documents with geo-points that fall into the specified rectangle.
<<query-dsl-geo-distance-query,`geo_distance`>> query::
Finds documents with geo-points within the specified distance of a central
point.
<<query-dsl-geo-distance-range-query,`geo_distance_range`>> query::
Like the `geo_distance` query, but the range starts at a specified distance
from the central point.
<<query-dsl-geo-polygon-query,`geo_polygon`>> query::
Find documents with geo-points within the specified polygon.
<<query-dsl-geohash-cell-query,`geohash_cell`>> query::
Find geo-points whose geohash intersects with the geohash of the specified
point.
include::geo-shape-query.asciidoc[]
include::geo-bounding-box-query.asciidoc[]
include::geo-distance-query.asciidoc[]
include::geo-distance-range-query.asciidoc[]
include::geo-polygon-query.asciidoc[]
include::geohash-cell-query.asciidoc[]

View File

@ -1,26 +1,21 @@
[[query-dsl-geo-shape-query]]
-== GeoShape Filter
+=== GeoShape Query
Filter documents indexed using the `geo_shape` type.
-Requires the <<mapping-geo-shape-type,geo_shape
-Mapping>>.
+Requires the <<mapping-geo-shape-type,geo_shape Mapping>>.
The `geo_shape` query uses the same grid square representation as the
geo_shape mapping to find documents that have a shape that intersects
with the query shape. It will also use the same PrefixTree configuration
as defined for the field mapping.
-[float]
-==== Filter Format
-The Filter supports two ways of defining the Filter shape, either by
+The query supports two ways of defining the query shape, either by
providing a whole shape definition, or by referencing the name of a shape
pre-indexed in another index. Both formats are defined below with
examples.
-[float]
-===== Provided Shape Definition
+==== Inline Shape Definition
Similar to the `geo_shape` type, the `geo_shape` Filter uses
http://www.geojson.org[GeoJSON] to represent shapes.
@ -64,8 +59,7 @@ The following query will find the point using the Elasticsearch's
}
--------------------------------------------------
-[float]
-===== Pre-Indexed Shape
+==== Pre-Indexed Shape
The Filter also supports using a shape which has already been indexed in
another index and/or index type. This is particularly useful for when

View File

@ -1,5 +1,5 @@
[[query-dsl-geohash-cell-query]]
-== Geohash Cell Query
+=== Geohash Cell Query
The `geohash_cell` query provides access to a hierarchy of geohashes.
By defining a geohash cell, only <<mapping-geo-point-type,geopoints>>

View File

@ -1,5 +1,5 @@
[[query-dsl-has-child-query]]
-== Has Child Query
+=== Has Child Query
The `has_child` filter accepts a query and the child type to run against, and
results in parent documents that have child docs matching the query. Here is
@ -20,7 +20,7 @@ an example:
--------------------------------------------------
[float]
-=== Scoring capabilities
+==== Scoring capabilities
The `has_child` also has scoring support. The
supported score types are `min`, `max`, `sum`, `avg` or `none`. The default is
@ -46,7 +46,7 @@ inside the `has_child` query:
--------------------------------------------------
[float]
-=== Min/Max Children
+==== Min/Max Children
The `has_child` query allows you to specify that a minimum and/or maximum
number of children are required to match for the parent doc to be considered

View File

@ -1,5 +1,5 @@
[[query-dsl-has-parent-query]]
-== Has Parent Query
+=== Has Parent Query
The `has_parent` query accepts a query and a parent type. The query is
executed in the parent document space, which is specified by the parent
@ -22,7 +22,7 @@ in the same manner as the `has_child` query.
--------------------------------------------------
[float]
-=== Scoring capabilities
+==== Scoring capabilities
The `has_parent` also has scoring support. The
supported score types are `score` or `none`. The default is `none` and

View File

@ -1,5 +1,5 @@
[[query-dsl-ids-query]]
-== Ids Query
+=== Ids Query
Filters documents that only have the provided ids. Note, this query
uses the <<mapping-uid-field,_uid>> field.
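A minimal sketch (type and ids are illustrative):

[source,js]
--------------------------------------------------
{
    "ids" : {
        "type" : "my_type",
        "values" : [ "1", "4", "100" ]
    }
}
--------------------------------------------------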

View File

@ -1,3 +1,5 @@
include::query_filter_context.asciidoc[]
include::match-query.asciidoc[]
include::multi-match-query.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-indices-query]]
-== Indices Query
+=== Indices Query
The `indices` query can be used when executed across multiple indices,
allowing to have a query that executes only when executed on an index
@ -29,9 +29,9 @@ documents), and `all` (to match all). Defaults to `all`.
`query` is mandatory, as well as `indices` (or `index`).
[TIP]
-===================================================================
+====================================================================
The fields order is important: if the `indices` are provided before `query`
or `no_match_query`, the related queries get parsed only against the indices
that they are going to be executed on. This is useful to avoid parsing queries
when it is not necessary and prevent potential mapping errors.
-===================================================================
+====================================================================

View File

@ -0,0 +1,32 @@
[[joining-queries]]
== Joining queries
Performing full SQL-style joins in a distributed system like Elasticsearch is
prohibitively expensive. Instead, Elasticsearch offers two forms of join
which are designed to scale horizontally.
<<query-dsl-nested-query,`nested` query>>::
Documents may contain fields of type <<mapping-nested-type,`nested`>>. These
fields are used to index arrays of objects, where each object can be queried
(with the `nested` query) as an independent document.
<<query-dsl-has-child-query,`has_child`>> and <<query-dsl-has-parent-query,`has_parent`>> queries::
A <<mapping-parent-field,parent-child relationship>> can exist between two
document types within a single index. The `has_child` query returns parent
documents whose child documents match the specified query, while the
`has_parent` query returns child documents whose parent document matches the
specified query.
Also see the <<query-dsl-terms-lookup,terms-lookup mechanism>> in the `terms`
query, which allows you to build a `terms` query from values contained in
another document.
include::nested-query.asciidoc[]
include::has-child-query.asciidoc[]
include::has-parent-query.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-limit-query]]
-== Limit Query
+=== Limit Query
deprecated[1.6.0, Use <<search-request-body,terminate_after>> instead]

View File

@ -1,20 +1,17 @@
[[query-dsl-match-all-query]]
== Match All Query
-A query that matches all documents. Maps to Lucene `MatchAllDocsQuery`.
+The most simple query, which matches all documents, giving them all a `_score`
+of `1.0`.
[source,js]
--------------------------------------------------
-{
-"match_all" : { }
-}
+{ "match_all": {} }
--------------------------------------------------
-Which can also have boost associated with it:
+The `_score` can be changed with the `boost` parameter:
[source,js]
--------------------------------------------------
-{
-"match_all" : { "boost" : 1.2 }
-}
+{ "match_all": { "boost" : 1.2 }}
--------------------------------------------------

View File

@ -1,5 +1,5 @@
[[query-dsl-match-query]]
-== Match Query
+=== Match Query
A family of `match` queries that accept text/numerics/dates, analyze
the input, and construct a query out of it. For example:
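A minimal sketch (`message` is an illustrative field name):

[source,js]
--------------------------------------------------
{
    "match" : {
        "message" : "this is a test"
    }
}
--------------------------------------------------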
@ -16,10 +16,8 @@ it, and constructs a query out of it. For example:
Note, `message` is the name of a field, you can substitute the name of
any field (including `_all`) instead.
-[float]
-=== Types of Match Queries
There are three types of `match` query: `boolean`, `phrase`, and `phrase_prefix`:
[float]
[[query-dsl-match-query-boolean]]
==== boolean
@ -40,7 +38,6 @@ data-type mismatches, such as trying to query a numeric field with a text
query string. Defaults to `false`.
[[query-dsl-match-query-fuzziness]]
-[float]
===== Fuzziness
`fuzziness` allows _fuzzy matching_ based on the type of field being queried.
@ -69,7 +66,6 @@ change in structure, `message` is the field name):
--------------------------------------------------
[[query-dsl-match-query-zero]]
-[float]
===== Zero terms query
If the analyzer used removes all tokens in a query like a `stop` filter
does, the default behavior is to match no documents at all. In order to
@ -90,7 +86,6 @@ change that the `zero_terms_query` option can be used, which accepts
--------------------------------------------------
[[query-dsl-match-query-cutoff]]
-[float]
===== Cutoff frequency
The match query supports a `cutoff_frequency` that allows
@ -132,7 +127,6 @@ that when trying it out on test indexes with low document numbers you
should follow the advice in {defguide}/relevance-is-broken.html[Relevance is broken].
[[query-dsl-match-query-phrase]]
-[float]
==== phrase
The `match_phrase` query analyzes the text and creates a `phrase` query
@ -181,9 +175,8 @@ definition, or the default search analyzer, for example:
}
--------------------------------------------------
-[float]
[[query-dsl-match-query-phrase-prefix]]
-===== match_phrase_prefix
+==== match_phrase_prefix
The `match_phrase_prefix` is the same as `match_phrase`, except that it
allows for prefix matches on the last term in the text. For example:

View File

@ -1,5 +1,5 @@
[[query-dsl-missing-query]]
-== Missing Query
+=== Missing Query
Returns documents that have only `null` values or no value in the original field:
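A minimal sketch, assuming a `user` field:

[source,js]
--------------------------------------------------
{
    "missing" : { "field" : "user" }
}
--------------------------------------------------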
@ -42,7 +42,7 @@ These documents would *not* match the above filter:
<3> This field has one non-`null` value.
[float]
-=== `null_value` mapping
+==== `null_value` mapping
If the field mapping includes a `null_value` (see <<mapping-core-types>>) then explicit `null` values
are replaced with the specified `null_value`. For instance, if the `user` field were mapped
@ -75,7 +75,7 @@ no values in the `user` field and thus would match the `missing` filter:
--------------------------------------------------
[float]
-==== `existence` and `null_value` parameters
+===== `existence` and `null_value` parameters
When the field being queried has a `null_value` mapping, then the behaviour of
the `missing` filter can be altered with the `existence` and `null_value`

View File

@ -1,5 +1,5 @@
[[query-dsl-mlt-query]]
-== More Like This Query
+=== More Like This Query
The More Like This Query (MLT Query) finds documents that are "like" a given
set of documents. In order to do so, MLT selects a set of representative terms
@ -87,7 +87,7 @@ present in the index, the syntax is similar to <<docs-termvectors-artificial-doc
}
--------------------------------------------------
-=== How it Works
+==== How it Works
Suppose we wanted to find all documents similar to a given input document.
Obviously, the input document itself should be its best match for that type of
@ -139,14 +139,14 @@ curl -s -XPUT 'http://localhost:9200/imdb/' -d '{
}
--------------------------------------------------
-=== Parameters
+==== Parameters
The only required parameter is `like`; all other parameters have sensible
defaults. There are three types of parameters: one to specify the document
input, one for term selection, and one for query formation.
[float]
-=== Document Input Parameters
+==== Document Input Parameters
[horizontal]
`like`:: coming[2.0]
@ -179,7 +179,7 @@ A list of documents following the same syntax as the <<docs-multi-get,Multi GET
[float]
[[mlt-query-term-selection]]
-=== Term Selection Parameters
+==== Term Selection Parameters
[horizontal]
`max_query_terms`::
@ -219,7 +219,7 @@ The analyzer that is used to analyze the free form text. Defaults to the
analyzer associated with the first field in `fields`.
[float]
-=== Query Formation Parameters
+==== Query Formation Parameters
[horizontal]
`minimum_should_match`::

View File

@ -1,5 +1,5 @@
[[query-dsl-multi-match-query]]
-== Multi Match Query
+=== Multi Match Query
The `multi_match` query builds on the <<query-dsl-match-query,`match` query>>
to allow multi-field queries:
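A minimal sketch (query text and field names are illustrative):

[source,js]
--------------------------------------------------
{
    "multi_match" : {
        "query" : "this is a test",
        "fields" : [ "subject", "message" ]
    }
}
--------------------------------------------------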
@ -17,7 +17,7 @@ to allow multi-field queries:
<2> The fields to be queried.
[float]
-=== `fields` and per-field boosting
+==== `fields` and per-field boosting
Fields can be specified with wildcards, eg:
@ -47,7 +47,7 @@ Individual fields can be boosted with the caret (`^`) notation:
[[multi-match-types]]
[float]
-=== Types of `multi_match` query:
+==== Types of `multi_match` query:
The way the `multi_match` query is executed internally depends on the `type`
parameter, which can be set to:
@ -70,7 +70,7 @@ parameter, which can be set to:
combines the `_score` from each field. See <<type-phrase>>.
[[type-best-fields]]
-=== `best_fields`
+==== `best_fields`
The `best_fields` type is most useful when you are searching for multiple
words best found in the same field. For instance ``brown fox'' in a single
@ -121,7 +121,7 @@ and `cutoff_frequency`, as explained in <<query-dsl-match-query, match query>>.
[IMPORTANT]
[[operator-min]]
.`operator` and `minimum_should_match`
-==================================================
+===================================================
The `best_fields` and `most_fields` types are _field-centric_ -- they generate
a `match` query *per field*. This means that the `operator` and
@ -153,10 +153,10 @@ to match.
See <<type-cross-fields>> for a better solution.
-==================================================
+===================================================
[[type-most-fields]]
-=== `most_fields`
+==== `most_fields`
The `most_fields` type is most useful when querying multiple fields that
contain the same text analyzed in different ways. For instance, the main
@ -203,7 +203,7 @@ and `cutoff_frequency`, as explained in <<query-dsl-match-query,match query>>, b
*see <<operator-min>>*.
[[type-phrase]]
-=== `phrase` and `phrase_prefix`
+==== `phrase` and `phrase_prefix`
The `phrase` and `phrase_prefix` types behave just like <<type-best-fields>>,
but they use a `match_phrase` or `match_phrase_prefix` query instead of a
@ -240,7 +240,7 @@ in <<query-dsl-match-query>>. Type `phrase_prefix` additionally accepts
`max_expansions`.
[[type-cross-fields]]
-=== `cross_fields`
+==== `cross_fields`
The `cross_fields` type is particularly useful with structured documents where
multiple fields *should* match. For instance, when querying the `first_name`
@ -317,7 +317,7 @@ Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`,
`zero_terms_query` and `cutoff_frequency`, as explained in
<<query-dsl-match-query, match query>>.
-==== `cross_field` and analysis
+===== `cross_field` and analysis
The `cross_field` type can only work in term-centric mode on fields that have
the same analyzer. Fields with the same analyzer are grouped together as in
@ -411,7 +411,7 @@ which will be executed as:
blended("will", fields: [first, first.edge, last.edge, last])
blended("smith", fields: [first, first.edge, last.edge, last])
-==== `tie_breaker`
+===== `tie_breaker`
By default, each per-term `blended` query will use the best score returned by
any field in a group, then these scores are added together to give the final

View File

@ -1,5 +1,5 @@
[[query-dsl-nested-query]]
-== Nested Query
+=== Nested Query
Nested query allows to query nested objects / docs (see
<<mapping-nested-type,nested mapping>>). The

View File

@ -1,5 +1,5 @@
[[query-dsl-not-query]]
-== Not Query
+=== Not Query
A query that filters out matched documents using a query. For example:

View File

@ -1,5 +1,5 @@
[[query-dsl-or-query]]
-== Or Query
+=== Or Query
deprecated[2.0.0, Use the `bool` query instead]

View File

@ -1,5 +1,5 @@
[[query-dsl-prefix-query]]
-== Prefix Query
+=== Prefix Query
Matches documents that have fields containing terms with a specified
prefix (*not analyzed*). The prefix query maps to Lucene `PrefixQuery`.
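For example, a sketch matching terms starting with `ki` in a hypothetical `user` field:

[source,js]
--------------------------------------------------
{
    "prefix" : { "user" : "ki" }
}
--------------------------------------------------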

View File

@ -1,5 +1,5 @@
[[query-dsl-query-string-query]]
-== Query String Query
+=== Query String Query
A query that uses a query parser in order to parse its content. Here is
an example:
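A minimal sketch (field name and query text are illustrative):

[source,js]
--------------------------------------------------
{
    "query_string" : {
        "default_field" : "content",
        "query" : "this AND that OR thus"
    }
}
--------------------------------------------------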
@ -89,7 +89,7 @@ rewritten using the
parameter.
[float]
-=== Default Field
+==== Default Field
When not explicitly specifying the field to search on in the query
string syntax, the `index.query.default_field` will be used to derive
@ -99,7 +99,7 @@ So, if `_all` field is disabled, it might make sense to change it to set
a different default field.
[float]
-=== Multi Field
+==== Multi Field
The `query_string` query can also run against multiple fields. Fields can be
provided via the `"fields"` parameter (example below).

View File

@ -1,6 +1,6 @@
[[query-string-syntax]]
-=== Query string syntax
+==== Query string syntax
The query string ``mini-language'' is used by the
<<query-dsl-query-string-query>> and by the
@ -14,7 +14,7 @@ phrase, in the same order.
Operators allow you to customize the search -- the available options are
explained below.
-==== Field names
+===== Field names
As mentioned in <<query-dsl-query-string-query>>, the `default_field` is searched for the
search terms, but it is possible to specify other fields in the query syntax:
@ -46,7 +46,7 @@ search terms, but it is possible to specify other fields in the query syntax:
_exists_:title
-==== Wildcards
+===== Wildcards
Wildcard searches can be run on individual terms, using `?` to replace
a single character, and `*` to replace zero or more characters:
@ -58,12 +58,12 @@ perform very badly -- just think how many terms need to be queried to
match the query string `"a* b* c*"`.
[WARNING]
-======
+=======
Allowing a wildcard at the beginning of a word (eg `"*ing"`) is particularly
heavy, because all terms in the index need to be examined, just in case
they match. Leading wildcards can be disabled by setting
`allow_leading_wildcard` to `false`.
-======
+=======
Wildcarded terms are not analyzed by default -- they are lowercased
(`lowercase_expanded_terms` defaults to `true`) but no further analysis
@ -72,7 +72,7 @@ is missing some of its letters. However, by setting `analyze_wildcard` to
`true`, an attempt will be made to analyze wildcarded words before searching
the term list for matching terms.
-==== Regular expressions
+===== Regular expressions
Regular expression patterns can be embedded in the query string by
wrapping them in forward-slashes (`"/"`):
@ -82,7 +82,7 @@ wrapping them in forward-slashes (`"/"`):
The supported regular expression syntax is explained in <<regexp-syntax>>.
[WARNING]
-======
+=======
The `allow_leading_wildcard` parameter does not have any control over
regular expressions. A query string such as the following would force
Elasticsearch to visit every term in the index:
@ -90,9 +90,9 @@ Elasticsearch to visit every term in the index:
/.*n/
Use with caution!
-======
+=======
-==== Fuzziness
+===== Fuzziness
We can search for terms that are
similar to, but not exactly like our search terms, using the ``fuzzy''
@ -112,7 +112,7 @@ sufficient to catch 80% of all human misspellings. It can be specified as:
quikc~1
-==== Proximity searches
+===== Proximity searches
While a phrase query (eg `"john smith"`) expects all of the terms in exactly
the same order, a proximity query allows the specified words to be further
@ -127,7 +127,7 @@ query string, the more relevant that document is considered to be. When
compared to the above example query, the phrase `"quick fox"` would be
considered more relevant than `"quick brown fox"`.
-==== Ranges
+===== Ranges
Ranges can be specified for date, numeric or string fields. Inclusive ranges
are specified with square brackets `[min TO max]` and exclusive ranges with
@ -168,20 +168,20 @@ Ranges with one side unbounded can use the following syntax:
age:<=10
[NOTE]
-===================================================================
+====================================================================
To combine an upper and lower bound with the simplified syntax, you
would need to join two clauses with an `AND` operator:
age:(>=10 AND <20)
age:(+>=10 +<20)
-===================================================================
+====================================================================
The parsing of ranges in query strings can be complex and error prone. It is
much more reliable to use an explicit <<query-dsl-range-query,`range` query>>.
-==== Boosting
+===== Boosting
Use the _boost_ operator `^` to make one term more relevant than another.
For instance, if we want to find all documents about foxes, but we are
@ -196,7 +196,7 @@ Boosts can also be applied to phrases or to groups:
"john smith"^2 (foo bar)^4
-==== Boolean operators
+===== Boolean operators
By default, all terms are optional, as long as one term matches. A search
for `foo bar baz` will find any document that contains one or more of
@ -256,7 +256,7 @@ would look like this:
****
-==== Grouping
+===== Grouping
Multiple terms or clauses can be grouped together with parentheses, to form
sub-queries:
@ -268,7 +268,7 @@ of a sub-query:
status:(active OR pending) title:(full text search)^2
-==== Reserved characters
+===== Reserved characters
If you need to use any of the characters which function as operators in your
query itself (and not as operators), then you should escape them with
@ -290,7 +290,7 @@ index is actually `"wifi"`. Escaping the space will protect it from
being touched by the query string parser: `"wi\ fi"`.
****
-==== Empty Query
+===== Empty Query
If the query string is empty or only contains whitespaces the query will
yield an empty result set.

View File

@ -0,0 +1,77 @@
[[query-filter-context]]
== Query and filter context
The behaviour of a query clause depends on whether it is used in _query context_ or
in _filter context_:
Query context::
+
--
A query clause used in query context answers the question ``__How well does this
document match this query clause?__'' Besides deciding whether or not the
document matches, the query clause also calculates a `_score` representing how
well the document matches, relative to other documents.
Query context is in effect whenever a query clause is passed to a `query` parameter,
such as the `query` parameter in the <<search-request-query,`search`>> API.
--
Filter context::
+
--
In _filter_ context, a query clause answers the question ``__Does this document
match this query clause?__'' The answer is a simple Yes or No -- no scores are
calculated. Filter context is mostly used for filtering structured data, e.g.
* __Does this +timestamp+ fall into the range 2015 to 2016?__
* __Is the +status+ field set to ++"published"++__?
Frequently used filters will be cached automatically by Elasticsearch, to
speed up performance.
Filter context is in effect whenever a query clause is passed to a `filter`
parameter, such as the `filter` or `must_not` parameters in the
<<query-dsl-bool-query,`bool`>> query, the `filter` parameter in the
<<query-dsl-constant-score-query,`constant_score`>> query, or the
<<search-aggregations-bucket-filter-aggregation,`filter`>> aggregation.
--
Below is an example of query clauses being used in query and filter context
in the `search` API. This query will match documents where all of the following
conditions are met:
* The `title` field contains the word `search`.
* The `content` field contains the word `elasticsearch`.
* The `status` field contains the exact word `published`.
* The `publish_date` field contains a date from 1 Jan 2015 onwards.
[source,json]
------------------------------------
GET _search
{
"query": { <1>
"bool": { <2>
"must": [
{ "match": { "title": "Search" }}, <2>
{ "match": { "content": "Elasticsearch" }} <2>
],
"filter": [ <3>
{ "term": { "status": "published" }}, <4>
{ "range": { "publish_date": { "gte": "2015-01-01" }}} <4>
]
}
}
}
------------------------------------
<1> The `query` parameter indicates query context.
<2> The `bool` and two `match` clauses are used in query context,
which means that they are used to score how well each document
matches.
<3> The `filter` parameter indicates filter context.
<4> The `term` and `range` clauses are used in filter context.
They will filter out documents which do not match, but they will
not affect the score for matching documents.
TIP: Use query clauses in query context for conditions which should affect the
score of matching documents (i.e. how well does the document match), and use
all other query clauses in filter context.

View File

@ -1,5 +1,5 @@
[[query-dsl-range-query]]
-== Range Query
+=== Range Query
Matches documents with fields that have terms within a certain range.
The type of the Lucene query depends on the field type, for `string`
@ -30,7 +30,7 @@ The `range` query accepts the following parameters:
`boost`:: Sets the boost value of the query, defaults to `1.0`
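For instance, a sketch combining bounds and `boost` (field and values are illustrative):

[source,js]
--------------------------------------------------
{
    "range" : {
        "age" : {
            "gte" : 10,
            "lte" : 20,
            "boost" : 2.0
        }
    }
}
--------------------------------------------------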
[float]
-=== Date options
+==== Date options
When applied on `date` fields the `range` filter also accepts a `time_zone` parameter.
The `time_zone` parameter will be applied to your input lower and upper bounds and will

View File

@ -1,5 +1,5 @@
[[query-dsl-regexp-query]]
-== Regexp Query
+=== Regexp Query
The `regexp` query allows you to use regular expression term queries.
See <<regexp-syntax>> for details of the supported regular expression language.
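A minimal sketch (field name and pattern are illustrative):

[source,js]
--------------------------------------------------
{
    "regexp" : {
        "name.first" : "s.*y"
    }
}
--------------------------------------------------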

View File

@ -1,17 +1,17 @@
[[regexp-syntax]]
-=== Regular expression syntax
+==== Regular expression syntax
Regular expression queries are supported by the `regexp` and the `query_string`
queries. The Lucene regular expression engine
is not Perl-compatible but supports a smaller range of operators.
[NOTE]
-====
+=====
We will not attempt to explain regular expressions, but
just explain the supported operators.
-====
+=====
-==== Standard operators
+===== Standard operators
Anchoring::
+

View File

@ -1,5 +1,5 @@
[[query-dsl-script-query]]
-== Script Query
+=== Script Query
A query allowing to define
<<modules-scripting,scripts>> as filters. For
@ -20,7 +20,7 @@ example:
----------------------------------------------
[float]
-=== Custom Parameters
+==== Custom Parameters
Scripts are compiled and cached for faster execution. If the same script
can be used, just with different parameters provided, it is preferable

View File

@ -1,5 +1,5 @@
[[query-dsl-simple-query-string-query]]
-== Simple Query String Query
+=== Simple Query String Query
A query that uses the SimpleQueryParser to parse its content. Unlike the
regular `query_string` query, the `simple_query_string` query will never
@ -57,7 +57,7 @@ Defaults to `ROOT`.
|=======================================================================
[float]
-==== Simple Query String Syntax
+===== Simple Query String Syntax
The `simple_query_string` supports the following special characters:
* `+` signifies AND operation
@ -73,7 +73,7 @@ In order to search for any of these special characters, they will need to
be escaped with `\`.
[float]
-=== Default Field
+==== Default Field
When not explicitly specifying the field to search on in the query
string syntax, the `index.query.default_field` will be used to derive
which field to search on. It defaults to `_all` field.
@ -82,7 +82,7 @@ So, if `_all` field is disabled, it might make sense to change it to set
a different default field.
[float]
-=== Multi Field
+==== Multi Field
The fields parameter can also include pattern based field names,
allowing to automatically expand to the relevant fields (dynamically
introduced fields included). For example:
@ -98,7 +98,7 @@ introduced fields included). For example:
--------------------------------------------------
[float]
-=== Flags
+==== Flags
`simple_query_string` supports multiple flags to specify which parsing features
should be enabled. It is specified as a `|`-delimited string with the
`flags` parameter:
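For instance (a sketch; the query text and the flag combination shown are illustrative):

[source,js]
--------------------------------------------------
{
    "simple_query_string" : {
        "query" : "foo | bar + baz*",
        "flags" : "OR|AND|PREFIX"
    }
}
--------------------------------------------------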

View File

@ -1,5 +1,5 @@
[[query-dsl-span-containing-query]]
-== Span Containing Query
+=== Span Containing Query
Returns matches which enclose another span query. The span containing
query maps to Lucene `SpanContainingQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-first-query]]
-== Span First Query
+=== Span First Query
Matches spans near the beginning of a field. The span first query maps
to Lucene `SpanFirstQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-multi-term-query]]
-== Span Multi Term Query
+=== Span Multi Term Query
The `span_multi` query allows you to wrap a `multi term query` (one of wildcard,
fuzzy, prefix, term, range or regexp query) as a `span query`, so

View File

@ -1,5 +1,5 @@
[[query-dsl-span-near-query]]
-== Span Near Query
+=== Span Near Query
Matches spans which are near one another. One can specify _slop_, the
maximum number of intervening unmatched positions, as well as whether

View File

@ -1,5 +1,5 @@
[[query-dsl-span-not-query]]
-== Span Not Query
+=== Span Not Query
Removes matches which overlap with another span query. The span not
query maps to Lucene `SpanNotQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-or-query]]
-== Span Or Query
+=== Span Or Query
Matches the union of its span clauses. The span or query maps to Lucene
`SpanOrQuery`. Here is an example:

View File

@ -0,0 +1,65 @@
[[span-queries]]
== Span queries
Span queries are low-level positional queries which provide expert control
over the order and proximity of the specified terms. These are typically used
to implement very specific queries on legal documents or patents.
Span queries cannot be mixed with non-span queries (with the exception of the `span_multi` query).
The queries in this group are:
<<query-dsl-span-term-query,`span_term` query>>::
The equivalent of the <<query-dsl-term-query,`term` query>> but for use with
other span queries.
<<query-dsl-span-multi-term-query,`span_multi` query>>::
Wraps a <<query-dsl-term-query,`term`>>, <<query-dsl-range-query,`range`>>,
<<query-dsl-prefix-query,`prefix`>>, <<query-dsl-wildcard-query,`wildcard`>>,
<<query-dsl-regexp-query,`regexp`>>, or <<query-dsl-fuzzy-query,`fuzzy`>> query.
<<query-dsl-span-first-query,`span_first` query>>::
Accepts another span query whose matches must appear within the first N
positions of the field.
<<query-dsl-span-near-query,`span_near` query>>::
Accepts multiple span queries whose matches must be within the specified distance of each other, and possibly in the same order.
<<query-dsl-span-or-query,`span_or` query>>::
Combines multiple span queries -- returns documents which match any of the
specified queries.
<<query-dsl-span-not-query,`span_not` query>>::
Wraps another span query, and excludes any documents which match that query.
<<query-dsl-span-containing-query,`span_containing` query>>::
Accepts a list of span queries, but only returns those spans which also match a second span query.
<<query-dsl-span-within-query,`span_within` query>>::
The result from a single span query is returned as long as its span falls
within the spans returned by a list of other span queries.
include::span-term-query.asciidoc[]
include::span-multi-term-query.asciidoc[]
include::span-first-query.asciidoc[]
include::span-near-query.asciidoc[]
include::span-or-query.asciidoc[]
include::span-not-query.asciidoc[]
include::span-containing-query.asciidoc[]
include::span-within-query.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-span-term-query]]
-== Span Term Query
+=== Span Term Query
Matches spans containing a term. The span term query maps to Lucene
`SpanTermQuery`. Here is an example:
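A representative sketch (field and value are illustrative):

[source,js]
--------------------------------------------------
{
    "span_term" : { "user" : "kimchy" }
}
--------------------------------------------------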

View File

@ -1,5 +1,5 @@
[[query-dsl-span-within-query]]
== Span Within Query
=== Span Within Query
Returns matches which are enclosed inside another span query. The span within
query maps to Lucene `SpanWithinQuery`. Here is an example:
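A representative sketch (field and term values are illustrative):

[source,js]
--------------------------------------------------
{
    "span_within" : {
        "little" : {
            "span_term" : { "field1" : "foo" }
        },
        "big" : {
            "span_near" : {
                "clauses" : [
                    { "span_term" : { "field1" : "bar" } },
                    { "span_term" : { "field1" : "baz" } }
                ],
                "slop" : 5,
                "in_order" : true
            }
        }
    }
}
--------------------------------------------------

Matching spans from `little` that fall inside a span from `big` are returned.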

View File

@ -0,0 +1,29 @@
[[specialized-queries]]
== Specialized queries
This group contains queries which do not fit into the other groups:
<<query-dsl-mlt-query,`more_like_this` query>>::
This query finds documents which are similar to the specified text, document,
or collection of documents.
<<query-dsl-template-query,`template` query>>::
The `template` query accepts a Mustache template (either inline, indexed, or
from a file), and a map of parameters, and combines the two to generate the
final query to execute.
<<query-dsl-script-query,`script` query>>::
This query allows a script to act as a filter. Also see the
<<query-dsl-function-score-query,`function_score` query>>.
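As one hedged illustration, a `script` query clause that filters on a numeric
field could be sketched as follows — the field name `num1` and the default
scripting language are assumptions:

[source,js]
--------------------------------------------------
{
    "script" : {
        "script" : "doc['num1'].value > 1"
    }
}
--------------------------------------------------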
include::mlt-query.asciidoc[]
include::template-query.asciidoc[]
include::script-query.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-template-query]]
== Template Query
=== Template Query
A query that accepts a query template and a map of key/value pairs to fill in
template parameters. Templating is based on Mustache. For simple token substitution all you provide
@ -56,7 +56,7 @@ GET /_search
<1> New line characters (`\n`) should be escaped as `\\n` or removed,
and quotes (`"`) should be escaped as `\\"`.
=== Stored templates
==== Stored templates
You can register a template by storing it in the `config/scripts` directory, in a file using the `.mustache` extension.
In order to execute the stored template, reference it by name in the `file`

View File

@ -0,0 +1,93 @@
[[term-level-queries]]
== Term level queries
While the <<full-text-queries,full text queries>> will analyze the query
string before executing, the _term-level queries_ operate on the exact terms
that are stored in the inverted index.
These queries are usually used for structured data like numbers, dates, and
enums, rather than full text fields. Alternatively, they allow you to craft
low-level queries, foregoing the analysis process.
The queries in this group are:
<<query-dsl-term-query,`term` query>>::
Find documents which contain the exact term specified in the field
specified.
<<query-dsl-terms-query,`terms` query>>::
Find documents which contain any of the exact terms specified in the field
specified.
<<query-dsl-range-query,`range` query>>::
Find documents where the field specified contains values (dates, numbers,
or strings) in the range specified.
<<query-dsl-exists-query,`exists` query>>::
Find documents where the field specified contains any non-null value.
<<query-dsl-missing-query,`missing` query>>::
Find documents where the field specified is missing or contains only
`null` values.
<<query-dsl-prefix-query,`prefix` query>>::
Find documents where the field specified contains terms which begin with
the exact prefix specified.
<<query-dsl-wildcard-query,`wildcard` query>>::
Find documents where the field specified contains terms which match the
pattern specified, where the pattern supports single character wildcards
(`?`) and multi-character wildcards (`*`).
<<query-dsl-regexp-query,`regexp` query>>::
Find documents where the field specified contains terms which match the
<<regexp-syntax,regular expression>> specified.
<<query-dsl-fuzzy-query,`fuzzy` query>>::
Find documents where the field specified contains terms which are fuzzily
similar to the specified term. Fuzziness is measured as a
http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance[Levenshtein edit distance]
of 1 or 2.
<<query-dsl-type-query,`type` query>>::
Find documents of the specified type.
<<query-dsl-ids-query,`ids` query>>::
Find documents with the specified type and IDs.
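For instance, a sketch of a `range` query clause on an illustrative `age` field:

[source,js]
--------------------------------------------------
{
    "range" : {
        "age" : {
            "gte" : 10,
            "lte" : 20
        }
    }
}
--------------------------------------------------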
include::term-query.asciidoc[]
include::terms-query.asciidoc[]
include::range-query.asciidoc[]
include::exists-query.asciidoc[]
include::missing-query.asciidoc[]
include::prefix-query.asciidoc[]
include::wildcard-query.asciidoc[]
include::regexp-query.asciidoc[]
include::fuzzy-query.asciidoc[]
include::type-query.asciidoc[]
include::ids-query.asciidoc[]

View File

@ -1,5 +1,5 @@
[[query-dsl-term-query]]
== Term Query
=== Term Query
The `term` query finds documents that contain the *exact* term specified
in the inverted index. For instance:
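A representative sketch (field and value are illustrative):

[source,js]
--------------------------------------------------
{
    "term" : { "user" : "kimchy" }
}
--------------------------------------------------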

View File

@ -1,5 +1,5 @@
[[query-dsl-terms-query]]
== Terms Query
=== Terms Query
Filters documents that have fields that match any of the provided terms
(*not analyzed*). For example:
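A representative sketch (field and term values are illustrative):

[source,js]
--------------------------------------------------
{
    "terms" : {
        "tags" : [ "blue", "pink" ]
    }
}
--------------------------------------------------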
@ -19,7 +19,8 @@ The `terms` query is also aliased with `in` as the filter name for
simpler usage.
[float]
==== Terms lookup mechanism
[[query-dsl-terms-lookup]]
===== Terms lookup mechanism
When you need to specify a `terms` filter with a large number of terms, it can
be beneficial to fetch those term values from a document in an index. A
@ -31,21 +32,21 @@ lookup mechanism.
The terms lookup mechanism supports the following options:
[horizontal]
`index`::
`index`::
The index to fetch the term values from. Defaults to the
current index.
`type`::
`type`::
The type to fetch the term values from.
`id`::
`id`::
The id of the document to fetch the term values from.
`path`::
`path`::
The field specified as path to fetch the actual values for the
`terms` filter.
`routing`::
`routing`::
A custom routing value to be used when retrieving the
external terms doc.
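Putting the options together, a hedged sketch of a lookup-driven `terms`
filter (the index, type, id, and path values are illustrative):

[source,js]
--------------------------------------------------
{
    "terms" : {
        "user" : {
            "index" : "users",
            "type" : "user",
            "id" : "2",
            "path" : "followers"
        }
    }
}
--------------------------------------------------

This fetches the `followers` values from document `2` of type `user` in the
`users` index and uses them as the terms to match.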
@ -61,7 +62,7 @@ terms filter will prefer to execute the get request on a local node if
possible, reducing the need for networking.
[float]
==== Terms lookup twitter example
===== Terms lookup twitter example
[source,js]
--------------------------------------------------

View File

@ -1,5 +1,5 @@
[[query-dsl-type-query]]
== Type Query
=== Type Query
Filters documents matching the provided document / mapping type.
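A representative sketch (the type name is illustrative):

[source,js]
--------------------------------------------------
{
    "type" : {
        "value" : "my_type"
    }
}
--------------------------------------------------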

View File

@ -1,5 +1,5 @@
[[query-dsl-wildcard-query]]
== Wildcard Query
=== Wildcard Query
Matches documents that have fields matching a wildcard expression (*not
analyzed*). Supported wildcards are `*`, which matches any character

View File

@ -260,7 +260,7 @@ public class MapperQueryParser extends QueryParser {
}
}
if (query == null) {
query = super.getFieldQuery(currentMapper.names().indexName(), queryText, quoted);
query = super.getFieldQuery(currentMapper.fieldType().names().indexName(), queryText, quoted);
}
return query;
}
@ -372,7 +372,7 @@ public class MapperQueryParser extends QueryParser {
Query rangeQuery;
if (currentMapper instanceof DateFieldMapper && settings.timeZone() != null) {
DateFieldMapper dateFieldMapper = (DateFieldMapper) this.currentMapper;
rangeQuery = dateFieldMapper.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null, parseContext);
rangeQuery = dateFieldMapper.fieldType().rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null, parseContext);
} else {
rangeQuery = currentMapper.rangeQuery(part1, part2, startInclusive, endInclusive, parseContext);
}
@ -508,7 +508,7 @@ public class MapperQueryParser extends QueryParser {
query = currentMapper.prefixQuery(termStr, multiTermRewriteMethod, parseContext);
}
if (query == null) {
query = getPossiblyAnalyzedPrefixQuery(currentMapper.names().indexName(), termStr);
query = getPossiblyAnalyzedPrefixQuery(currentMapper.fieldType().names().indexName(), termStr);
}
return query;
}
@ -644,7 +644,7 @@ public class MapperQueryParser extends QueryParser {
if (!forcedAnalyzer) {
setAnalyzer(parseContext.getSearchAnalyzer(currentMapper));
}
indexedNameField = currentMapper.names().indexName();
indexedNameField = currentMapper.fieldType().names().indexName();
return getPossiblyAnalyzedWildcardQuery(indexedNameField, termStr);
}
return getPossiblyAnalyzedWildcardQuery(indexedNameField, termStr);

View File

@ -32,6 +32,11 @@ public class TimestampParsingException extends ElasticsearchException {
this.timestamp = timestamp;
}
public TimestampParsingException(String timestamp, Throwable cause) {
super("failed to parse timestamp [" + timestamp + "]", cause);
this.timestamp = timestamp;
}
public String timestamp() {
return timestamp;
}

View File

@ -113,8 +113,8 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
if (fieldMapper.isNumeric()) {
throw new IllegalArgumentException("Can't process field [" + request.field() + "], Analysis requests are not supported on numeric fields");
}
analyzer = fieldMapper.indexAnalyzer();
field = fieldMapper.names().indexName();
analyzer = fieldMapper.fieldType().indexAnalyzer();
field = fieldMapper.fieldType().names().indexName();
}
}

View File

@ -179,7 +179,7 @@ public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomO
for (String field : request.fields()) {
if (Regex.isMatchAllPattern(field)) {
for (FieldMapper fieldMapper : allFieldMappers) {
addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
addFieldMapper(fieldMapper.fieldType().names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
}
} else if (Regex.isSimpleMatchPattern(field)) {
// go through the field mappers 3 times, to make sure we give preference to the resolve order: full name, index name, name.
@ -187,22 +187,22 @@ public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomO
Collection<FieldMapper> remainingFieldMappers = Lists.newLinkedList(allFieldMappers);
for (Iterator<FieldMapper> it = remainingFieldMappers.iterator(); it.hasNext(); ) {
final FieldMapper fieldMapper = it.next();
if (Regex.simpleMatch(field, fieldMapper.names().fullName())) {
addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
if (Regex.simpleMatch(field, fieldMapper.fieldType().names().fullName())) {
addFieldMapper(fieldMapper.fieldType().names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());
it.remove();
}
}
for (Iterator<FieldMapper> it = remainingFieldMappers.iterator(); it.hasNext(); ) {
final FieldMapper fieldMapper = it.next();
if (Regex.simpleMatch(field, fieldMapper.names().indexName())) {
addFieldMapper(fieldMapper.names().indexName(), fieldMapper, fieldMappings, request.includeDefaults());
if (Regex.simpleMatch(field, fieldMapper.fieldType().names().indexName())) {
addFieldMapper(fieldMapper.fieldType().names().indexName(), fieldMapper, fieldMappings, request.includeDefaults());
it.remove();
}
}
for (Iterator<FieldMapper> it = remainingFieldMappers.iterator(); it.hasNext(); ) {
final FieldMapper fieldMapper = it.next();
if (Regex.simpleMatch(field, fieldMapper.names().shortName())) {
addFieldMapper(fieldMapper.names().shortName(), fieldMapper, fieldMappings, request.includeDefaults());
if (Regex.simpleMatch(field, fieldMapper.fieldType().names().shortName())) {
addFieldMapper(fieldMapper.fieldType().names().shortName(), fieldMapper, fieldMappings, request.includeDefaults());
it.remove();
}
}
@ -229,7 +229,7 @@ public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomO
builder.startObject();
fieldMapper.toXContent(builder, includeDefaults ? includeDefaultsParams : ToXContent.EMPTY_PARAMS);
builder.endObject();
fieldMappings.put(field, new FieldMappingMetaData(fieldMapper.names().fullName(), builder.bytes()));
fieldMappings.put(field, new FieldMappingMetaData(fieldMapper.fieldType().names().fullName(), builder.bytes()));
} catch (IOException e) {
throw new ElasticsearchException("failed to serialize XContent of field [" + field + "]", e);
}

View File

@ -332,7 +332,7 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite
} else {
throw new IllegalArgumentException("Action/metadata line [" + line + "] contains an unknown parameter [" + currentFieldName + "]");
}
} else {
} else if (token != XContentParser.Token.VALUE_NULL) {
throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected a simple value for field [" + currentFieldName + "] but found [" + token + "]");
}
}

View File

@ -161,19 +161,11 @@ public class MappingMetaData extends AbstractDiffable<MappingMetaData> {
public static class Timestamp {
public static String parseStringTimestamp(String timestampAsString, FormatDateTimeFormatter dateTimeFormatter) throws TimestampParsingException {
long ts;
try {
// if we manage to parse it, its a millisecond timestamp, just return the string as is
ts = Long.parseLong(timestampAsString);
return timestampAsString;
} catch (NumberFormatException e) {
try {
ts = dateTimeFormatter.parser().parseMillis(timestampAsString);
} catch (RuntimeException e1) {
throw new TimestampParsingException(timestampAsString);
}
return Long.toString(dateTimeFormatter.parser().parseMillis(timestampAsString));
} catch (RuntimeException e) {
throw new TimestampParsingException(timestampAsString, e);
}
return Long.toString(ts);
}
@ -289,7 +281,7 @@ public class MappingMetaData extends AbstractDiffable<MappingMetaData> {
this.id = new Id(docMapper.idFieldMapper().path());
this.routing = new Routing(docMapper.routingFieldMapper().required(), docMapper.routingFieldMapper().path());
this.timestamp = new Timestamp(docMapper.timestampFieldMapper().enabled(), docMapper.timestampFieldMapper().path(),
docMapper.timestampFieldMapper().dateTimeFormatter().format(), docMapper.timestampFieldMapper().defaultTimestamp(),
docMapper.timestampFieldMapper().fieldType().dateTimeFormatter().format(), docMapper.timestampFieldMapper().defaultTimestamp(),
docMapper.timestampFieldMapper().ignoreMissing());
this.hasParentField = docMapper.parentFieldMapper().active();
}

View File

@ -728,7 +728,7 @@ public abstract class ShapeBuilder implements ToXContent {
Distance radius = null;
CoordinateNode node = null;
GeometryCollectionBuilder geometryCollections = null;
Orientation requestedOrientation = (shapeMapper == null) ? Orientation.RIGHT : shapeMapper.orientation();
Orientation requestedOrientation = (shapeMapper == null) ? Orientation.RIGHT : shapeMapper.fieldType().orientation();
XContentParser.Token token;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {

View File

@ -19,14 +19,14 @@
package org.elasticsearch.common.joda;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.ElasticsearchParseException;
import org.joda.time.DateTimeZone;
import org.joda.time.MutableDateTime;
import org.joda.time.format.DateTimeFormatter;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import static com.google.common.base.Preconditions.checkNotNull;
/**
* A parser for date/time formatted text with optional date math.
@ -38,13 +38,10 @@ import java.util.concurrent.TimeUnit;
public class DateMathParser {
private final FormatDateTimeFormatter dateTimeFormatter;
private final TimeUnit timeUnit;
public DateMathParser(FormatDateTimeFormatter dateTimeFormatter, TimeUnit timeUnit) {
if (dateTimeFormatter == null) throw new NullPointerException();
if (timeUnit == null) throw new NullPointerException();
public DateMathParser(FormatDateTimeFormatter dateTimeFormatter) {
checkNotNull(dateTimeFormatter);
this.dateTimeFormatter = dateTimeFormatter;
this.timeUnit = timeUnit;
}
public long parse(String text, Callable<Long> now) {
@ -195,17 +192,6 @@ public class DateMathParser {
}
private long parseDateTime(String value, DateTimeZone timeZone) {
// first check for timestamp
if (value.length() > 4 && StringUtils.isNumeric(value)) {
try {
long time = Long.parseLong(value);
return timeUnit.toMillis(time);
} catch (NumberFormatException e) {
throw new ElasticsearchParseException("failed to parse date field [" + value + "] as timestamp", e);
}
}
DateTimeFormatter parser = dateTimeFormatter.parser();
if (timeZone != null) {
parser = parser.withZone(timeZone);

View File

@ -27,6 +27,7 @@ import org.joda.time.field.ScaledDurationField;
import org.joda.time.format.*;
import java.util.Locale;
import java.util.regex.Pattern;
/**
*
@ -133,6 +134,10 @@ public class Joda {
formatter = ISODateTimeFormat.yearMonth();
} else if ("yearMonthDay".equals(input) || "year_month_day".equals(input)) {
formatter = ISODateTimeFormat.yearMonthDay();
} else if ("epoch_second".equals(input)) {
formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(false)).toFormatter();
} else if ("epoch_millis".equals(input)) {
formatter = new DateTimeFormatterBuilder().append(new EpochTimeParser(true)).toFormatter();
} else if (Strings.hasLength(input) && input.contains("||")) {
String[] formats = Strings.delimitedListToStringArray(input, "||");
DateTimeParser[] parsers = new DateTimeParser[formats.length];
@ -192,4 +197,50 @@ public class Joda {
return new OffsetDateTimeField(new DividedDateTimeField(new OffsetDateTimeField(chronology.monthOfYear(), -1), QuarterOfYear, 3), 1);
}
};
public static class EpochTimeParser implements DateTimeParser {
private static final Pattern MILLI_SECOND_PRECISION_PATTERN = Pattern.compile("^\\d{1,13}$");
private static final Pattern SECOND_PRECISION_PATTERN = Pattern.compile("^\\d{1,10}$");
private final boolean hasMilliSecondPrecision;
private final Pattern pattern;
public EpochTimeParser(boolean hasMilliSecondPrecision) {
this.hasMilliSecondPrecision = hasMilliSecondPrecision;
this.pattern = hasMilliSecondPrecision ? MILLI_SECOND_PRECISION_PATTERN : SECOND_PRECISION_PATTERN;
}
@Override
public int estimateParsedLength() {
return hasMilliSecondPrecision ? 13 : 10;
}
@Override
public int parseInto(DateTimeParserBucket bucket, String text, int position) {
if (text.length() > estimateParsedLength() ||
// timestamps have to have UTC timezone
bucket.getZone() != DateTimeZone.UTC ||
pattern.matcher(text).matches() == false) {
return -1;
}
int factor = hasMilliSecondPrecision ? 1 : 1000;
try {
long millis = Long.valueOf(text) * factor;
DateTime dt = new DateTime(millis, DateTimeZone.UTC);
bucket.saveField(DateTimeFieldType.year(), dt.getYear());
bucket.saveField(DateTimeFieldType.monthOfYear(), dt.getMonthOfYear());
bucket.saveField(DateTimeFieldType.dayOfMonth(), dt.getDayOfMonth());
bucket.saveField(DateTimeFieldType.hourOfDay(), dt.getHourOfDay());
bucket.saveField(DateTimeFieldType.minuteOfHour(), dt.getMinuteOfHour());
bucket.saveField(DateTimeFieldType.secondOfMinute(), dt.getSecondOfMinute());
bucket.saveField(DateTimeFieldType.millisOfSecond(), dt.getMillisOfSecond());
bucket.setZone(DateTimeZone.UTC);
} catch (Exception e) {
return -1;
}
return text.length();
}
};
}

View File

@ -20,7 +20,7 @@
package org.elasticsearch.index.fielddata;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.mapper.FieldMapper.Loading;
import org.elasticsearch.index.mapper.MappedFieldType.Loading;
/**
*/

View File

@ -32,6 +32,7 @@ import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexComponent;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -77,7 +78,7 @@ public interface IndexFieldData<FD extends AtomicFieldData> extends IndexCompone
/**
* The field name.
*/
FieldMapper.Names getFieldNames();
MappedFieldType.Names getFieldNames();
/**
* The field data type.

View File

@ -23,6 +23,7 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.util.Accountable;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
/**
* A simple field data cache abstraction on the *index* level.
@ -47,9 +48,9 @@ public interface IndexFieldDataCache {
interface Listener {
void onLoad(FieldMapper.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage);
void onLoad(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage);
void onUnload(FieldMapper.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes);
void onUnload(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes);
}
class None implements IndexFieldDataCache {

View File

@ -32,6 +32,7 @@ import org.elasticsearch.index.AbstractIndexComponent;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.fielddata.plain.*;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.core.BooleanFieldMapper;
import org.elasticsearch.index.mapper.internal.IndexFieldMapper;
import org.elasticsearch.index.mapper.internal.ParentFieldMapper;
@ -46,6 +47,8 @@ import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentMap;
import static org.elasticsearch.index.mapper.MappedFieldType.Names;
/**
*/
public class IndexFieldDataService extends AbstractIndexComponent {
@ -226,12 +229,12 @@ public class IndexFieldDataService extends AbstractIndexComponent {
@SuppressWarnings("unchecked")
public <IFD extends IndexFieldData<?>> IFD getForField(FieldMapper mapper) {
final FieldMapper.Names fieldNames = mapper.names();
final FieldDataType type = mapper.fieldDataType();
final Names fieldNames = mapper.fieldType().names();
final FieldDataType type = mapper.fieldType().fieldDataType();
if (type == null) {
throw new IllegalArgumentException("found no fielddata type for field [" + fieldNames.fullName() + "]");
}
final boolean docValues = mapper.hasDocValues();
final boolean docValues = mapper.fieldType().hasDocValues();
final String key = fieldNames.indexName();
IndexFieldData<?> fieldData = loadedFieldData.get(key);
if (fieldData == null) {

View File

@ -26,7 +26,7 @@ import org.elasticsearch.common.metrics.CounterMetric;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.index.shard.AbstractIndexShardComponent;
import org.elasticsearch.index.shard.ShardId;
@ -62,7 +62,7 @@ public class ShardFieldData extends AbstractIndexShardComponent implements Index
}
@Override
public void onLoad(FieldMapper.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage) {
public void onLoad(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, Accountable ramUsage) {
totalMetric.inc(ramUsage.ramBytesUsed());
String keyFieldName = fieldNames.indexName();
CounterMetric total = perFieldTotals.get(keyFieldName);
@ -79,7 +79,7 @@ public class ShardFieldData extends AbstractIndexShardComponent implements Index
}
@Override
public void onUnload(FieldMapper.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes) {
public void onUnload(MappedFieldType.Names fieldNames, FieldDataType fieldDataType, boolean wasEvicted, long sizeInBytes) {
if (wasEvicted) {
evictionsMetric.inc();
}

View File

@ -31,6 +31,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.search.MultiValueMode;
import java.util.Collection;
@ -41,11 +42,11 @@ import java.util.Collections;
*/
public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponent implements IndexOrdinalsFieldData, Accountable {
private final FieldMapper.Names fieldNames;
private final MappedFieldType.Names fieldNames;
private final FieldDataType fieldDataType;
private final long memorySizeInBytes;
protected GlobalOrdinalsIndexFieldData(Index index, Settings settings, FieldMapper.Names fieldNames, FieldDataType fieldDataType, long memorySizeInBytes) {
protected GlobalOrdinalsIndexFieldData(Index index, Settings settings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType, long memorySizeInBytes) {
super(index, settings);
this.fieldNames = fieldNames;
this.fieldDataType = fieldDataType;
@ -68,7 +69,7 @@ public abstract class GlobalOrdinalsIndexFieldData extends AbstractIndexComponen
}
@Override
public FieldMapper.Names getFieldNames() {
public MappedFieldType.Names getFieldNames() {
return fieldNames;
}

View File

@ -28,6 +28,7 @@ import org.elasticsearch.index.fielddata.AtomicOrdinalsFieldData;
import org.elasticsearch.index.fielddata.FieldDataType;
import org.elasticsearch.index.fielddata.plain.AbstractAtomicOrdinalsFieldData;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import java.util.Collection;
@ -38,7 +39,7 @@ final class InternalGlobalOrdinalsIndexFieldData extends GlobalOrdinalsIndexFiel
private final Atomic[] atomicReaders;
InternalGlobalOrdinalsIndexFieldData(Index index, Settings settings, FieldMapper.Names fieldNames, FieldDataType fieldDataType, AtomicOrdinalsFieldData[] segmentAfd, OrdinalMap ordinalMap, long memorySizeInBytes) {
InternalGlobalOrdinalsIndexFieldData(Index index, Settings settings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType, AtomicOrdinalsFieldData[] segmentAfd, OrdinalMap ordinalMap, long memorySizeInBytes) {
super(index, settings, fieldNames, fieldDataType, memorySizeInBytes);
this.atomicReaders = new Atomic[segmentAfd.length];
for (int i = 0; i < segmentAfd.length; i++) {

View File

@ -30,6 +30,7 @@ import org.elasticsearch.index.AbstractIndexComponent;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.settings.IndexSettings;
import java.io.IOException;
@ -38,11 +39,11 @@ import java.io.IOException;
*/
public abstract class AbstractIndexFieldData<FD extends AtomicFieldData> extends AbstractIndexComponent implements IndexFieldData<FD> {
private final FieldMapper.Names fieldNames;
private final MappedFieldType.Names fieldNames;
protected final FieldDataType fieldDataType;
protected final IndexFieldDataCache cache;
public AbstractIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames, FieldDataType fieldDataType, IndexFieldDataCache cache) {
public AbstractIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType, IndexFieldDataCache cache) {
super(index, indexSettings);
this.fieldNames = fieldNames;
this.fieldDataType = fieldDataType;
@ -50,7 +51,7 @@ public abstract class AbstractIndexFieldData<FD extends AtomicFieldData> extends
}
@Override
public FieldMapper.Names getFieldNames() {
public MappedFieldType.Names getFieldNames() {
return this.fieldNames;
}

View File

@ -28,7 +28,7 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.search.MultiValueMode;
import java.io.IOException;

View File

@ -29,7 +29,7 @@ import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.search.MultiValueMode;

View File

@ -25,7 +25,7 @@ import org.elasticsearch.index.fielddata.FieldDataType;
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.search.MultiValueMode;
public class BinaryDVIndexFieldData extends DocValuesIndexFieldData implements IndexFieldData<BinaryDVAtomicFieldData> {

View File

@ -39,7 +39,7 @@ import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;
import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorSource;
import org.elasticsearch.index.fielddata.fieldcomparator.FloatValuesComparatorSource;
import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.search.MultiValueMode;
import java.io.IOException;

View File

@ -29,7 +29,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.search.MultiValueMode;
@ -67,8 +67,8 @@ public class BytesBinaryDVIndexFieldData extends DocValuesIndexFieldData impleme
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
// Ignore breaker
final Names fieldNames = mapper.names();
return new BytesBinaryDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
final Names fieldNames = mapper.fieldType().names();
return new BytesBinaryDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
}
}

View File

@ -25,7 +25,7 @@ import org.elasticsearch.index.Index;
import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.search.MultiValueMode;
@ -42,7 +42,7 @@ public final class DisabledIndexFieldData extends AbstractIndexFieldData<AtomicF
public IndexFieldData<AtomicFieldData> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
// Ignore Circuit Breaker
return new DisabledIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache);
return new DisabledIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache);
}
}

View File

@ -31,7 +31,8 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.index.fielddata.IndexNumericFieldData.NumericType;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.internal.IdFieldMapper;
import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;
@ -93,8 +94,8 @@ public abstract class DocValuesIndexFieldData {
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
// Ignore Circuit Breaker
final FieldMapper.Names fieldNames = mapper.names();
final Settings fdSettings = mapper.fieldDataType().getSettings();
final Names fieldNames = mapper.fieldType().names();
final Settings fdSettings = mapper.fieldType().fieldDataType().getSettings();
final Map<String, Settings> filter = fdSettings.getGroups("filter");
if (filter != null && !filter.isEmpty()) {
throw new IllegalArgumentException("Doc values field data doesn't support filters [" + fieldNames.fullName() + "]");
@ -102,19 +103,19 @@ public abstract class DocValuesIndexFieldData {
if (BINARY_INDEX_FIELD_NAMES.contains(fieldNames.indexName())) {
assert numericType == null;
return new BinaryDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
return new BinaryDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
} else if (NUMERIC_INDEX_FIELD_NAMES.contains(fieldNames.indexName())) {
assert !numericType.isFloatingPoint();
return new NumericDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
return new NumericDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
} else if (numericType != null) {
if (Version.indexCreated(indexSettings).onOrAfter(Version.V_1_4_0_Beta1)) {
return new SortedNumericDVIndexFieldData(index, fieldNames, numericType, mapper.fieldDataType());
return new SortedNumericDVIndexFieldData(index, fieldNames, numericType, mapper.fieldType().fieldDataType());
} else {
// prior to ES 1.4: multi-valued numerics were boxed inside a byte[] as BINARY
return new BinaryDVNumericIndexFieldData(index, fieldNames, numericType, mapper.fieldDataType());
return new BinaryDVNumericIndexFieldData(index, fieldNames, numericType, mapper.fieldType().fieldDataType());
}
} else {
return new SortedSetDVOrdinalsIndexFieldData(index, cache, indexSettings, fieldNames, breakerService, mapper.fieldDataType());
return new SortedSetDVOrdinalsIndexFieldData(index, cache, indexSettings, fieldNames, breakerService, mapper.fieldType().fieldDataType());
}
}

View File

@ -53,6 +53,7 @@ import org.elasticsearch.index.fielddata.fieldcomparator.DoubleValuesComparatorS
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -74,11 +75,11 @@ public class DoubleArrayIndexFieldData extends AbstractIndexFieldData<AtomicNume
@Override
public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
return new DoubleArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
return new DoubleArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
}
}
public DoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
public DoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache);
this.breakerService = breakerService;

View File

@ -33,6 +33,7 @@ import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -48,11 +49,11 @@ public class FSTBytesIndexFieldData extends AbstractIndexOrdinalsFieldData {
@Override
public IndexOrdinalsFieldData build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
return new FSTBytesIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
return new FSTBytesIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
}
}
FSTBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames, FieldDataType fieldDataType,
FSTBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames, FieldDataType fieldDataType,
IndexFieldDataCache cache, CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache, breakerService);
this.breakerService = breakerService;

View File

@ -52,6 +52,7 @@ import org.elasticsearch.index.fielddata.fieldcomparator.FloatValuesComparatorSo
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -73,11 +74,11 @@ public class FloatArrayIndexFieldData extends AbstractIndexFieldData<AtomicNumer
@Override
public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
return new FloatArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
return new FloatArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
}
}
public FloatArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
public FloatArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache);
this.breakerService = breakerService;

View File

@ -27,7 +27,8 @@ import org.elasticsearch.index.Index;
import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.search.MultiValueMode;
@ -65,8 +66,8 @@ public class GeoPointBinaryDVIndexFieldData extends DocValuesIndexFieldData impl
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
// Ignore breaker
final FieldMapper.Names fieldNames = mapper.names();
return new GeoPointBinaryDVIndexFieldData(index, fieldNames, mapper.fieldDataType());
final Names fieldNames = mapper.fieldType().names();
return new GeoPointBinaryDVIndexFieldData(index, fieldNames, mapper.fieldType().fieldDataType());
}
}

View File

@ -36,6 +36,7 @@ import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
import org.elasticsearch.index.settings.IndexSettings;
@ -54,7 +55,7 @@ public class GeoPointCompressedIndexFieldData extends AbstractIndexGeoPointField
@Override
public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
FieldDataType type = mapper.fieldDataType();
FieldDataType type = mapper.fieldType().fieldDataType();
final String precisionAsString = type.getSettings().get(PRECISION_KEY);
final Distance precision;
if (precisionAsString != null) {
@ -62,13 +63,13 @@ public class GeoPointCompressedIndexFieldData extends AbstractIndexGeoPointField
} else {
precision = DEFAULT_PRECISION_VALUE;
}
return new GeoPointCompressedIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, precision, breakerService);
return new GeoPointCompressedIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, precision, breakerService);
}
}
private final GeoPointFieldMapper.Encoding encoding;
public GeoPointCompressedIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
public GeoPointCompressedIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
FieldDataType fieldDataType, IndexFieldDataCache cache, Distance precision,
CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache);

View File

@ -33,6 +33,7 @@ import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -48,11 +49,11 @@ public class GeoPointDoubleArrayIndexFieldData extends AbstractIndexGeoPointFiel
@Override
public IndexFieldData<?> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
return new GeoPointDoubleArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
return new GeoPointDoubleArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
}
}
public GeoPointDoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
public GeoPointDoubleArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache);
this.breakerService = breakerService;

View File

@ -34,6 +34,7 @@ import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.index.fielddata.IndexFieldDataCache;
import org.elasticsearch.index.fielddata.IndexOrdinalsFieldData;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -47,7 +48,7 @@ public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData {
@Override
public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper mapper, IndexFieldDataCache cache,
CircuitBreakerService breakerService, MapperService mapperService) {
return new IndexIndexFieldData(index, mapper.names());
return new IndexIndexFieldData(index, mapper.fieldType().names());
}
}
@ -101,7 +102,7 @@ public class IndexIndexFieldData extends AbstractIndexOrdinalsFieldData {
private final AtomicOrdinalsFieldData atomicFieldData;
private IndexIndexFieldData(Index index, FieldMapper.Names names) {
private IndexIndexFieldData(Index index, MappedFieldType.Names names) {
super(index, Settings.EMPTY, names, new FieldDataType("string"), null, null);
atomicFieldData = new IndexAtomicFieldData(index().name());
}

View File

@ -31,7 +31,7 @@ import org.elasticsearch.index.fielddata.FieldDataType;
import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource.Nested;
import org.elasticsearch.index.fielddata.IndexNumericFieldData;
import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSource;
import org.elasticsearch.index.mapper.FieldMapper.Names;
import org.elasticsearch.index.mapper.MappedFieldType.Names;
import org.elasticsearch.search.MultiValueMode;
import java.io.IOException;

View File

@ -57,6 +57,7 @@ import org.elasticsearch.index.fielddata.fieldcomparator.LongValuesComparatorSou
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -86,14 +87,14 @@ public class PackedArrayIndexFieldData extends AbstractIndexFieldData<AtomicNume
@Override
public IndexFieldData<AtomicNumericFieldData> build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
return new PackedArrayIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, numericType, breakerService);
return new PackedArrayIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, numericType, breakerService);
}
}
private final NumericType numericType;
private final CircuitBreakerService breakerService;
public PackedArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
public PackedArrayIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
FieldDataType fieldDataType, IndexFieldDataCache cache, NumericType numericType,
CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache);

View File

@ -33,6 +33,7 @@ import org.elasticsearch.index.fielddata.*;
import org.elasticsearch.index.fielddata.ordinals.Ordinals;
import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
@ -49,11 +50,11 @@ public class PagedBytesIndexFieldData extends AbstractIndexOrdinalsFieldData {
@Override
public IndexOrdinalsFieldData build(Index index, @IndexSettings Settings indexSettings, FieldMapper mapper,
IndexFieldDataCache cache, CircuitBreakerService breakerService, MapperService mapperService) {
return new PagedBytesIndexFieldData(index, indexSettings, mapper.names(), mapper.fieldDataType(), cache, breakerService);
return new PagedBytesIndexFieldData(index, indexSettings, mapper.fieldType().names(), mapper.fieldType().fieldDataType(), cache, breakerService);
}
}
public PagedBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, FieldMapper.Names fieldNames,
public PagedBytesIndexFieldData(Index index, @IndexSettings Settings indexSettings, MappedFieldType.Names fieldNames,
FieldDataType fieldDataType, IndexFieldDataCache cache, CircuitBreakerService breakerService) {
super(index, indexSettings, fieldNames, fieldDataType, cache, breakerService);
}

Some files were not shown because too many files have changed in this diff.