Merge pull request #10985 from jpountz/enhancement/remove_filters

Query DSL: Remove filter parsers.
This commit is contained in:
Adrien Grand 2015-05-07 20:15:31 +02:00
commit e7540e9598
329 changed files with 2729 additions and 9471 deletions

View File

@ -57,7 +57,7 @@ Response:
==== High-precision requests
When requesting detailed buckets (typically for displaying a "zoomed in" map) a filter like <<query-dsl-geo-bounding-box-filter,geo_bounding_box>> should be applied to narrow the subject area; otherwise, potentially millions of buckets will be created and returned.
When requesting detailed buckets (typically for displaying a "zoomed in" map) a filter like <<query-dsl-geo-bounding-box-query,geo_bounding_box>> should be applied to narrow the subject area; otherwise, potentially millions of buckets will be created and returned.
[source,js]
--------------------------------------------------

View File

@ -157,7 +157,7 @@ can be specified as a whole number representing time in milliseconds, or as a ti
=== Distance Units
Wherever distances need to be specified, such as the `distance` parameter in
the <<query-dsl-geo-distance-filter>>, the default unit if none is specified is
the <<query-dsl-geo-distance-query>>, the default unit if none is specified is
the meter. Distances can be specified in other units, such as `"1km"` or
`"2mi"` (2 miles).
@ -174,7 +174,7 @@ Centimeter:: `cm` or `centimeters`
Millimeter:: `mm` or `millimeters`
Nautical mile:: `NM`, `nmi` or `nauticalmiles`
The `precision` parameter in the <<query-dsl-geohash-cell-filter>> accepts
The `precision` parameter in the <<query-dsl-geohash-cell-query>> accepts
distances with the above units, but if no unit is specified, then the
precision is interpreted as the length of the geohash.
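As an illustrative sketch (the field name `pin` and the coordinates are assumed, not from the original), a `geohash_cell` query with a distance-based `precision` might look like:

[source,js]
--------------------------------------------------
{
    "geohash_cell" : {
        "pin" : {
            "lat" : 52.519,
            "lon" : 13.408
        },
        "precision" : "1km"
    }
}
--------------------------------------------------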

View File

@ -865,12 +865,9 @@ curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
In the previous section, we skipped over a little detail called the document score (`_score` field in the search results). The score is a numeric value that is a relative measure of how well the document matches the search query that we specified. The higher the score, the more relevant the document; the lower the score, the less relevant the document.
All queries in Elasticsearch trigger computation of the relevance scores. In cases where we do not need the relevance scores, Elasticsearch provides another query capability in the form of <<query-dsl-filters,filters>>. Filters are similar in concept to queries except that they are optimized for much faster execution speeds for two primary reasons:
But queries do not always need to produce scores, in particular when they are only used for "filtering" the document set. Elasticsearch detects these situations and automatically optimizes query execution in order not to compute useless scores.
* Filters do not score so they are faster to execute than queries
* Filters can be http://www.elastic.co/blog/all-about-elasticsearch-filter-bitsets/[cached in memory] allowing repeated search executions to be significantly faster than queries
To understand filters, let's first introduce the <<query-dsl-filtered-query,`filtered` query>>, which allows you to combine a query (like `match_all`, `match`, `bool`, etc.) together with a filter. As an example, let's introduce the <<query-dsl-range-filter,`range` filter>>, which allows us to filter documents by a range of values. This is generally used for numeric or date filtering.
To understand filters, let's first introduce the <<query-dsl-filtered-query,`filtered` query>>, which allows you to combine a query (like `match_all`, `match`, `bool`, etc.) together with another query which is only used for filtering. As an example, let's introduce the <<query-dsl-range-query,`range` query>>, which allows us to filter documents by a range of values. This is generally used for numeric or date filtering.
This example uses a filtered query to return all accounts with balances between 20000 and 30000, inclusive. In other words, we want to find accounts with a balance that is greater than or equal to 20000 and less than or equal to 30000.
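The request described above might be sketched as follows (the exact body is elided from this diff; the index name and the `balance` field are taken from the surrounding text):

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "range": {
          "balance": {
            "gte": 20000,
            "lte": 30000
          }
        }
      }
    }
  }
}'
--------------------------------------------------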
@ -894,11 +891,9 @@ curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
}'
--------------------------------------------------
Dissecting the above, the filtered query contains a `match_all` query (the query part) and a `range` filter (the filter part). We can substitute any other query into the query part as well as any other filter into the filter part. In the above case, the range filter makes perfect sense since documents falling into the range all match "equally", i.e., no document is more relevant than another.
Dissecting the above, the filtered query contains a `match_all` query (the query part) and a `range` query (the filter part). We can substitute any other queries into the query and the filter parts. In the above case, the range query makes perfect sense since documents falling into the range all match "equally", i.e., no document is more relevant than another.
In general, the easiest way to decide whether you want a filter or a query is to ask yourself if you care about the relevance score or not. If relevance is not important, use filters, otherwise, use queries. If you come from a SQL background, queries and filters are similar in concept to the `SELECT WHERE` clause, although more so for filters than queries.
In addition to the `match_all`, `match`, `bool`, `filtered`, and `range` queries, there are a lot of other query/filter types that are available and we won't go into them here. Since we already have a basic understanding of how they work, it shouldn't be too difficult to apply this knowledge in learning and experimenting with the other query/filter types.
In addition to the `match_all`, `match`, `bool`, `filtered`, and `range` queries, there are a lot of other query types that are available and we won't go into them here. Since we already have a basic understanding of how they work, it shouldn't be too difficult to apply this knowledge in learning and experimenting with the other query types.
=== Executing Aggregations

View File

@ -52,7 +52,7 @@ length (eg `12`, the default) or to a distance (eg `1km`).
More usefully, set the `geohash_prefix` option to `true` to not only index
the geohash value, but all the enclosing cells as well. For instance, a
geohash of `u30` will be indexed as `[u,u3,u30]`. This option can be used
by the <<query-dsl-geohash-cell-filter>> to find geopoints within a
by the <<query-dsl-geohash-cell-query>> to find geopoints within a
particular cell very efficiently.
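A mapping sketch (type and field names assumed) that enables this option could look like:

[source,js]
--------------------------------------------------
{
    "mappings" : {
        "my_type" : {
            "properties" : {
                "pin" : {
                    "type" : "geo_point",
                    "geohash_prefix" : true,
                    "geohash_precision" : 6
                }
            }
        }
    }
}
--------------------------------------------------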
[float]

View File

@ -7,9 +7,7 @@ used when either the data being indexed or the queries being executed
contain shapes other than just points.
You can query documents using this type using
<<query-dsl-geo-shape-filter,geo_shape Filter>>
or <<query-dsl-geo-shape-query,geo_shape
Query>>.
<<query-dsl-geo-shape-query,geo_shape Query>>.
[[geo-shape-mapping-options]]
[float]

View File

@ -64,8 +64,7 @@ By keeping each nested object separate, the association between the
smith` would *not* match this document.
Searching on nested docs can be done using either the
<<query-dsl-nested-query,nested query>> or
<<query-dsl-nested-filter,nested filter>>.
<<query-dsl-nested-query,nested query>>.
==== Mapping

View File

@ -20,7 +20,7 @@ two ways to make sure that a field mapping exist:
[float]
=== Aliases
<<indices-aliases,Aliases>> can include <<query-dsl-filters,filters>> which
<<indices-aliases,Aliases>> can include <<query-dsl,filters>> which
are automatically applied to any search performed via the alias.
<<filtered,Filtered aliases>> created with version `1.4.0` or later can only
refer to field names which exist in the mappings of the index (or indices)

View File

@ -108,7 +108,11 @@ in addition to the actual HTTP status code. We removed `status` field in json re
=== Java API
Some query builders have been removed or renamed:
`org.elasticsearch.index.queries.FilterBuilders` has been removed as part of the merge of
queries and filters. These filters are now available in `QueryBuilders` with the same name.
All methods that used to accept a `FilterBuilder` now accept a `QueryBuilder` instead.
In addition some query builders have been removed or renamed:
* `commonTerms(...)` renamed to `commonTermsQuery(...)`
* `queryString(...)` renamed to `queryStringQuery(...)`
@ -436,6 +440,14 @@ ignored. Instead filters are always used as their own cache key and elasticsearc
makes decisions by itself about whether it should cache filters based on how
often they are used.
==== Query/filter merge
Elasticsearch no longer distinguishes between queries and filters in the
DSL; it detects when scores are not needed and automatically optimizes the
query to not compute scores and optionally caches the result.
As a consequence the `query` filter serves no purpose anymore and is deprecated.
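For instance (a sketch, field values assumed), where a `query` filter used to be needed to embed a query in a filter position, the query can now be used there directly:

[source,js]
--------------------------------------------------
{
    "constant_score" : {
        "filter" : {
            "match" : { "message" : "elasticsearch" }
        }
    }
}
--------------------------------------------------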
=== Snapshot and Restore
The obsolete parameters `expand_wildcards_open` and `expand_wildcards_close` are no longer

View File

@ -8,35 +8,27 @@ queries. In general, there are basic queries such as
<<query-dsl-term-query,term>> or
<<query-dsl-prefix-query,prefix>>. There are
also compound queries like the
<<query-dsl-bool-query,bool>> query. Queries can
also have filters associated with them such as the
<<query-dsl-bool-query,bool>> query.
While queries have scoring capabilities, in some contexts they will
only be used to filter the result set, such as in the
<<query-dsl-filtered-query,filtered>> or
<<query-dsl-constant-score-query,constant_score>>
queries, with specific filter queries.
queries.
Think of the Query DSL as an AST of queries. Certain queries can contain
other queries (like the
<<query-dsl-bool-query,bool>> query), others can
contain filters (like the
<<query-dsl-constant-score-query,constant_score>>),
and some can contain both a query and a filter (like the
<<query-dsl-filtered-query,filtered>>). Each of
those can contain *any* query of the list of queries or *any* filter
from the list of filters, resulting in the ability to build quite
Think of the Query DSL as an AST of queries.
Some queries can be used by themselves like the
<<query-dsl-term-query,term>> query but other queries can contain
queries (like the <<query-dsl-bool-query,bool>> query), and each
of these composite queries can contain *any* query of the list of
queries, resulting in the ability to build quite
complex (and interesting) queries.
Both queries and filters can be used in different APIs. For example,
Queries can be used in different APIs. For example,
within a <<search-request-query,search query>>, or
as an <<search-aggregations-bucket-filter-aggregation,aggregation filter>>.
This section explains the components (queries and filters) that can form the
AST one can use.
Filters are very handy since they perform an order of magnitude better
than plain queries: no scoring is performed and they are
automatically cached.
This section explains the queries that can form the AST one can use.
--
include::query-dsl/queries.asciidoc[]
include::query-dsl/filters.asciidoc[]
include::query-dsl/index.asciidoc[]

View File

@ -1,10 +1,10 @@
[[query-dsl-and-filter]]
=== And Filter
[[query-dsl-and-query]]
== And Query
deprecated[2.0.0, Use the `bool` filter instead]
deprecated[2.0.0, Use the `bool` query instead]
A filter that matches documents using the `AND` boolean operator on other
filters. Can be placed within queries that accept a filter.
A query that matches documents using the `AND` boolean operator on other
queries.
[source,js]
--------------------------------------------------

View File

@ -1,5 +1,5 @@
[[query-dsl-bool-query]]
=== Bool Query
== Bool Query
A query that matches documents matching boolean combinations of other
queries. The bool query maps to Lucene `BooleanQuery`. It is built using
@ -22,6 +22,9 @@ parameter.
documents.
|=======================================================================
IMPORTANT: If this query is used in a filter context and it has `should`
clauses then at least one `should` clause is required to match.
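A sketch of the situation the note describes (field names assumed): when the `bool` query below is used in a filter context, at least one of the `should` clauses must match for a document to be returned.

[source,js]
--------------------------------------------------
{
    "constant_score" : {
        "filter" : {
            "bool" : {
                "should" : [
                    { "term" : { "tag" : "wow" } },
                    { "term" : { "tag" : "elasticsearch" } }
                ]
            }
        }
    }
}
--------------------------------------------------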
The bool query also supports `disable_coord` parameter (defaults to
`false`). Basically the coord similarity computes a score factor based
on the fraction of all query terms that a document contains. See Lucene

View File

@ -1,5 +1,5 @@
[[query-dsl-boosting-query]]
=== Boosting Query
== Boosting Query
The `boosting` query can be used to effectively demote results that
match a given query. Unlike the "NOT" clause in bool query, this still

View File

@ -1,12 +1,12 @@
[[query-dsl-common-terms-query]]
=== Common Terms Query
== Common Terms Query
The `common` terms query is a modern alternative to stopwords which
improves the precision and recall of search results (by taking stopwords
into account), without sacrificing performance.
[float]
==== The problem
=== The problem
Every term in a query has a cost. A search for `"The brown fox"`
requires three term queries, one for each of `"the"`, `"brown"` and
@ -25,7 +25,7 @@ and `"not happy"`) and we lose recall (eg text like `"The The"` or
`"To be or not to be"` would simply not exist in the index).
[float]
==== The solution
=== The solution
The `common` terms query divides the query terms into two groups: more
important (ie _low frequency_ terms) and less important (ie _high
@ -63,7 +63,7 @@ site, common terms like `"clip"` or `"video"` will automatically behave
as stopwords without the need to maintain a manual list.
[float]
==== Examples
=== Examples
In this example, words that have a document frequency greater than 0.1%
(eg `"this"` and `"is"`) will be treated as _common terms_.

View File

@ -0,0 +1,18 @@
[[query-dsl-constant-score-query]]
== Constant Score Query
A query that wraps another query and simply returns a
constant score equal to the query boost for every document in the
filter. Maps to Lucene `ConstantScoreQuery`.
[source,js]
--------------------------------------------------
{
"constant_score" : {
"filter" : {
"term" : { "user" : "kimchy"}
},
"boost" : 1.2
}
}
--------------------------------------------------

View File

@ -1,5 +1,5 @@
[[query-dsl-dis-max-query]]
=== Dis Max Query
== Dis Max Query
A query that generates the union of documents produced by its
subqueries, and that scores each document with the maximum score for

View File

@ -1,5 +1,5 @@
[[query-dsl-exists-filter]]
=== Exists Filter
[[query-dsl-exists-query]]
== Exists Query
Returns documents that have at least one non-`null` value in the original field:
@ -14,7 +14,7 @@ Returns documents that have at least one non-`null` value in the original field:
}
--------------------------------------------------
For instance, these documents would all match the above filter:
For instance, these documents would all match the above query:
[source,js]
--------------------------------------------------
@ -28,7 +28,7 @@ For instance, these documents would all match the above filter:
<2> Even though the `standard` analyzer would emit zero tokens, the original field is non-`null`.
<3> At least one non-`null` value is required.
These documents would *not* match the above filter:
These documents would *not* match the above query:
[source,js]
--------------------------------------------------

View File

@ -1,14 +1,9 @@
[[query-dsl-filtered-query]]
=== Filtered Query
== Filtered Query
The `filtered` query is used to combine another query with any
<<query-dsl-filters,filter>>. Filters are usually faster than queries because:
* they don't have to calculate the relevance `_score` for each document --
the answer is just a boolean ``Yes, the document matches the filter'' or
``No, the document does not match the filter''.
* the results from most filters can be cached in memory, making subsequent
executions faster.
The `filtered` query is used to combine a query which will be used for
scoring with another query which will only be used for filtering the result
set.
TIP: Exclude as many documents as you can with a filter, then query just the
documents that remain.
@ -50,7 +45,7 @@ curl -XGET localhost:9200/_search -d '
<1> The `filtered` query is passed as the value of the `query`
parameter in the search request.
==== Filtering without a query
=== Filtering without a query
If a `query` is not specified, it defaults to the
<<query-dsl-match-all-query,`match_all` query>>. This means that the
@ -77,7 +72,7 @@ curl -XGET localhost:9200/_search -d '
==== Multiple filters
Multiple filters can be applied by wrapping them in a
<<query-dsl-bool-filter,`bool` filter>>, for example:
<<query-dsl-bool-query,`bool` query>>, for example:
[source,js]
--------------------------------------------------
@ -98,9 +93,6 @@ Multiple filters can be applied by wrapping them in a
}
--------------------------------------------------
Similarly, multiple queries can be combined with a
<<query-dsl-bool-query,`bool` query>>.
==== Filter strategy
You can control how the filter and query are executed with the `strategy`

View File

@ -1,77 +0,0 @@
[[query-dsl-filters]]
== Filters
As a general rule, filters should be used instead of queries:
* for binary yes/no searches
* for queries on exact values
[float]
[[caching]]
=== Filters and Caching
Filters can be a great candidate for caching. Caching the document set that
a filter matches does not require much memory and can help improve
execution speed of queries.
Elasticsearch decides to cache filters based on how often they are used. For
this reason you might occasionally see better performance by splitting
complex filters into a static part that Elasticsearch will cache and a dynamic
part which is less costly than the original filter.
include::filters/and-filter.asciidoc[]
include::filters/bool-filter.asciidoc[]
include::filters/exists-filter.asciidoc[]
include::filters/geo-bounding-box-filter.asciidoc[]
include::filters/geo-distance-filter.asciidoc[]
include::filters/geo-distance-range-filter.asciidoc[]
include::filters/geo-polygon-filter.asciidoc[]
include::filters/geo-shape-filter.asciidoc[]
include::filters/geohash-cell-filter.asciidoc[]
include::filters/has-child-filter.asciidoc[]
include::filters/has-parent-filter.asciidoc[]
include::filters/ids-filter.asciidoc[]
include::filters/indices-filter.asciidoc[]
include::filters/limit-filter.asciidoc[]
include::filters/match-all-filter.asciidoc[]
include::filters/missing-filter.asciidoc[]
include::filters/nested-filter.asciidoc[]
include::filters/not-filter.asciidoc[]
include::filters/or-filter.asciidoc[]
include::filters/prefix-filter.asciidoc[]
include::filters/query-filter.asciidoc[]
include::filters/range-filter.asciidoc[]
include::filters/regexp-filter.asciidoc[]
include::filters/script-filter.asciidoc[]
include::filters/term-filter.asciidoc[]
include::filters/terms-filter.asciidoc[]
include::filters/type-filter.asciidoc[]

View File

@ -1,43 +0,0 @@
[[query-dsl-bool-filter]]
=== Bool Filter
A filter that matches documents matching boolean combinations of other
queries. Similar in concept to
<<query-dsl-bool-query,Boolean query>>, except
that the clauses are other filters. Can be placed within queries that
accept a filter.
[source,js]
--------------------------------------------------
{
"filtered" : {
"query" : {
"queryString" : {
"default_field" : "message",
"query" : "elasticsearch"
}
},
"filter" : {
"bool" : {
"must" : {
"term" : { "tag" : "wow" }
},
"must_not" : {
"range" : {
"age" : { "gte" : 10, "lt" : 20 }
}
},
"should" : [
{
"term" : { "tag" : "sometag" }
},
{
"term" : { "tag" : "sometagtag" }
}
]
}
}
}
}
--------------------------------------------------

View File

@ -1,90 +0,0 @@
[[query-dsl-has-child-filter]]
=== Has Child Filter
The `has_child` filter accepts a query and the child type to run
against, and results in parent documents that have child docs matching
the query. Here is an example:
[source,js]
--------------------------------------------------
{
"has_child" : {
"type" : "blog_tag",
"query" : {
"term" : {
"tag" : "something"
}
}
}
}
--------------------------------------------------
The `type` is the child type to query against. The parent type to return
is automatically detected based on the mappings.
The filter is implemented by first running the child query, then
matching each resulting child document up to its parent document.
The `has_child` filter also accepts a filter instead of a query:
[source,js]
--------------------------------------------------
{
"has_child" : {
"type" : "comment",
"filter" : {
"term" : {
"user" : "john"
}
}
}
}
--------------------------------------------------
[float]
==== Min/Max Children
The `has_child` filter allows you to specify that a minimum and/or maximum
number of children are required to match for the parent doc to be considered
a match:
[source,js]
--------------------------------------------------
{
"has_child" : {
"type" : "comment",
"min_children": 2, <1>
"max_children": 10, <1>
"filter" : {
"term" : {
"user" : "john"
}
}
}
}
--------------------------------------------------
<1> Both `min_children` and `max_children` are optional.
The execution speed of the `has_child` filter is equivalent
to that of the `has_child` query when `min_children` or `max_children`
is specified.
[float]
==== Memory Considerations
In order to support parent-child joins, all of the (string) parent IDs
must be resident in memory (in the <<index-modules-fielddata,field data cache>>).
Additionally, every child document is mapped to its parent using a long
value (approximately). It is advisable to keep the string parent ID short
in order to reduce memory usage.
You can check how much memory is being used by the ID cache using the
<<indices-stats,indices stats>> or <<cluster-nodes-stats,nodes stats>>
APIs, eg:
[source,js]
--------------------------------------------------
curl -XGET "http://localhost:9200/_stats/id_cache?pretty&human"
--------------------------------------------------

View File

@ -1,65 +0,0 @@
[[query-dsl-has-parent-filter]]
=== Has Parent Filter
The `has_parent` filter accepts a query and a parent type. The query is
executed in the parent document space, which is specified by the parent
type. This filter returns child documents whose associated parents have
matched. Otherwise, the `has_parent` filter has the same options and works
in the same manner as the `has_child` filter.
[float]
==== Filter example
[source,js]
--------------------------------------------------
{
"has_parent" : {
"parent_type" : "blog",
"query" : {
"term" : {
"tag" : "something"
}
}
}
}
--------------------------------------------------
The `parent_type` field name can also be abbreviated to `type`.
The filter is implemented by first running the parent query, then
matching each child document up to the parent documents that matched.
The `has_parent` filter also accepts a filter instead of a query:
[source,js]
--------------------------------------------------
{
"has_parent" : {
"type" : "blog",
"filter" : {
"term" : {
"text" : "bonsai three"
}
}
}
}
--------------------------------------------------
[float]
==== Memory Considerations
In order to support parent-child joins, all of the (string) parent IDs
must be resident in memory (in the <<index-modules-fielddata,field data cache>>).
Additionally, every child document is mapped to its parent using a long
value (approximately). It is advisable to keep the string parent ID short
in order to reduce memory usage.
You can check how much memory is being used by the ID cache using the
<<indices-stats,indices stats>> or <<cluster-nodes-stats,nodes stats>>
APIs, eg:
[source,js]
--------------------------------------------------
curl -XGET "http://localhost:9200/_stats/id_cache?pretty&human"
--------------------------------------------------

View File

@ -1,20 +0,0 @@
[[query-dsl-ids-filter]]
=== Ids Filter
Filters documents to only those that have the provided ids. Note, this filter
does not require the <<mapping-id-field,_id>>
field to be indexed since it works using the
<<mapping-uid-field,_uid>> field.
[source,js]
--------------------------------------------------
{
"ids" : {
"type" : "my_type",
"values" : ["1", "4", "100"]
}
}
--------------------------------------------------
The `type` is optional and can be omitted; it can also accept an array
of values.

View File

@ -1,37 +0,0 @@
[[query-dsl-indices-filter]]
=== Indices Filter
The `indices` filter can be used when a search is executed across multiple
indices. It allows one filter to be executed only on indices that match a
specific list, and another filter to be executed on indices that do not
match the listed indices.
[source,js]
--------------------------------------------------
{
"indices" : {
"indices" : ["index1", "index2"],
"filter" : {
"term" : { "tag" : "wow" }
},
"no_match_filter" : {
"term" : { "tag" : "kow" }
}
}
}
--------------------------------------------------
You can use the `index` field to provide a single index.
`no_match_filter` can also take the string value `none` (to match no
documents) or `all` (to match all documents). Defaults to `all`.
`filter` is mandatory, as well as `indices` (or `index`).
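For instance (a sketch based on the options above), using the single-index `index` field and matching no documents on non-listed indices:

[source,js]
--------------------------------------------------
{
    "indices" : {
        "index" : "index1",
        "filter" : {
            "term" : { "tag" : "wow" }
        },
        "no_match_filter" : "none"
    }
}
--------------------------------------------------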
[TIP]
===================================================================
The order of the fields is important: if `indices` is provided before `filter`
or `no_match_filter`, the related filters get parsed only against the indices
on which they are going to be executed. This is useful to avoid parsing filters
when it is not necessary and to prevent potential mapping errors.
===================================================================

View File

@ -1,15 +0,0 @@
[[query-dsl-match-all-filter]]
=== Match All Filter
A filter that matches on all documents:
[source,js]
--------------------------------------------------
{
"constant_score" : {
"filter" : {
"match_all" : { }
}
}
}
--------------------------------------------------

View File

@ -1,74 +0,0 @@
[[query-dsl-nested-filter]]
=== Nested Filter
A `nested` filter works in a similar fashion to the
<<query-dsl-nested-query,nested>> query. For example:
[source,js]
--------------------------------------------------
{
"filtered" : {
"query" : { "match_all" : {} },
"filter" : {
"nested" : {
"path" : "obj1",
"filter" : {
"bool" : {
"must" : [
{
"term" : {"obj1.name" : "blue"}
},
{
"range" : {"obj1.count" : {"gt" : 5}}
}
]
}
}
}
}
}
}
--------------------------------------------------
[float]
==== Join option
The nested filter also supports a `join` option which controls whether to perform the block join or not.
It is enabled by default; when disabled, the hidden nested documents are emitted as hits instead of the joined root document.
This is useful when a `nested` filter is used in a facet where nested is enabled, as in the example below:
[source,js]
--------------------------------------------------
{
"query" : {
"nested" : {
"path" : "offers",
"query" : {
"match" : {
"offers.color" : "blue"
}
}
}
},
"facets" : {
"size" : {
"terms" : {
"field" : "offers.size"
},
"facet_filter" : {
"nested" : {
"path" : "offers",
"query" : {
"match" : {
"offers.color" : "blue"
}
},
"join" : false
}
},
"nested" : "offers"
}
}
}
--------------------------------------------------

View File

@ -1,18 +0,0 @@
[[query-dsl-prefix-filter]]
=== Prefix Filter
Filters documents that have fields containing terms with a specified
prefix (*not analyzed*). Similar to prefix query, except that it acts as
a filter. Can be placed within queries that accept a filter.
[source,js]
--------------------------------------------------
{
"constant_score" : {
"filter" : {
"prefix" : { "user" : "ki" }
}
}
}
--------------------------------------------------

View File

@ -1,21 +0,0 @@
[[query-dsl-query-filter]]
=== Query Filter
Wraps any query to be used as a filter. Can be placed within queries
that accept a filter.
[source,js]
--------------------------------------------------
{
"constantScore" : {
"filter" : {
"query" : {
"query_string" : {
"query" : "this AND that OR thus"
}
}
}
}
}
--------------------------------------------------

View File

@ -1,97 +0,0 @@
[[query-dsl-range-filter]]
=== Range Filter
Filters documents with fields that have terms within a certain range.
Similar to <<query-dsl-range-query,range
query>>, except that it acts as a filter. Can be placed within queries
that accept a filter.
[source,js]
--------------------------------------------------
{
"constant_score" : {
"filter" : {
"range" : {
"age" : {
"gte": 10,
"lte": 20
}
}
}
}
}
--------------------------------------------------
The `range` filter accepts the following parameters:
[horizontal]
`gte`:: Greater-than or equal to
`gt`:: Greater-than
`lte`:: Less-than or equal to
`lt`:: Less-than
[float]
==== Date options
When applied on `date` fields, the `range` filter also accepts a `time_zone` parameter.
The `time_zone` parameter will be applied to your input lower and upper bounds and will
convert them to UTC-based dates:
[source,js]
--------------------------------------------------
{
"constant_score": {
"filter": {
"range" : {
"born" : {
"gte": "2012-01-01",
"lte": "now",
"time_zone": "+1:00"
}
}
}
}
}
--------------------------------------------------
In the above example, `gte` will actually be moved to the UTC date `2011-12-31T23:00:00`.
NOTE: If you give a date with a timezone explicitly defined and use the `time_zone` parameter, `time_zone` will be
ignored. For example, setting `gte` to `2012-01-01T00:00:00+01:00` with `"time_zone":"+10:00"` will still use the `+01:00` time zone.
When applied on `date` fields, the `range` filter also accepts a `format` parameter.
The `format` parameter helps support a date format other than the one defined in the mapping:
[source,js]
--------------------------------------------------
{
"constant_score": {
"filter": {
"range" : {
"born" : {
"gte": "01/01/2012",
"lte": "2013",
"format": "dd/MM/yyyy||yyyy"
}
}
}
}
}
--------------------------------------------------
[float]
==== Execution
The `execution` option controls how the range filter internally executes. The `execution` option accepts the following values:
[horizontal]
`index`:: Uses the field's inverted index in order to determine whether documents fall within the specified range.
`fielddata`:: Uses fielddata in order to determine whether documents fall within the specified range.
In general for small ranges the `index` execution is faster and for longer ranges the `fielddata` execution is faster.
The `fielddata` execution, as the name suggests, uses field data and therefore
requires more memory, so make sure you have sufficient memory on your nodes in
order to use this execution mode. It usually makes sense to use it on fields
you're already aggregating or sorting by.
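A sketch of the `fielddata` execution mode applied to the earlier example (field name reused from above):

[source,js]
--------------------------------------------------
{
    "constant_score" : {
        "filter" : {
            "range" : {
                "age" : {
                    "gte": 10,
                    "lte": 20
                },
                "execution" : "fielddata"
            }
        }
    }
}
--------------------------------------------------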

View File

@ -1,59 +0,0 @@
[[query-dsl-regexp-filter]]
=== Regexp Filter
The `regexp` filter is similar to the
<<query-dsl-regexp-query,regexp>> query, except
that it is cacheable and can speed up performance if you are reusing
this filter in your queries.
See <<regexp-syntax>> for details of the supported regular expression language.
[source,js]
--------------------------------------------------
{
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"regexp":{
"name.first" : "s.*y"
}
}
}
}
--------------------------------------------------
You can also select the cache name and use the same regexp flags in the
filter as in the query.
Regular expressions are dangerous because it's easy to accidentally
create an innocuous looking one that requires an exponential number of
internal determinized automaton states (and corresponding RAM and CPU)
for Lucene to execute. Lucene prevents these using the
`max_determinized_states` setting (defaults to 10000). You can raise
this limit to allow more complex regular expressions to execute.
You have to enable caching explicitly in order to have the
`regexp` filter cached.
[source,js]
--------------------------------------------------
{
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"regexp":{
"name.first" : {
"value" : "s.*y",
"flags" : "INTERSECTION|COMPLEMENT|EMPTY",
"max_determinized_states": 20000
},
"_name":"test"
}
}
}
}
--------------------------------------------------


@ -1,19 +0,0 @@
[[query-dsl-term-filter]]
=== Term Filter
Filters documents that have fields that contain a term (*not analyzed*).
Similar to <<query-dsl-term-query,term query>>,
except that it acts as a filter. Can be placed within queries that
accept a filter, for example:
[source,js]
--------------------------------------------------
{
"constant_score" : {
"filter" : {
"term" : { "user" : "kimchy"}
}
}
}
--------------------------------------------------


@ -1,12 +1,12 @@
[[query-dsl-function-score-query]]
=== Function Score Query
== Function Score Query
The `function_score` query allows you to modify the score of documents that are
retrieved by a query. This can be useful if, for example, a score
function is computationally expensive and it is sufficient to compute
the score on a filtered set of documents.
==== Using function score
=== Using function score
To use `function_score`, the user has to define a query and one or
several functions, that compute a new score for each document returned
@ -17,7 +17,7 @@ by the query.
[source,js]
--------------------------------------------------
"function_score": {
"(query|filter)": {},
"query": {},
"boost": "boost for the whole query",
"FUNCTION": {},
"boost_mode":"(multiply|replace|...)"
@ -26,12 +26,12 @@ by the query.
Furthermore, several functions can be combined. In this case one can
optionally choose to apply the function only if a document matches a
given filter:
given filtering query
[source,js]
--------------------------------------------------
"function_score": {
"(query|filter)": {},
"query": {},
"boost": "boost for the whole query",
"functions": [
{
@ -54,7 +54,9 @@ given filter:
}
--------------------------------------------------
If no filter is given with a function this is equivalent to specifying
NOTE: The scores produced by the filtering query of each function do not matter.
If no query is given with a function this is equivalent to specifying
`"match_all": {}`
First, each document is scored by the defined functions. The parameter


@ -1,10 +1,10 @@
[[query-dsl-fuzzy-query]]
=== Fuzzy Query
== Fuzzy Query
The fuzzy query uses similarity based on Levenshtein edit distance for
`string` fields, and a `+/-` margin on numeric and date fields.
==== String fields
=== String fields
The `fuzzy` query generates all possible matching terms that are within the
maximum edit distance specified in `fuzziness` and then checks the term
@ -38,7 +38,7 @@ Or with more advanced settings:
--------------------------------------------------
[float]
===== Parameters
==== Parameters
[horizontal]
`fuzziness`::
@ -62,7 +62,7 @@ are both set to `0`. This could cause every term in the index to be examined!
[float]
==== Numeric and date fields
=== Numeric and date fields
Performs a <<query-dsl-range-query>> ``around'' the value using the
`fuzziness` value as a `+/-` range, where:


@ -1,7 +1,7 @@
[[query-dsl-geo-bounding-box-filter]]
=== Geo Bounding Box Filter
[[query-dsl-geo-bounding-box-query]]
== Geo Bounding Box Query
A filter allowing to filter hits based on a point location using a
A query that filters hits based on a point location using a
bounding box. Assuming the following indexed document:
[source,js]
@ -45,13 +45,13 @@ Then the following simple query can be executed with a
--------------------------------------------------
[float]
==== Accepted Formats
=== Accepted Formats
In much the same way the geo_point type can accept different
representations of the geo point, the filter can accept them as well:
[float]
===== Lat Lon As Properties
==== Lat Lon As Properties
[source,js]
--------------------------------------------------
@ -79,7 +79,7 @@ representation of the geo point, the filter can accept it as well:
--------------------------------------------------
[float]
===== Lat Lon As Array
==== Lat Lon As Array
Format in `[lon, lat]`. Note the order of lon/lat here, which conforms
with http://geojson.org/[GeoJSON].
@ -104,7 +104,7 @@ conform with http://geojson.org/[GeoJSON].
--------------------------------------------------
[float]
===== Lat Lon As String
==== Lat Lon As String
Format in `lat,lon`.
@ -128,7 +128,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
===== Geohash
==== Geohash
[source,js]
--------------------------------------------------
@ -150,7 +150,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
==== Vertices
=== Vertices
The vertices of the bounding box can either be set by `top_left` and
`bottom_right` or by `top_right` and `bottom_left` parameters. More
@ -182,20 +182,20 @@ values separately.
[float]
==== geo_point Type
=== geo_point Type
The filter *requires* the `geo_point` type to be set on the relevant
field.
[float]
==== Multi Location Per Document
=== Multi Location Per Document
The filter can work with multiple locations / points per document. Once
a single location / point matches the filter, the document will be
included in the results
[float]
==== Type
=== Type
The type of the bounding box execution by default is set to `memory`,
which means in memory checks if the doc falls within the bounding box


@ -1,5 +1,5 @@
[[query-dsl-geo-distance-filter]]
=== Geo Distance Filter
[[query-dsl-geo-distance-query]]
== Geo Distance Query
Filters documents to include only hits that exist within a specific
distance from a geo point. Assuming the following indexed json:
@ -40,13 +40,13 @@ filter:
--------------------------------------------------
[float]
==== Accepted Formats
=== Accepted Formats
In much the same way the `geo_point` type can accept different
representations of the geo point, the filter can accept them as well:
[float]
===== Lat Lon As Properties
==== Lat Lon As Properties
[source,js]
--------------------------------------------------
@ -69,7 +69,7 @@ representation of the geo point, the filter can accept it as well:
--------------------------------------------------
[float]
===== Lat Lon As Array
==== Lat Lon As Array
Format in `[lon, lat]`. Note the order of lon/lat here, which conforms
with http://geojson.org/[GeoJSON].
@ -92,7 +92,7 @@ conform with http://geojson.org/[GeoJSON].
--------------------------------------------------
[float]
===== Lat Lon As String
==== Lat Lon As String
Format in `lat,lon`.
@ -114,7 +114,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
===== Geohash
==== Geohash
[source,js]
--------------------------------------------------
@ -134,7 +134,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
==== Options
=== Options
The following are options allowed on the filter:
@ -160,13 +160,13 @@ The following are options allowed on the filter:
[float]
==== geo_point Type
=== geo_point Type
The filter *requires* the `geo_point` type to be set on the relevant
field.
[float]
==== Multi Location Per Document
=== Multi Location Per Document
The `geo_distance` filter can work with multiple locations / points per
document. Once a single location / point matches the filter, the


@ -1,5 +1,5 @@
[[query-dsl-geo-distance-range-filter]]
=== Geo Distance Range Filter
[[query-dsl-geo-distance-range-query]]
== Geo Distance Range Query
Filters documents that exist within a range of distances from a specific point:
@ -25,6 +25,6 @@ Filters documents that exists within a range from a specific point:
--------------------------------------------------
Supports the same point location parameter as the
<<query-dsl-geo-distance-filter,geo_distance>>
<<query-dsl-geo-distance-query,geo_distance>>
filter. It also supports the common range parameters (lt, lte, gt,
gte, from, to, include_upper and include_lower).


@ -1,7 +1,7 @@
[[query-dsl-geo-polygon-filter]]
=== Geo Polygon Filter
[[query-dsl-geo-polygon-query]]
== Geo Polygon Query
A filter allowing to include hits that only fall within a polygon of
A query that includes only hits that fall within a polygon of
points. Here is an example:
[source,js]
@ -27,10 +27,10 @@ points. Here is an example:
--------------------------------------------------
[float]
==== Allowed Formats
=== Allowed Formats
[float]
===== Lat Long as Array
==== Lat Long as Array
Format in `[lon, lat]`. Note the order of lon/lat here, which conforms
with http://geojson.org/[GeoJSON].
@ -58,7 +58,7 @@ conform with http://geojson.org/[GeoJSON].
--------------------------------------------------
[float]
===== Lat Lon as String
==== Lat Lon as String
Format in `lat,lon`.
@ -85,7 +85,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
===== Geohash
==== Geohash
[source,js]
--------------------------------------------------
@ -110,7 +110,7 @@ Format in `lat,lon`.
--------------------------------------------------
[float]
==== geo_point Type
=== geo_point Type
The filter *requires* the
<<mapping-geo-point-type,geo_point>> type to be


@ -1,15 +1,12 @@
[[query-dsl-geo-shape-filter]]
=== GeoShape Filter
[[query-dsl-geo-shape-query]]
== GeoShape Query
Filters documents indexed using the `geo_shape` type.
Requires the <<mapping-geo-shape-type,geo_shape
Mapping>>.
You may also use the
<<query-dsl-geo-shape-query,geo_shape Query>>.
The `geo_shape` Filter uses the same grid square representation as the
The `geo_shape` query uses the same grid square representation as the
geo_shape mapping to find documents that have a shape that intersects
with the query shape. It will also use the same PrefixTree configuration
as defined for the field mapping.


@ -1,7 +1,7 @@
[[query-dsl-geohash-cell-filter]]
=== Geohash Cell Filter
[[query-dsl-geohash-cell-query]]
== Geohash Cell Query
The `geohash_cell` filter provides access to a hierarchy of geohashes.
The `geohash_cell` query provides access to a hierarchy of geohashes.
By defining a geohash cell, only <<mapping-geo-point-type,geopoints>>
within this cell will match this filter.
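
A minimal sketch of such a query, assuming a `geo_point` field named `pin`:

[source,js]
--------------------------------------------------
{
    "geohash_cell": {
        "pin": {
            "lat": 13.4080,
            "lon": 52.5186
        },
        "precision": 3,
        "neighbors": true
    }
}
--------------------------------------------------

With `neighbors` set to `true`, the cells adjacent to the addressed cell are matched as well.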


@ -1,12 +1,9 @@
[[query-dsl-has-child-query]]
=== Has Child Query
== Has Child Query
The `has_child` query works the same as the
<<query-dsl-has-child-filter,has_child>> filter,
by automatically wrapping the filter with a
<<query-dsl-constant-score-query,constant_score>>
(when using the default score type). It has the same syntax as the
<<query-dsl-has-child-filter,has_child>> filter:
The `has_child` filter accepts a query and the child type to run against, and
results in parent documents that have child docs matching the query. Here is
an example:
[source,js]
--------------------------------------------------
@ -28,7 +25,7 @@ can be executed in one or more iteration. When using the `has_child`
query the `total_hits` is always correct.
[float]
==== Scoring capabilities
=== Scoring capabilities
The `has_child` also has scoring support. The
supported score types are `min`, `max`, `sum`, `avg` or `none`. The default is
@ -54,7 +51,7 @@ inside the `has_child` query:
--------------------------------------------------
[float]
==== Min/Max Children
=== Min/Max Children
The `has_child` query allows you to specify that a minimum and/or maximum
number of children are required to match for the parent doc to be considered
@ -82,7 +79,7 @@ The `min_children` and `max_children` parameters can be combined with
the `score_mode` parameter.
[float]
==== Memory Considerations
=== Memory Considerations
In order to support parent-child joins, all of the (string) parent IDs
must be resident in memory (in the <<index-modules-fielddata,field data cache>>).


@ -1,12 +1,11 @@
[[query-dsl-has-parent-query]]
=== Has Parent Query
== Has Parent Query
The `has_parent` query works the same as the
<<query-dsl-has-parent-filter,has_parent>>
filter, by automatically wrapping the filter with a constant_score (when
using the default score type). It has the same syntax as the
<<query-dsl-has-parent-filter,has_parent>>
filter.
The `has_parent` query accepts a query and a parent type. The query is
executed in the parent document space, which is specified by the parent
type. This query returns child documents which associated parents have
matched. For the rest `has_parent` query has the same options and works
in the same manner as the `has_child` query.
[source,js]
--------------------------------------------------
@ -23,7 +22,7 @@ filter.
--------------------------------------------------
[float]
==== Scoring capabilities
=== Scoring capabilities
The `has_parent` also has scoring support. The
supported score types are `score` or `none`. The default is `none` and
@ -50,7 +49,7 @@ matching parent document. The score type can be specified with the
--------------------------------------------------
[float]
==== Memory Considerations
=== Memory Considerations
In order to support parent-child joins, all of the (string) parent IDs
must be resident in memory (in the <<index-modules-fielddata,field data cache>>).


@ -1,7 +1,7 @@
[[query-dsl-ids-query]]
=== Ids Query
== Ids Query
Filters documents that only have the provided ids. Note, this filter
Filters documents that only have the provided ids. Note, this query
does not require the <<mapping-id-field,_id>>
field to be indexed since it works using the
<<mapping-uid-field,_uid>> field.
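
For illustration, a query for three ids restricted to a type (the `my_type` name is assumed):

[source,js]
--------------------------------------------------
{
    "ids" : {
        "type" : "my_type",
        "values" : ["1", "4", "100"]
    }
}
--------------------------------------------------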


@ -0,0 +1,97 @@
include::match-query.asciidoc[]
include::multi-match-query.asciidoc[]
include::and-query.asciidoc[]
include::bool-query.asciidoc[]
include::boosting-query.asciidoc[]
include::common-terms-query.asciidoc[]
include::constant-score-query.asciidoc[]
include::dis-max-query.asciidoc[]
include::filtered-query.asciidoc[]
include::function-score-query.asciidoc[]
include::fuzzy-query.asciidoc[]
include::geo-shape-query.asciidoc[]
include::geo-bounding-box-query.asciidoc[]
include::geo-distance-query.asciidoc[]
include::geo-distance-range-query.asciidoc[]
include::geohash-cell-query.asciidoc[]
include::geo-polygon-query.asciidoc[]
include::has-child-query.asciidoc[]
include::has-parent-query.asciidoc[]
include::ids-query.asciidoc[]
include::indices-query.asciidoc[]
include::limit-query.asciidoc[]
include::match-all-query.asciidoc[]
include::mlt-query.asciidoc[]
include::nested-query.asciidoc[]
include::not-query.asciidoc[]
include::or-query.asciidoc[]
include::prefix-query.asciidoc[]
include::query-string-query.asciidoc[]
include::simple-query-string-query.asciidoc[]
include::range-query.asciidoc[]
include::regexp-query.asciidoc[]
include::span-containing-query.asciidoc[]
include::span-first-query.asciidoc[]
include::span-multi-term-query.asciidoc[]
include::span-near-query.asciidoc[]
include::span-not-query.asciidoc[]
include::span-or-query.asciidoc[]
include::span-term-query.asciidoc[]
include::span-within-query.asciidoc[]
include::term-query.asciidoc[]
include::terms-query.asciidoc[]
include::top-children-query.asciidoc[]
include::wildcard-query.asciidoc[]
include::minimum-should-match.asciidoc[]
include::multi-term-rewrite.asciidoc[]
include::script-query.asciidoc[]
include::template-query.asciidoc[]
include::type-query.asciidoc[]


@ -1,5 +1,5 @@
[[query-dsl-indices-query]]
=== Indices Query
== Indices Query
The `indices` query can be used when executed across multiple indices,
allowing to have a query that executes only when executed on an index


@ -1,9 +1,9 @@
[[query-dsl-limit-filter]]
=== Limit Filter
[[query-dsl-limit-query]]
== Limit Query
deprecated[1.6.0, Use <<search-request-body,terminate_after>> instead]
A limit filter limits the number of documents (per shard) to execute on.
A limit query limits the number of documents (per shard) to execute on.
For example:
[source,js]


@ -1,5 +1,5 @@
[[query-dsl-match-all-query]]
=== Match All Query
== Match All Query
A query that matches all documents. Maps to Lucene `MatchAllDocsQuery`.
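
In its simplest form:

[source,js]
--------------------------------------------------
{
    "match_all" : { }
}
--------------------------------------------------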


@ -1,5 +1,5 @@
[[query-dsl-match-query]]
=== Match Query
== Match Query
A family of `match` queries that accept text/numerics/dates, analyzes
it, and constructs a query out of it. For example:
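
A minimal illustration, assuming a `message` field:

[source,js]
--------------------------------------------------
{
    "match" : {
        "message" : "this is a test"
    }
}
--------------------------------------------------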


@ -1,5 +1,5 @@
[[query-dsl-minimum-should-match]]
=== Minimum Should Match
== Minimum Should Match
The `minimum_should_match` parameter possible values:


@ -1,5 +1,5 @@
[[query-dsl-missing-filter]]
=== Missing Filter
[[query-dsl-missing-query]]
== Missing Query
Returns documents that have only `null` values or no value in the original field:
@ -42,7 +42,7 @@ These documents would *not* match the above filter:
<3> This field has one non-`null` value.
[float]
==== `null_value` mapping
=== `null_value` mapping
If the field mapping includes a `null_value` (see <<mapping-core-types>>) then explicit `null` values
are replaced with the specified `null_value`. For instance, if the `user` field were mapped
@ -75,7 +75,7 @@ no values in the `user` field and thus would match the `missing` filter:
--------------------------------------------------
[float]
===== `existence` and `null_value` parameters
==== `existence` and `null_value` parameters
When the field being queried has a `null_value` mapping, then the behaviour of
the `missing` filter can be altered with the `existence` and `null_value`


@ -1,5 +1,5 @@
[[query-dsl-mlt-query]]
=== More Like This Query
== More Like This Query
The More Like This Query (MLT Query) finds documents that are "like" a given
set of documents. In order to do so, MLT selects a set of representative terms
@ -87,7 +87,7 @@ present in the index, the syntax is similar to <<docs-termvectors-artificial-doc
}
--------------------------------------------------
==== How it Works
=== How it Works
Suppose we wanted to find all documents similar to a given input document.
Obviously, the input document itself should be its best match for that type of
@ -139,14 +139,14 @@ curl -s -XPUT 'http://localhost:9200/imdb/' -d '{
}
--------------------------------------------------
==== Parameters
=== Parameters
The only required parameter is `like`; all other parameters have sensible
defaults. There are three types of parameters: one to specify the document
input, one for term selection, and one for query formation.
[float]
==== Document Input Parameters
=== Document Input Parameters
[horizontal]
`like`:: coming[2.0]
@ -179,7 +179,7 @@ A list of documents following the same syntax as the <<docs-multi-get,Multi GET
[float]
[[mlt-query-term-selection]]
==== Term Selection Parameters
=== Term Selection Parameters
[horizontal]
`max_query_terms`::
@ -219,7 +219,7 @@ The analyzer that is used to analyze the free form text. Defaults to the
analyzer associated with the first field in `fields`.
[float]
==== Query Formation Parameters
=== Query Formation Parameters
[horizontal]
`minimum_should_match`::


@ -1,5 +1,5 @@
[[query-dsl-multi-match-query]]
=== Multi Match Query
== Multi Match Query
The `multi_match` query builds on the <<query-dsl-match-query,`match` query>>
to allow multi-field queries:
@ -70,7 +70,7 @@ parameter, which can be set to:
combines the `_score` from each field. See <<type-phrase>>.
[[type-best-fields]]
==== `best_fields`
=== `best_fields`
The `best_fields` type is most useful when you are searching for multiple
words best found in the same field. For instance ``brown fox'' in a single
@ -156,7 +156,7 @@ See <<type-cross-fields>> for a better solution.
==================================================
[[type-most-fields]]
==== `most_fields`
=== `most_fields`
The `most_fields` type is most useful when querying multiple fields that
contain the same text analyzed in different ways. For instance, the main
@ -203,7 +203,7 @@ and `cutoff_frequency`, as explained in <<query-dsl-match-query,match query>>, b
*see <<operator-min>>*.
[[type-phrase]]
==== `phrase` and `phrase_prefix`
=== `phrase` and `phrase_prefix`
The `phrase` and `phrase_prefix` types behave just like <<type-best-fields>>,
but they use a `match_phrase` or `match_phrase_prefix` query instead of a
@ -240,7 +240,7 @@ in <<query-dsl-match-query>>. Type `phrase_prefix` additionally accepts
`max_expansions`.
[[type-cross-fields]]
==== `cross_fields`
=== `cross_fields`
The `cross_fields` type is particularly useful with structured documents where
multiple fields *should* match. For instance, when querying the `first_name`
@ -317,7 +317,7 @@ Also, accepts `analyzer`, `boost`, `operator`, `minimum_should_match`,
`zero_terms_query` and `cutoff_frequency`, as explained in
<<query-dsl-match-query, match query>>.
===== `cross_field` and analysis
==== `cross_field` and analysis
The `cross_field` type can only work in term-centric mode on fields that have
the same analyzer. Fields with the same analyzer are grouped together as in
@ -411,7 +411,7 @@ which will be executed as:
blended("will", fields: [first, first.edge, last.edge, last])
blended("smith", fields: [first, first.edge, last.edge, last])
===== `tie_breaker`
==== `tie_breaker`
By default, each per-term `blended` query will use the best score returned by
any field in a group, then these scores are added together to give the final


@ -1,5 +1,5 @@
[[query-dsl-multi-term-rewrite]]
=== Multi Term Query Rewrite
== Multi Term Query Rewrite
Multi term queries, like
<<query-dsl-wildcard-query,wildcard>> and


@ -1,5 +1,5 @@
[[query-dsl-nested-query]]
=== Nested Query
== Nested Query
The `nested` query allows you to query nested objects / docs (see
<<mapping-nested-type,nested mapping>>). The


@ -1,8 +1,7 @@
[[query-dsl-not-filter]]
=== Not Filter
[[query-dsl-not-query]]
== Not Query
A filter that filters out matched documents using a query. Can be placed
within queries that accept a filter.
A query that filters out matched documents using a query. For example:
[source,js]
--------------------------------------------------


@ -1,10 +1,10 @@
[[query-dsl-or-filter]]
=== Or Filter
[[query-dsl-or-query]]
== Or Query
deprecated[2.0.0, Use the `bool` filter instead]
deprecated[2.0.0, Use the `bool` query instead]
A filter that matches documents using the `OR` boolean operator on other
filters. Can be placed within queries that accept a filter.
A query that matches documents using the `OR` boolean operator on other
queries.
[source,js]
--------------------------------------------------


@ -1,5 +1,5 @@
[[query-dsl-prefix-query]]
=== Prefix Query
== Prefix Query
Matches documents that have fields containing terms with a specified
prefix (*not analyzed*). The prefix query maps to Lucene `PrefixQuery`.
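
A sketch that matches documents whose `user` field contains a term starting with `ki` (the field name is illustrative):

[source,js]
--------------------------------------------------
{
    "prefix" : { "user" : "ki" }
}
--------------------------------------------------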


@ -1,83 +0,0 @@
[[query-dsl-queries]]
== Queries
As a general rule, queries should be used instead of filters:
* for full text search
* where the result depends on a relevance score
include::queries/match-query.asciidoc[]
include::queries/multi-match-query.asciidoc[]
include::queries/bool-query.asciidoc[]
include::queries/boosting-query.asciidoc[]
include::queries/common-terms-query.asciidoc[]
include::queries/constant-score-query.asciidoc[]
include::queries/dis-max-query.asciidoc[]
include::queries/filtered-query.asciidoc[]
include::queries/function-score-query.asciidoc[]
include::queries/fuzzy-query.asciidoc[]
include::queries/geo-shape-query.asciidoc[]
include::queries/has-child-query.asciidoc[]
include::queries/has-parent-query.asciidoc[]
include::queries/ids-query.asciidoc[]
include::queries/indices-query.asciidoc[]
include::queries/match-all-query.asciidoc[]
include::queries/mlt-query.asciidoc[]
include::queries/nested-query.asciidoc[]
include::queries/prefix-query.asciidoc[]
include::queries/query-string-query.asciidoc[]
include::queries/simple-query-string-query.asciidoc[]
include::queries/range-query.asciidoc[]
include::queries/regexp-query.asciidoc[]
include::queries/span-containing-query.asciidoc[]
include::queries/span-first-query.asciidoc[]
include::queries/span-multi-term-query.asciidoc[]
include::queries/span-near-query.asciidoc[]
include::queries/span-not-query.asciidoc[]
include::queries/span-or-query.asciidoc[]
include::queries/span-term-query.asciidoc[]
include::queries/span-within-query.asciidoc[]
include::queries/term-query.asciidoc[]
include::queries/terms-query.asciidoc[]
include::queries/top-children-query.asciidoc[]
include::queries/wildcard-query.asciidoc[]
include::queries/minimum-should-match.asciidoc[]
include::queries/multi-term-rewrite.asciidoc[]
include::queries/template-query.asciidoc[]


@ -1,36 +0,0 @@
[[query-dsl-constant-score-query]]
=== Constant Score Query
A query that wraps a filter or another query and simply returns a
constant score equal to the query boost for every document in the
filter. Maps to Lucene `ConstantScoreQuery`.
[source,js]
--------------------------------------------------
{
"constant_score" : {
"filter" : {
"term" : { "user" : "kimchy"}
},
"boost" : 1.2
}
}
--------------------------------------------------
The filter object can hold only filter elements, not queries. Filters
can be much faster compared to queries since they don't perform any
scoring, especially when they are cached.
A query can also be wrapped in a `constant_score` query:
[source,js]
--------------------------------------------------
{
"constant_score" : {
"query" : {
"term" : { "user" : "kimchy"}
},
"boost" : 1.2
}
}
--------------------------------------------------


@ -1,49 +0,0 @@
[[query-dsl-geo-shape-query]]
=== GeoShape Query
Query version of the
<<query-dsl-geo-shape-filter,geo_shape Filter>>.
Requires the <<mapping-geo-shape-type,geo_shape
Mapping>>.
Given a document that looks like this:
[source,js]
--------------------------------------------------
{
"name": "Wind & Wetter, Berlin, Germany",
"location": {
"type": "Point",
"coordinates": [13.400544, 52.530286]
}
}
--------------------------------------------------
The following query will find the point:
[source,js]
--------------------------------------------------
{
"query": {
"geo_shape": {
"location": {
"shape": {
"type": "envelope",
"coordinates": [[13, 53],[14, 52]]
}
}
}
}
}
--------------------------------------------------
See the Filter's documentation for more information.
[float]
==== Relevancy and Score
Currently Elasticsearch does not have any notion of geo shape relevancy,
consequently the Query internally uses a `constant_score` Query which
wraps a <<query-dsl-geo-shape-filter,geo_shape
filter>>.


@ -1,19 +0,0 @@
[[query-dsl-terms-query]]
=== Terms Query
A query that matches on any (configurable number) of the provided terms.
This is a simpler syntax for a `bool` query with several `term`
queries in the `should` clauses. For example:
[source,js]
--------------------------------------------------
{
"terms" : {
"tags" : [ "blue", "pill" ],
"minimum_should_match" : 1
}
}
--------------------------------------------------
The `terms` query is also aliased with `in` as the query name for
simpler usage.


@ -1,5 +1,5 @@
[[query-dsl-query-string-query]]
=== Query String Query
== Query String Query
A query that uses a query parser in order to parse its content. Here is
an example:
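
One possible shape, with `default_field` and the query text as illustrative values:

[source,js]
--------------------------------------------------
{
    "query_string" : {
        "default_field" : "content",
        "query" : "this AND that OR thus"
    }
}
--------------------------------------------------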
@ -89,7 +89,7 @@ rewritten using the
parameter.
[float]
==== Default Field
=== Default Field
When not explicitly specifying the field to search on in the query
string syntax, the `index.query.default_field` will be used to derive
@ -99,7 +99,7 @@ So, if `_all` field is disabled, it might make sense to change it to set
a different default field.
[float]
==== Multi Field
=== Multi Field
The `query_string` query can also run against multiple fields. Fields can be
provided via the `"fields"` parameter (example below).


@ -1,6 +1,6 @@
[[query-string-syntax]]
==== Query string syntax
=== Query string syntax
The query string ``mini-language'' is used by the
<<query-dsl-query-string-query>> and by the
@ -14,7 +14,7 @@ phrase, in the same order.
Operators allow you to customize the search -- the available options are
explained below.
===== Field names
==== Field names
As mentioned in <<query-dsl-query-string-query>>, the `default_field` is searched for the
search terms, but it is possible to specify other fields in the query syntax:
@ -46,7 +46,7 @@ search terms, but it is possible to specify other fields in the query syntax:
_exists_:title
===== Wildcards
==== Wildcards
Wildcard searches can be run on individual terms, using `?` to replace
a single character, and `*` to replace zero or more characters:
@ -72,7 +72,7 @@ is missing some of its letters. However, by setting `analyze_wildcard` to
`true`, an attempt will be made to analyze wildcarded words before searching
the term list for matching terms.
===== Regular expressions
==== Regular expressions
Regular expression patterns can be embedded in the query string by
wrapping them in forward-slashes (`"/"`):
@ -92,7 +92,7 @@ Elasticsearch to visit every term in the index:
Use with caution!
======
===== Fuzziness
==== Fuzziness
We can search for terms that are
similar to, but not exactly like our search terms, using the ``fuzzy''
@ -112,7 +112,7 @@ sufficient to catch 80% of all human misspellings. It can be specified as:
quikc~1
===== Proximity searches
==== Proximity searches
While a phrase query (eg `"john smith"`) expects all of the terms in exactly
the same order, a proximity query allows the specified words to be further
@ -127,7 +127,7 @@ query string, the more relevant that document is considered to be. When
compared to the above example query, the phrase `"quick fox"` would be
considered more relevant than `"quick brown fox"`.
===== Ranges
==== Ranges
Ranges can be specified for date, numeric or string fields. Inclusive ranges
are specified with square brackets `[min TO max]` and exclusive ranges with
@ -178,10 +178,10 @@ would need to join two clauses with an `AND` operator:
===================================================================
The parsing of ranges in query strings can be complex and error prone. It is
much more reliable to use an explicit <<query-dsl-range-filter,`range` filter>>.
much more reliable to use an explicit <<query-dsl-range-query,`range` query>>.
===== Boosting
==== Boosting
Use the _boost_ operator `^` to make one term more relevant than another.
For instance, if we want to find all documents about foxes, but we are
@ -196,7 +196,7 @@ Boosts can also be applied to phrases or to groups:
"john smith"^2 (foo bar)^4
===== Boolean operators
==== Boolean operators
By default, all terms are optional, as long as one term matches. A search
for `foo bar baz` will find any document that contains one or more of
@ -256,7 +256,7 @@ would look like this:
****
===== Grouping
==== Grouping
Multiple terms or clauses can be grouped together with parentheses, to form
sub-queries:
@ -268,7 +268,7 @@ of a sub-query:
status:(active OR pending) title:(full text search)^2
===== Reserved characters
==== Reserved characters
If you need to use any of the characters which function as operators in your
query itself (and not as operators), then you should escape them with
@ -290,7 +290,7 @@ index is actually `"wifi"`. Escaping the space will protect it from
being touched by the query string parser: `"wi\ fi"`.
****
===== Empty Query
==== Empty Query
If the query string is empty or only contains whitespaces the query will
yield an empty result set.


@ -1,5 +1,5 @@
[[query-dsl-range-query]]
=== Range Query
== Range Query
Matches documents with fields that have terms within a certain range.
The type of the Lucene query depends on the field type, for `string`
@ -30,7 +30,7 @@ The `range` query accepts the following parameters:
`boost`:: Sets the boost value of the query, defaults to `1.0`
[float]
==== Date options
=== Date options
When applied on `date` fields the `range` filter accepts also a `time_zone` parameter.
The `time_zone` parameter will be applied to your input lower and upper bounds and will


@ -1,5 +1,5 @@
[[query-dsl-regexp-query]]
=== Regexp Query
== Regexp Query
The `regexp` query allows you to use regular expression term queries.
See <<regexp-syntax>> for details of the supported regular expression language.
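
A minimal sketch, assuming a `name.first` field:

[source,js]
--------------------------------------------------
{
    "regexp" : {
        "name.first" : "s.*y"
    }
}
--------------------------------------------------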


@ -1,5 +1,5 @@
[[regexp-syntax]]
==== Regular expression syntax
=== Regular expression syntax
Regular expression queries are supported by the `regexp` and the `query_string`
queries. The Lucene regular expression engine
@ -11,7 +11,7 @@ We will not attempt to explain regular expressions, but
just explain the supported operators.
====
===== Standard operators
==== Standard operators
Anchoring::
+

View File

@ -1,7 +1,7 @@
[[query-dsl-script-filter]]
=== Script Filter
[[query-dsl-script-query]]
== Script Query
A filter allowing to define
A query allowing to define
<<modules-scripting,scripts>> as filters. For
example:
@ -20,7 +20,7 @@ example:
----------------------------------------------
[float]
==== Custom Parameters
=== Custom Parameters
Scripts are compiled and cached for faster execution. If the same script
can be used, just with different parameters provider, it is preferable

View File

@ -1,5 +1,5 @@
[[query-dsl-simple-query-string-query]]
=== Simple Query String Query
== Simple Query String Query
A query that uses the SimpleQueryParser to parse its context. Unlike the
regular `query_string` query, the `simple_query_string` query will never
@ -73,7 +73,7 @@ In order to search for any of these special characters, they will need to
be escaped with `\`.
[float]
==== Default Field
=== Default Field
When not explicitly specifying the field to search on in the query
string syntax, the `index.query.default_field` will be used to derive
which field to search on. It defaults to `_all` field.
@ -82,7 +82,7 @@ So, if `_all` field is disabled, it might make sense to change it to set
a different default field.
[float]
==== Multi Field
=== Multi Field
The fields parameter can also include pattern based field names,
allowing to automatically expand to the relevant fields (dynamically
introduced fields included). For example:
@ -98,7 +98,7 @@ introduced fields included). For example:
--------------------------------------------------
[float]
==== Flags
=== Flags
`simple_query_string` supports multiple flags to specify which parsing features
should be enabled. It is specified as a `|`-delimited string with the
`flags` parameter:
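Parsing such a `|`-delimited flag string into a combined bitmask can be sketched like this (the flag names and bit values here are illustrative, not the actual Lucene `SimpleQueryParser` constants):

```python
# Illustrative flag values only; the real constants live in Lucene's
# SimpleQueryParser and Elasticsearch's SimpleQueryStringFlag enum.
FLAGS = {"OR": 1, "AND": 2, "NOT": 4, "PREFIX": 8, "PHRASE": 16}
FLAGS["ALL"] = sum(FLAGS.values())  # union of every flag

def parse_flags(spec):
    """Turn a spec such as "OR|AND|PREFIX" into a combined bitmask."""
    mask = 0
    for name in spec.split("|"):
        mask |= FLAGS[name.strip().upper()]
    return mask

print(parse_flags("OR|AND|PREFIX"))  # 11
```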

View File

@ -1,5 +1,5 @@
[[query-dsl-span-containing-query]]
=== Span Containing Query
== Span Containing Query
Returns matches which enclose another span query. The span containing
query maps to Lucene `SpanContainingQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-first-query]]
=== Span First Query
== Span First Query
Matches spans near the beginning of a field. The span first query maps
to Lucene `SpanFirstQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-multi-term-query]]
=== Span Multi Term Query
== Span Multi Term Query
The `span_multi` query allows you to wrap a `multi term query` (one of wildcard,
fuzzy, prefix, term, range or regexp query) as a `span query`, so

View File

@ -1,5 +1,5 @@
[[query-dsl-span-near-query]]
=== Span Near Query
== Span Near Query
Matches spans which are near one another. One can specify _slop_, the
maximum number of intervening unmatched positions, as well as whether

View File

@ -1,5 +1,5 @@
[[query-dsl-span-not-query]]
=== Span Not Query
== Span Not Query
Removes matches which overlap with another span query. The span not
query maps to Lucene `SpanNotQuery`. Here is an example:
@ -38,4 +38,4 @@ Other top level options:
`pre`:: If set the amount of tokens before the include span can't have overlap with the exclude span.
`post`:: If set the amount of tokens after the include span can't have overlap with the exclude span.
`dist`:: If set the amount of tokens from within the include span can't have overlap with the exclude span. Equivalent
of setting both `pre` and `post`.
of setting both `pre` and `post`.
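The `pre`/`post`/`dist` options above amount to widening the exclusion window around the include span. A sketch of that check (invented helper; spans are `(start, end)` token positions with `end` exclusive):

```python
def span_not_matches(include, exclude_spans, pre=0, post=0):
    """Return True if the include span survives, i.e. no exclude span
    overlaps the widened window [start - pre, end + post).

    Setting dist in the real query is equivalent to setting both
    pre and post to the same value.
    """
    start, end = include
    lo, hi = start - pre, end + post
    return all(ex_end <= lo or ex_start >= hi
               for ex_start, ex_end in exclude_spans)

# An exclude span right after the include span:
print(span_not_matches((5, 7), [(8, 9)]))          # True  (no overlap)
print(span_not_matches((5, 7), [(8, 9)], post=2))  # False (window reaches position 8)
```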

View File

@ -1,5 +1,5 @@
[[query-dsl-span-or-query]]
=== Span Or Query
== Span Or Query
Matches the union of its span clauses. The span or query maps to Lucene
`SpanOrQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-term-query]]
=== Span Term Query
== Span Term Query
Matches spans containing a term. The span term query maps to Lucene
`SpanTermQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-span-within-query]]
=== Span Within Query
== Span Within Query
Returns matches which are enclosed inside another span query. The span within
query maps to Lucene `SpanWithinQuery`. Here is an example:

View File

@ -1,5 +1,5 @@
[[query-dsl-template-query]]
=== Template Query
== Template Query
A query that accepts a query template and a map of key/value pairs to fill in
template parameters. Templating is based on Mustache. For simple token substitution all you provide
@ -56,7 +56,7 @@ GET /_search
<1> New line characters (`\n`) should be escaped as `\\n` or removed,
and quotes (`"`) should be escaped as `\\"`.
==== Stored templates
=== Stored templates
You can register a template by storing it in the `config/scripts` directory, in a file using the `.mustache` extension.
In order to execute the stored template, reference it by name in the `file`
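For the simple token-substitution case, Mustache rendering boils down to replacing `{{name}}` tokens with parameter values; a minimal sketch (a regex stand-in, not the real Mustache engine, which also supports sections and conditionals):

```python
import re

def render(template, params):
    """Replace {{name}} tokens with the corresponding values from params.

    Covers only simple token substitution, which is all the sketch needs;
    the template query accepts full Mustache templates.
    """
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(params[m.group(1)]),
                  template)

body = '{"match": {"text": "{{query_string}}"}}'
print(render(body, {"query_string": "all about search"}))
# {"match": {"text": "all about search"}}
```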

View File

@ -1,5 +1,5 @@
[[query-dsl-term-query]]
=== Term Query
== Term Query
Matches documents that have fields that contain a term (*not analyzed*).
The term query maps to Lucene `TermQuery`. The following matches

View File

@ -1,5 +1,5 @@
[[query-dsl-terms-filter]]
=== Terms Filter
[[query-dsl-terms-query]]
== Terms Query
Filters documents that have fields that match any of the provided terms
(*not analyzed*). For example:
@ -15,7 +15,7 @@ Filters documents that have fields that match any of the provided terms
}
--------------------------------------------------
The `terms` filter is also aliased with `in` as the filter name for
The `terms` query is also aliased with `in` as the filter name for
simpler usage.
[float]

View File

@ -1,5 +1,5 @@
[[query-dsl-top-children-query]]
=== Top Children Query
== Top Children Query
deprecated[1.6.0, Use the `has_child` query instead]
@ -45,7 +45,7 @@ including the default values:
--------------------------------------------------
[float]
==== Scope
=== Scope
A `_scope` can be defined on the query allowing to run aggregations on the
same scope name that will work against the child documents. For example:
@ -66,7 +66,7 @@ same scope name that will work against the child documents. For example:
--------------------------------------------------
[float]
==== Memory Considerations
=== Memory Considerations
In order to support parent-child joins, all of the (string) parent IDs
must be resident in memory (in the <<index-modules-fielddata,field data cache>>).

View File

@ -1,8 +1,8 @@
[[query-dsl-type-filter]]
=== Type Filter
[[query-dsl-type-query]]
== Type Query
Filters documents matching the provided document / mapping type. Note,
this filter can work even when the `_type` field is not indexed (using
this query can work even when the `_type` field is not indexed (using
the <<mapping-uid-field,_uid>> field).
[source,js]

View File

@ -1,5 +1,5 @@
[[query-dsl-wildcard-query]]
=== Wildcard Query
== Wildcard Query
Matches documents that have fields matching a wildcard expression (*not
analyzed*). Supported wildcards are `*`, which matches any character

View File

@ -21,7 +21,7 @@ package org.apache.lucene.queryparser.classic;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.elasticsearch.index.query.ExistsFilterParser;
import org.elasticsearch.index.query.ExistsQueryParser;
import org.elasticsearch.index.query.QueryParseContext;
/**
@ -33,6 +33,6 @@ public class ExistsFieldQueryExtension implements FieldQueryExtension {
@Override
public Query query(QueryParseContext parseContext, String queryText) {
return new ConstantScoreQuery(ExistsFilterParser.newFilter(parseContext, queryText, null));
return new ConstantScoreQuery(ExistsQueryParser.newFilter(parseContext, queryText, null));
}
}

View File

@ -21,7 +21,7 @@ package org.apache.lucene.queryparser.classic;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.elasticsearch.index.query.MissingFilterParser;
import org.elasticsearch.index.query.MissingQueryParser;
import org.elasticsearch.index.query.QueryParseContext;
/**
@ -33,7 +33,7 @@ public class MissingFieldQueryExtension implements FieldQueryExtension {
@Override
public Query query(QueryParseContext parseContext, String queryText) {
return new ConstantScoreQuery(MissingFilterParser.newFilter(parseContext, queryText,
MissingFilterParser.DEFAULT_EXISTENCE_VALUE, MissingFilterParser.DEFAULT_NULL_VALUE, null));
return new ConstantScoreQuery(MissingQueryParser.newFilter(parseContext, queryText,
MissingQueryParser.DEFAULT_EXISTENCE_VALUE, MissingQueryParser.DEFAULT_NULL_VALUE, null));
}
}

View File

@ -24,8 +24,12 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.*;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder;
import java.io.IOException;
import java.util.Map;
@ -97,7 +101,7 @@ public class Alias implements Streamable {
/**
* Associates a filter to the alias
*/
public Alias filter(FilterBuilder filterBuilder) {
public Alias filter(QueryBuilder filterBuilder) {
if (filterBuilder == null) {
this.filter = null;
return this;

View File

@ -22,6 +22,7 @@ package org.elasticsearch.action.admin.indices.alias;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.AliasesRequest;
import org.elasticsearch.action.CompositeIndicesRequest;
@ -37,7 +38,7 @@ import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.util.CollectionUtils;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import java.io.IOException;
import java.util.ArrayList;
@ -113,7 +114,7 @@ public class IndicesAliasesRequest extends AcknowledgedRequest<IndicesAliasesReq
return this;
}
public AliasActions filter(FilterBuilder filter) {
public AliasActions filter(QueryBuilder filter) {
aliasAction.filter(filter);
return this;
}
@ -245,7 +246,7 @@ public class IndicesAliasesRequest extends AcknowledgedRequest<IndicesAliasesReq
* @param filterBuilder The filter
* @param indices The indices
*/
public IndicesAliasesRequest addAlias(String alias, FilterBuilder filterBuilder, String... indices) {
public IndicesAliasesRequest addAlias(String alias, QueryBuilder filterBuilder, String... indices) {
addAliasAction(new AliasActions(AliasAction.Type.ADD, indices, alias).filter(filterBuilder));
return this;
}

View File

@ -23,7 +23,7 @@ import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasA
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.cluster.metadata.AliasAction;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import java.util.Map;
@ -115,7 +115,7 @@ public class IndicesAliasesRequestBuilder extends AcknowledgedRequestBuilder<Ind
* @param alias The alias
* @param filterBuilder The filter
*/
public IndicesAliasesRequestBuilder addAlias(String indices[], String alias, FilterBuilder filterBuilder) {
public IndicesAliasesRequestBuilder addAlias(String indices[], String alias, QueryBuilder filterBuilder) {
request.addAlias(alias, filterBuilder, indices);
return this;
}
@ -127,7 +127,7 @@ public class IndicesAliasesRequestBuilder extends AcknowledgedRequestBuilder<Ind
* @param alias The alias
* @param filterBuilder The filter
*/
public IndicesAliasesRequestBuilder addAlias(String index, String alias, FilterBuilder filterBuilder) {
public IndicesAliasesRequestBuilder addAlias(String index, String alias, QueryBuilder filterBuilder) {
request.addAlias(alias, filterBuilder, index);
return this;
}

View File

@ -25,7 +25,6 @@ import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
import org.elasticsearch.search.highlight.HighlightBuilder;
@ -144,14 +143,6 @@ public class PercolateRequestBuilder extends BroadcastOperationRequestBuilder<Pe
return this;
}
/**
* Delegates to {@link PercolateSourceBuilder#setFilterBuilder(FilterBuilder)}
*/
public PercolateRequestBuilder setPercolateFilter(FilterBuilder filterBuilder) {
sourceBuilder().setFilterBuilder(filterBuilder);
return this;
}
/**
* Delegates to {@link PercolateSourceBuilder#setHighlightBuilder(HighlightBuilder)}
*/

View File

@ -20,16 +20,18 @@
package org.elasticsearch.action.percolate;
import com.google.common.collect.Lists;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.*;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
import org.elasticsearch.search.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.ScoreSortBuilder;
import org.elasticsearch.search.sort.SortBuilder;
@ -46,7 +48,6 @@ public class PercolateSourceBuilder implements ToXContent {
private DocBuilder docBuilder;
private QueryBuilder queryBuilder;
private FilterBuilder filterBuilder;
private Integer size;
private List<SortBuilder> sorts;
private Boolean trackScores;
@ -70,14 +71,6 @@ public class PercolateSourceBuilder implements ToXContent {
return this;
}
/**
* Sets a filter to reduce the number of percolate queries to be evaluated.
*/
public PercolateSourceBuilder setFilterBuilder(FilterBuilder filterBuilder) {
this.filterBuilder = filterBuilder;
return this;
}
/**
* Limits the maximum number of percolate query matches to be returned.
*/
@ -149,10 +142,6 @@ public class PercolateSourceBuilder implements ToXContent {
builder.field("query");
queryBuilder.toXContent(builder, params);
}
if (filterBuilder != null) {
builder.field("filter");
filterBuilder.toXContent(builder, params);
}
if (size != null) {
builder.field("size", size);
}

View File

@ -28,7 +28,6 @@ import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.search.Scroll;
@ -236,7 +235,7 @@ public class SearchRequestBuilder extends ActionRequestBuilder<SearchRequest, Se
* Sets a filter that will be executed after the query has been executed and only has affect on the search hits
* (not aggregations). This filter is always executed as last filtering mechanism.
*/
public SearchRequestBuilder setPostFilter(FilterBuilder postFilter) {
public SearchRequestBuilder setPostFilter(QueryBuilder postFilter) {
sourceBuilder().postFilter(postFilter);
return this;
}

View File

@ -28,7 +28,7 @@ import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import java.io.IOException;
import java.util.Map;
@ -154,7 +154,7 @@ public class AliasAction implements Streamable {
}
}
public AliasAction filter(FilterBuilder filterBuilder) {
public AliasAction filter(QueryBuilder filterBuilder) {
if (filterBuilder == null) {
this.filter = null;
return this;

View File

@ -102,4 +102,8 @@ public class ParseField {
return false;
}
@Override
public String toString() {
return getPreferredName();
}
}

View File

@ -26,13 +26,15 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Weight;
import org.apache.lucene.util.BitDocIdSet;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import java.io.IOException;
import java.util.List;
@ -66,7 +68,7 @@ public class FilterableTermsEnum extends TermsEnum {
protected final int docsEnumFlag;
protected int numDocs;
public FilterableTermsEnum(IndexReader reader, String field, int docsEnumFlag, @Nullable final Filter filter) throws IOException {
public FilterableTermsEnum(IndexReader reader, String field, int docsEnumFlag, @Nullable Query filter) throws IOException {
if ((docsEnumFlag != PostingsEnum.FREQS) && (docsEnumFlag != PostingsEnum.NONE)) {
throw new IllegalArgumentException("invalid docsEnumFlag of " + docsEnumFlag);
}
@ -78,6 +80,14 @@ public class FilterableTermsEnum extends TermsEnum {
}
List<LeafReaderContext> leaves = reader.leaves();
List<Holder> enums = Lists.newArrayListWithExpectedSize(leaves.size());
final Weight weight;
if (filter == null) {
weight = null;
} else {
final IndexSearcher searcher = new IndexSearcher(reader);
searcher.setQueryCache(null);
weight = searcher.createNormalizedWeight(filter, false);
}
for (LeafReaderContext context : leaves) {
Terms terms = context.reader().terms(field);
if (terms == null) {
@ -88,21 +98,23 @@ public class FilterableTermsEnum extends TermsEnum {
continue;
}
Bits bits = null;
if (filter != null) {
if (weight != null) {
// we want to force apply deleted docs
DocIdSet docIdSet = filter.getDocIdSet(context, context.reader().getLiveDocs());
if (DocIdSets.isEmpty(docIdSet)) {
Scorer docs = weight.scorer(context, context.reader().getLiveDocs());
if (docs == null) {
// fully filtered, none matching, no need to iterate on this
continue;
}
bits = DocIdSets.toSafeBits(context.reader().maxDoc(), docIdSet);
BitDocIdSet.Builder builder = new BitDocIdSet.Builder(context.reader().maxDoc());
builder.or(docs);
bits = builder.build().bits();
// Count how many docs are in our filtered set
// TODO make this lazy-loaded only for those that need it?
DocIdSetIterator iterator = docIdSet.iterator();
if (iterator != null) {
while (iterator.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
numDocs++;
}
docs = weight.scorer(context, context.reader().getLiveDocs());
while (docs.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
numDocs++;
}
}
enums.add(new Holder(termsEnum, bits));
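The new code path above drains the filter's `Scorer` into a `BitDocIdSet` and then counts the matching docs with a second scorer pass. Stripped of Lucene types, the pattern reduces to (conceptual Python, invented helper):

```python
def bits_and_count(max_doc, doc_ids):
    """Build a per-segment bitset from an iterator of matching doc ids,
    then count the distinct matches.

    One pass over the bitset suffices for the count here, though the
    Java code above obtains a fresh scorer to do its counting pass.
    """
    bits = [False] * max_doc
    for doc_id in doc_ids:
        bits[doc_id] = True
    return bits, sum(bits)

bits, num_docs = bits_and_count(5, iter([0, 2, 2, 4]))
print(num_docs)  # 3
```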

View File

@ -21,9 +21,8 @@ package org.elasticsearch.common.lucene.index;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.lease.Releasable;
import org.elasticsearch.common.lease.Releasables;
@ -48,7 +47,7 @@ public class FreqTermsEnum extends FilterableTermsEnum implements Releasable {
private final boolean needTotalTermFreqs;
public FreqTermsEnum(IndexReader reader, String field, boolean needDocFreq, boolean needTotalTermFreq, @Nullable Filter filter, BigArrays bigArrays) throws IOException {
public FreqTermsEnum(IndexReader reader, String field, boolean needDocFreq, boolean needTotalTermFreq, @Nullable Query filter, BigArrays bigArrays) throws IOException {
super(reader, field, needTotalTermFreq ? PostingsEnum.FREQS : PostingsEnum.NONE, filter);
this.bigArrays = bigArrays;
this.needDocFreqs = needDocFreq;

View File

@ -50,14 +50,6 @@ public class Queries {
return new BooleanQuery();
}
public static Filter newMatchAllFilter() {
return new QueryWrapperFilter(newMatchAllQuery());
}
public static Filter newMatchNoDocsFilter() {
return new QueryWrapperFilter(newMatchNoDocsQuery());
}
public static Filter newNestedFilter() {
return new QueryWrapperFilter(new PrefixQuery(new Term(TypeFieldMapper.NAME, new BytesRef("__"))));
}
@ -66,6 +58,13 @@ public class Queries {
return new QueryWrapperFilter(not(newNestedFilter()));
}
public static BooleanQuery filtered(Query query, Query filter) {
BooleanQuery bq = new BooleanQuery();
bq.add(query, Occur.MUST);
bq.add(filter, Occur.FILTER);
return bq;
}
/** Return a query that matches all documents but those that match the given query. */
public static Query not(Query q) {
BooleanQuery bq = new BooleanQuery();
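The new `filtered(query, filter)` helper combines its two clauses with `Occur.MUST` and `Occur.FILTER`: the filter constrains which documents match but contributes nothing to the score. A conceptual sketch of that semantics (plain functions standing in for Lucene queries):

```python
def filtered_search(docs, query_score, filter_pred):
    """Score docs with query_score, keeping only those that pass
    filter_pred; the filter never changes the score (Occur.FILTER)."""
    hits = []
    for doc_id, text in docs.items():
        if not filter_pred(text):
            continue  # filtered out, regardless of query score
        score = query_score(text)
        if score > 0:
            hits.append((doc_id, score))
    return sorted(hits, key=lambda hit: -hit[1])

docs = {1: "quick brown fox", 2: "quick fox fox", 3: "lazy fox"}
hits = filtered_search(docs, lambda t: t.count("fox"), lambda t: "quick" in t)
print(hits)  # doc 3 matches the query but fails the filter
```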

View File

@ -37,10 +37,10 @@ import java.util.*;
public class FiltersFunctionScoreQuery extends Query {
public static class FilterFunction {
public final Filter filter;
public final Query filter;
public final ScoreFunction function;
public FilterFunction(Filter filter, ScoreFunction function) {
public FilterFunction(Query filter, ScoreFunction function) {
this.filter = filter;
this.function = function;
}

View File

@ -19,7 +19,7 @@
package org.elasticsearch.index.aliases;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.compress.CompressedString;
@ -28,13 +28,13 @@ import org.elasticsearch.common.compress.CompressedString;
*/
public class IndexAlias {
private String alias;
private final String alias;
private CompressedString filter;
private final CompressedString filter;
private Filter parsedFilter;
private final Query parsedFilter;
public IndexAlias(String alias, @Nullable CompressedString filter, @Nullable Filter parsedFilter) {
public IndexAlias(String alias, @Nullable CompressedString filter, @Nullable Query parsedFilter) {
this.alias = alias;
this.filter = filter;
this.parsedFilter = parsedFilter;
@ -50,7 +50,7 @@ public class IndexAlias {
}
@Nullable
public Filter parsedFilter() {
public Query parsedFilter() {
return parsedFilter;
}

View File

@ -22,6 +22,7 @@ package org.elasticsearch.index.aliases;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.compress.CompressedString;
@ -33,7 +34,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.AbstractIndexComponent;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.query.IndexQueryParserService;
import org.elasticsearch.index.query.ParsedFilter;
import org.elasticsearch.index.query.ParsedQuery;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.indices.AliasFilterParsingException;
import org.elasticsearch.indices.InvalidAliasNameException;
@ -82,7 +83,7 @@ public class IndexAliasesService extends AbstractIndexComponent implements Itera
* <p>The list of filtering aliases should be obtained by calling MetaData.filteringAliases.
* Returns <tt>null</tt> if no filtering is required.</p>
*/
public Filter aliasFilter(String... aliases) {
public Query aliasFilter(String... aliases) {
if (aliases == null || aliases.length == 0) {
return null;
}
@ -109,7 +110,7 @@ public class IndexAliasesService extends AbstractIndexComponent implements Itera
return null;
}
}
return new QueryWrapperFilter(combined);
return combined;
}
}
@ -121,15 +122,15 @@ public class IndexAliasesService extends AbstractIndexComponent implements Itera
aliases.remove(alias);
}
private Filter parse(String alias, CompressedString filter) {
private Query parse(String alias, CompressedString filter) {
if (filter == null) {
return null;
}
try {
byte[] filterSource = filter.uncompressed();
try (XContentParser parser = XContentFactory.xContent(filterSource).createParser(filterSource)) {
ParsedFilter parsedFilter = indexQueryParser.parseInnerFilter(parser);
return parsedFilter == null ? null : parsedFilter.filter();
ParsedQuery parsedFilter = indexQueryParser.parseInnerFilter(parser);
return parsedFilter == null ? null : parsedFilter.query();
}
} catch (IOException ex) {
throw new AliasFilterParsingException(index, alias, "Invalid alias filter", ex);

View File

@ -20,8 +20,16 @@
package org.elasticsearch.index.engine;
import com.google.common.base.Preconditions;
import org.apache.lucene.index.*;
import org.apache.lucene.search.Filter;
import org.apache.lucene.index.FilterLeafReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.index.SegmentReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SearcherManager;
@ -52,7 +60,11 @@ import org.elasticsearch.index.translog.Translog;
import java.io.Closeable;
import java.io.IOException;
import java.util.*;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Condition;
@ -888,7 +900,7 @@ public abstract class Engine implements Closeable {
private final Query query;
private final BytesReference source;
private final String[] filteringAliases;
private final Filter aliasFilter;
private final Query aliasFilter;
private final String[] types;
private final BitDocIdSetFilter parentFilter;
private final Operation.Origin origin;
@ -896,7 +908,7 @@ public abstract class Engine implements Closeable {
private final long startTime;
private long endTime;
public DeleteByQuery(Query query, BytesReference source, @Nullable String[] filteringAliases, @Nullable Filter aliasFilter, BitDocIdSetFilter parentFilter, Operation.Origin origin, long startTime, String... types) {
public DeleteByQuery(Query query, BytesReference source, @Nullable String[] filteringAliases, @Nullable Query aliasFilter, BitDocIdSetFilter parentFilter, Operation.Origin origin, long startTime, String... types) {
this.query = query;
this.source = source;
this.types = types;
@ -923,7 +935,7 @@ public abstract class Engine implements Closeable {
return filteringAliases;
}
public Filter aliasFilter() {
public Query aliasFilter() {
return aliasFilter;
}

View File

@ -20,9 +20,26 @@
package org.elasticsearch.index.engine;
import com.google.common.collect.Lists;
import org.apache.lucene.index.*;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriter.IndexReaderWarmer;
import org.apache.lucene.search.*;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.LiveIndexWriterConfig;
import org.apache.lucene.index.MergePolicy;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.AlreadyClosedException;
import org.apache.lucene.store.LockObtainFailedException;
import org.apache.lucene.util.BytesRef;
@ -55,7 +72,11 @@ import org.elasticsearch.threadpool.ThreadPool;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Lock;
@ -595,15 +616,15 @@ public class InternalEngine extends Engine {
private void innerDelete(DeleteByQuery delete) throws EngineException {
try {
Query query;
if (delete.nested() && delete.aliasFilter() != null) {
query = new IncludeNestedDocsQuery(new FilteredQuery(delete.query(), delete.aliasFilter()), delete.parentFilter());
} else if (delete.nested()) {
query = new IncludeNestedDocsQuery(delete.query(), delete.parentFilter());
} else if (delete.aliasFilter() != null) {
query = new FilteredQuery(delete.query(), delete.aliasFilter());
} else {
query = delete.query();
Query query = delete.query();
if (delete.aliasFilter() != null) {
BooleanQuery boolQuery = new BooleanQuery();
boolQuery.add(query, Occur.MUST);
boolQuery.add(delete.aliasFilter(), Occur.FILTER);
query = boolQuery;
}
if (delete.nested()) {
query = new IncludeNestedDocsQuery(query, delete.parentFilter());
}
indexWriter.deleteDocuments(query);

View File

@ -28,6 +28,7 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
@ -181,7 +182,7 @@ public class DocumentMapper implements ToXContent {
private boolean hasNestedObjects = false;
private final Filter typeFilter;
private final Query typeFilter;
private final Object mappersMutex = new Object();
@ -199,7 +200,7 @@ public class DocumentMapper implements ToXContent {
meta);
this.documentParser = new DocumentParser(index, indexSettings, docMapperParser, this);
this.typeFilter = typeMapper().termFilter(type, null);
this.typeFilter = typeMapper().termQuery(type, null);
if (rootMapper(ParentFieldMapper.class).active()) {
// mark the routing field mapper as required
@ -313,7 +314,7 @@ public class DocumentMapper implements ToXContent {
return rootMapper(SizeFieldMapper.class);
}
public Filter typeFilter() {
public Query typeFilter() {
return this.typeFilter;
}

View File

@ -245,26 +245,16 @@ public interface FieldMapper<T> extends Mapper {
Query termQuery(Object value, @Nullable QueryParseContext context);
Filter termFilter(Object value, @Nullable QueryParseContext context);
Filter termsFilter(List values, @Nullable QueryParseContext context);
Filter fieldDataTermsFilter(List values, @Nullable QueryParseContext context);
Query termsQuery(List values, @Nullable QueryParseContext context);
Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context);
Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context);
Query fuzzyQuery(String value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions);
Query prefixQuery(Object value, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryParseContext context);
Filter prefixFilter(Object value, @Nullable QueryParseContext context);
Query regexpQuery(Object value, int flags, int maxDeterminizedStates, @Nullable MultiTermQuery.RewriteMethod method, @Nullable QueryParseContext context);
Filter regexpFilter(Object value, int flags, int maxDeterminizedStates, @Nullable QueryParseContext parseContext);
/**
* A term query to use when parsing a query string. Can return <tt>null</tt>.
*/
@ -275,7 +265,7 @@ public interface FieldMapper<T> extends Mapper {
* Null value filter, returns <tt>null</tt> if there is no null value associated with the field.
*/
@Nullable
Filter nullValueFilter();
Query nullValueFilter();
FieldDataType fieldDataType();

Some files were not shown because too many files have changed in this diff.