Replace deprecated filters with equivalent queries.
In Lucene 5.1 lots of filters got deprecated in favour of equivalent queries. Additionally, random-access to filters is now replaced with approximations on scorers. This commit:

- replaces the deprecated NumericRangeFilter, PrefixFilter, TermFilter and TermsFilter with NumericRangeQuery, PrefixQuery, TermQuery and TermsQuery, wrapped in a QueryWrapperFilter
- replaces XBooleanFilter, AndFilter and OrFilter with a BooleanQuery in a QueryWrapperFilter
- removes DocIdSets.isBroken: the new two-phase iteration API will now help execute slow filters efficiently
- replaces FilterCachingPolicy with QueryCachingPolicy

Close #8960
This commit is contained in:
parent b31e590421
commit d7abb12100
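The heart of the change is mechanical: each deprecated filter becomes its equivalent query, wrapped in a QueryWrapperFilter wherever a Filter is still required. A minimal sketch of the pattern (field and value are illustrative, not from this commit):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;

public class FilterMigrationSketch {
    // Before Lucene 5.1: new TermFilter(new Term(field, value))
    // After: the equivalent query, wrapped so Filter-typed call sites keep working.
    static Filter termFilter(String field, String value) {
        return new QueryWrapperFilter(new TermQuery(new Term(field, value)));
    }
}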
@@ -33,6 +33,11 @@ java.nio.file.Path#toFile()
@defaultMessage Don't use deprecated lucene apis
org.apache.lucene.index.DocsEnum
org.apache.lucene.index.DocsAndPositionsEnum
org.apache.lucene.queries.TermFilter
org.apache.lucene.queries.TermsFilter
org.apache.lucene.search.TermRangeFilter
org.apache.lucene.search.NumericRangeFilter
org.apache.lucene.search.PrefixFilter

java.nio.file.Paths @ Use PathUtils.get instead.
java.nio.file.FileSystems#getDefault() @ use PathUtils.getDefault instead.
@@ -374,9 +374,18 @@ http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
The cluster state api doesn't return the `routing_nodes` section anymore when
`routing_table` is requested. The newly introduced `routing_nodes` flag can
be used separately to control whether `routing_nodes` should be returned.

=== Query DSL

The `fuzzy_like_this` and `fuzzy_like_this_field` queries have been removed.

The `limit` filter is deprecated and becomes a no-op. You can achieve similar
behaviour using the <<search-request-body,terminate_after>> parameter.

`or` and `and` on the one hand and `bool` on the other hand used to have
different performance characteristics depending on the wrapped filters. This is
fixed now; as a consequence, the `or` and `and` filters are now deprecated in
favour of `bool`.

The `execution` option of the `terms` filter is now deprecated and ignored if
provided.
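For reference, what `bool` amounts to internally after this change is a single Lucene `BooleanQuery`: `and` maps to `MUST` clauses and `or` to `SHOULD` clauses. A minimal sketch with illustrative terms (not part of the migration guide itself):

[source,java]
--------------------------------------------------
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;

public class BoolInsteadOfAndOr {
    // An `and` of two clauses: both required. An `or` would use Occur.SHOULD instead.
    static Filter andAsBool() {
        BooleanQuery bool = new BooleanQuery();
        bool.add(new TermQuery(new Term("user", "kimchy")), Occur.MUST);
        bool.add(new TermQuery(new Term("lang", "java")), Occur.MUST);
        return new QueryWrapperFilter(bool); // Filter-typed view where still needed
    }
}
--------------------------------------------------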
@@ -1,6 +1,8 @@
[[query-dsl-and-filter]]
=== And Filter

deprecated[2.0.0, Use the `bool` filter instead]

A filter that matches documents using the `AND` boolean operator on other
filters. Can be placed within queries that accept a filter.
@@ -1,6 +1,8 @@
[[query-dsl-or-filter]]
=== Or Filter

deprecated[2.0.0, Use the `bool` filter instead]

A filter that matches documents using the `OR` boolean operator on other
filters. Can be placed within queries that accept a filter.
@@ -18,71 +18,6 @@ Filters documents that have fields that match any of the provided terms
The `terms` filter is also aliased with `in` as the filter name for
simpler usage.

[float]
==== Execution Mode

The way the terms filter executes is by iterating over the terms provided
and finding matching docs (loading them into a bitset) and caching it.
Sometimes, we want a different execution model that can still be
achieved by building more complex queries in the DSL, but we can support
them in the more compact model that the terms filter provides.

The `execution` option now has the following options:

[horizontal]
`plain`::
    The default. Works as today. Iterates over all the terms,
    building a bit set matching it, and filtering. The total filter is
    cached.

`fielddata`::
    Generates a terms filter that uses the fielddata cache to
    compare terms. This execution mode is great to use when filtering
    on a field that is already loaded into the fielddata cache from
    aggregating, sorting, or index warmers. When filtering on
    a large number of terms, this execution can be considerably faster
    than the other modes. The total filter is not cached unless
    explicitly configured to do so.

`bool`::
    Generates a term filter (which is cached) for each term, and
    wraps those in a bool filter. The bool filter itself is not cached as it
    can operate very quickly on the cached term filters.

`and`::
    Generates a term filter (which is cached) for each term, and
    wraps those in an and filter. The and filter itself is not cached.

`or`::
    Generates a term filter (which is cached) for each term, and
    wraps those in an or filter. The or filter itself is not cached.
    Generally, the `bool` execution mode should be preferred.

If you don't want the generated individual term queries to be cached,
you can use: `bool_nocache`, `and_nocache` or `or_nocache` instead, but
be aware that this will affect performance.

The "total" terms filter caching can still be explicitly controlled
using the `_cache` option. Note the default value for it depends on the
execution value.

For example:

[source,js]
--------------------------------------------------
{
    "constant_score" : {
        "filter" : {
            "terms" : {
                "user" : ["kimchy", "elasticsearch"],
                "execution" : "bool",
                "_cache": true
            }
        }
    }
}
--------------------------------------------------

[float]
==== Caching
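The `plain` mode described above is essentially a union of per-term matches materialized into one cached bitset. A rough sketch of that idea with Lucene 5.x building blocks (the `BitDocIdSet.Builder` usage mirrors what XBooleanFilter does further down in this diff; field and terms are illustrative):

[source,java]
--------------------------------------------------
import java.io.IOException;
import java.util.List;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BitDocIdSet;

public class PlainTermsSketch {
    // OR every term's matches into one bit set -- the shape of the `plain`
    // execution mode before the result is handed to the filter cache.
    static DocIdSet union(LeafReaderContext context, String field, List<String> terms) throws IOException {
        BitDocIdSet.Builder builder = new BitDocIdSet.Builder(context.reader().maxDoc());
        for (String term : terms) {
            Filter f = new QueryWrapperFilter(new TermQuery(new Term(field, term)));
            DocIdSet set = f.getDocIdSet(context, null);
            DocIdSetIterator it = set == null ? null : set.iterator();
            if (it != null) {
                builder.or(it);
            }
        }
        // build() may return null when nothing matched, which is the
        // conventional "no documents" answer for a Filter anyway.
        return builder.build();
    }
}
--------------------------------------------------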
@@ -32,5 +32,5 @@
 - is_true: valid
 - match: {_shards.failed: 0}
 - match: {explanations.0.index: 'testing'}
-- match: {explanations.0.explanation: 'ConstantScore(*:*)'}
+- match: {explanations.0.explanation: '*:*'}
@@ -22,25 +22,20 @@ package org.apache.lucene.search.vectorhighlight;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.BlendedTermQuery;
import org.apache.lucene.queries.FilterClause;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.MultiPhraseQuery;
import org.apache.lucene.search.MultiTermQueryWrapperFilter;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;
import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;

import java.io.IOException;
import java.lang.reflect.Field;
import java.util.Collection;
import java.util.List;
@@ -48,19 +43,9 @@ import java.util.List;
  *
  */
 // LUCENE MONITOR
 // TODO: remove me!
 public class CustomFieldQuery extends FieldQuery {

-    private static Field multiTermQueryWrapperFilterQueryField;
-
-    static {
-        try {
-            multiTermQueryWrapperFilterQueryField = MultiTermQueryWrapperFilter.class.getDeclaredField("query");
-            multiTermQueryWrapperFilterQueryField.setAccessible(true);
-        } catch (NoSuchFieldException e) {
-            // ignore
-        }
-    }
-
     public static final ThreadLocal<Boolean> highlightFilters = new ThreadLocal<>();

     public CustomFieldQuery(Query query, IndexReader reader, FastVectorHighlighter highlighter) throws IOException {
@@ -140,25 +125,8 @@ public class CustomFieldQuery extends FieldQuery {
         if (highlight == null || highlight.equals(Boolean.FALSE)) {
             return;
         }
-        if (sourceFilter instanceof TermFilter) {
-            // TermFilter is just a deprecated wrapper over QWF
-            TermQuery actualQuery = (TermQuery) ((TermFilter) sourceFilter).getQuery();
-            flatten(new TermQuery(actualQuery.getTerm()), reader, flatQueries);
-        } else if (sourceFilter instanceof MultiTermQueryWrapperFilter) {
-            if (multiTermQueryWrapperFilterQueryField != null) {
-                try {
-                    flatten((Query) multiTermQueryWrapperFilterQueryField.get(sourceFilter), reader, flatQueries);
-                } catch (IllegalAccessException e) {
-                    // ignore
-                }
-            }
-        } else if (sourceFilter instanceof XBooleanFilter) {
-            XBooleanFilter booleanFilter = (XBooleanFilter) sourceFilter;
-            for (FilterClause clause : booleanFilter.clauses()) {
-                if (clause.getOccur() == BooleanClause.Occur.MUST || clause.getOccur() == BooleanClause.Occur.SHOULD) {
-                    flatten(clause.getFilter(), reader, flatQueries);
-                }
-            }
-        }
+        if (sourceFilter instanceof QueryWrapperFilter) {
+            flatten(((QueryWrapperFilter) sourceFilter).getQuery(), reader, flatQueries);
+        }
     }
 }
@@ -37,7 +37,6 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel;
 import org.elasticsearch.cluster.routing.GroupShardsIterator;
 import org.elasticsearch.cluster.routing.ShardRouting;
 import org.elasticsearch.common.inject.Inject;
-import org.elasticsearch.common.lucene.search.MatchNoDocsFilter;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.BigArrays;
 import org.elasticsearch.index.IndexService;
@@ -219,7 +218,7 @@ public class TransportValidateQueryAction extends TransportBroadcastOperationAct

     private String getRewrittenQuery(IndexSearcher searcher, Query query) throws IOException {
         Query queryRewrite = searcher.rewrite(query);
-        if (queryRewrite instanceof MatchNoDocsQuery || queryRewrite instanceof MatchNoDocsFilter) {
+        if (queryRewrite instanceof MatchNoDocsQuery) {
             return query.toString();
         } else {
             return queryRewrite.toString();
@@ -19,21 +19,18 @@

package org.elasticsearch.common.lucene.docset;

import com.google.common.collect.Iterables;

import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.ArrayUtil;
import org.apache.lucene.util.BitSet;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.InPlaceMergeSorter;
import org.apache.lucene.util.RamUsageEstimator;
import org.elasticsearch.common.lucene.search.XDocIdSetIterator;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

/**
@@ -93,7 +90,7 @@ public class AndDocIdSet extends DocIdSet {
             return DocIdSetIterator.empty();
         }
         Bits bit = set.bits();
-        if (bit != null && DocIdSets.isBroken(it)) {
+        if (bit != null && bit instanceof BitSet == false) {
             bits.add(bit);
         } else {
             iterators.add(it);
@@ -138,7 +135,7 @@ public class AndDocIdSet extends DocIdSet {
         }
     }

-    static class IteratorBasedIterator extends XDocIdSetIterator {
+    static class IteratorBasedIterator extends DocIdSetIterator {
         private int doc = -1;
         private final DocIdSetIterator lead;
         private final DocIdSetIterator[] otherIterators;
@@ -174,16 +171,6 @@ public class AndDocIdSet extends DocIdSet {
             this.otherIterators = Arrays.copyOfRange(sortedIterators, 1, sortedIterators.length);
         }

-        @Override
-        public boolean isBroken() {
-            for (DocIdSetIterator it : Iterables.concat(Collections.singleton(lead), Arrays.asList(otherIterators))) {
-                if (DocIdSets.isBroken(it)) {
-                    return true;
-                }
-            }
-            return false;
-        }
-
         @Override
         public final int docID() {
             return doc;
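The iterator above is a classic leapfrog intersection: the cheapest iterator leads, and each of its candidates is confirmed by advancing the others. A stripped-down sketch of that loop over plain DocIdSetIterators (a conceptual illustration, not the class's actual code):

import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

public class LeapfrogSketch {
    // Advance all iterators to a common document; `lead` should be the one
    // with the smallest cost() so the others only confirm its candidates.
    static int nextCommonDoc(DocIdSetIterator lead, DocIdSetIterator[] others) throws IOException {
        int doc = lead.nextDoc();
        outer:
        while (doc != DocIdSetIterator.NO_MORE_DOCS) {
            for (DocIdSetIterator other : others) {
                int otherDoc = other.docID();
                if (otherDoc < doc) {
                    otherDoc = other.advance(doc);
                }
                if (otherDoc > doc) {            // missed: restart with the new candidate
                    doc = lead.advance(otherDoc);
                    continue outer;
                }
            }
            return doc;                           // every iterator agrees on `doc`
        }
        return DocIdSetIterator.NO_MORE_DOCS;
    }
}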
@@ -22,8 +22,6 @@ package org.elasticsearch.common.lucene.docset;
 import org.apache.lucene.index.LeafReader;
 import org.apache.lucene.search.DocIdSet;
 import org.apache.lucene.search.DocIdSetIterator;
-import org.apache.lucene.search.DocValuesDocIdSet;
-import org.apache.lucene.search.FilteredDocIdSetIterator;
 import org.apache.lucene.util.BitDocIdSet;
 import org.apache.lucene.util.BitSet;
 import org.apache.lucene.util.Bits;
@@ -33,7 +31,6 @@ import org.apache.lucene.util.SparseFixedBitSet;
 import org.elasticsearch.ElasticsearchIllegalArgumentException;
 import org.elasticsearch.ElasticsearchIllegalStateException;
 import org.elasticsearch.common.Nullable;
-import org.elasticsearch.common.lucene.search.XDocIdSetIterator;

 import java.io.IOException;

@@ -55,31 +52,6 @@ public class DocIdSets {
         return set == null || set == DocIdSet.EMPTY;
     }

-    /**
-     * Check if the given iterator can nextDoc() or advance() in sub-linear time
-     * of the number of documents. For instance, an iterator that would need to
-     * iterate one document at a time to check for its value would be considered
-     * broken.
-     */
-    public static boolean isBroken(DocIdSetIterator iterator) {
-        while (iterator instanceof FilteredDocIdSetIterator) {
-            // this iterator is filtered (likely by some bits)
-            // unwrap in order to check if the underlying iterator is fast
-            iterator = ((FilteredDocIdSetIterator) iterator).getDelegate();
-        }
-        if (iterator instanceof XDocIdSetIterator) {
-            return ((XDocIdSetIterator) iterator).isBroken();
-        }
-        if (iterator instanceof MatchDocIdSetIterator) {
-            return true;
-        }
-        // DocValuesDocIdSet produces anonymous slow iterators
-        if (iterator != null && DocValuesDocIdSet.class.equals(iterator.getClass().getEnclosingClass())) {
-            return true;
-        }
-        return false;
-    }
-
     /**
      * Converts to a cacheable {@link DocIdSet}
      * <p/>
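The removed isBroken machinery existed to detect iterators whose advance() was secretly linear. Two-phase iteration, which the commit message points to as the replacement, makes that cost explicit: a scorer exposes a cheap approximation plus a per-document matches() check. The sketch below illustrates the concept only; the interface is hand-rolled here, not Lucene's actual TwoPhaseIterator API:

import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

public class TwoPhaseSketch {
    /** Conceptual stand-in for two-phase iteration: a fast, over-inclusive
     *  approximation plus a possibly expensive confirmation step. */
    interface TwoPhase {
        DocIdSetIterator approximation();     // cheap, may contain false positives
        boolean matches() throws IOException; // verify the current candidate
    }

    // Consumers drive the approximation and only pay the expensive check per
    // candidate -- no need to know whether the underlying filter was "broken".
    static int countMatches(TwoPhase twoPhase) throws IOException {
        int count = 0;
        DocIdSetIterator approx = twoPhase.approximation();
        for (int doc = approx.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = approx.nextDoc()) {
            if (twoPhase.matches()) {
                count++;
            }
        }
        return count;
    }
}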
@@ -1,181 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.docset;

import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.RamUsageEstimator;

import java.io.IOException;

/**
 * A {@link DocIdSet} that matches the "inverse" of the provided doc id set.
 */
public class NotDocIdSet extends DocIdSet {

    private final DocIdSet set;
    private final int maxDoc;

    public NotDocIdSet(DocIdSet set, int maxDoc) {
        this.maxDoc = maxDoc;
        this.set = set;
    }

    @Override
    public boolean isCacheable() {
        return set.isCacheable();
    }

    @Override
    public long ramBytesUsed() {
        return RamUsageEstimator.NUM_BYTES_OBJECT_REF + RamUsageEstimator.NUM_BYTES_INT + set.ramBytesUsed();
    }

    @Override
    public Bits bits() throws IOException {
        Bits bits = set.bits();
        if (bits == null) {
            return null;
        }
        return new NotBits(bits);
    }

    @Override
    public DocIdSetIterator iterator() throws IOException {
        DocIdSetIterator it = set.iterator();
        if (it == null) {
            return new AllDocIdSet.Iterator(maxDoc);
        }
        // TODO: can we optimize for the FixedBitSet case?
        // if we have bits, its much faster to just check on the flipped end potentially
        // really depends on the nature of the Bits, specifically with FixedBitSet, where
        // most of the docs are set?
        Bits bits = set.bits();
        if (bits != null) {
            return new BitsBasedIterator(bits);
        }
        return new IteratorBasedIterator(maxDoc, it);
    }

    public static class NotBits implements Bits {

        private final Bits bits;

        public NotBits(Bits bits) {
            this.bits = bits;
        }

        @Override
        public boolean get(int index) {
            return !bits.get(index);
        }

        @Override
        public int length() {
            return bits.length();
        }
    }

    public static class BitsBasedIterator extends MatchDocIdSetIterator {

        private final Bits bits;

        public BitsBasedIterator(Bits bits) {
            super(bits.length());
            this.bits = bits;
        }

        @Override
        protected boolean matchDoc(int doc) {
            return !bits.get(doc);
        }

        @Override
        public long cost() {
            return bits.length();
        }
    }

    public static class IteratorBasedIterator extends DocIdSetIterator {
        private final int max;
        private DocIdSetIterator it1;
        private int lastReturn = -1;
        private int innerDocid = -1;
        private final long cost;

        IteratorBasedIterator(int max, DocIdSetIterator it) throws IOException {
            this.max = max;
            this.it1 = it;
            this.cost = it1.cost();
            if ((innerDocid = it1.nextDoc()) == DocIdSetIterator.NO_MORE_DOCS) {
                it1 = null;
            }
        }

        @Override
        public int docID() {
            return lastReturn;
        }

        @Override
        public int nextDoc() throws IOException {
            return advance(0);
        }

        @Override
        public int advance(int target) throws IOException {

            if (lastReturn == DocIdSetIterator.NO_MORE_DOCS) {
                return DocIdSetIterator.NO_MORE_DOCS;
            }

            if (target <= lastReturn) target = lastReturn + 1;

            if (it1 != null && innerDocid < target) {
                if ((innerDocid = it1.advance(target)) == DocIdSetIterator.NO_MORE_DOCS) {
                    it1 = null;
                }
            }

            while (it1 != null && innerDocid == target) {
                target++;
                if (target >= max) {
                    return (lastReturn = DocIdSetIterator.NO_MORE_DOCS);
                }
                if ((innerDocid = it1.advance(target)) == DocIdSetIterator.NO_MORE_DOCS) {
                    it1 = null;
                }
            }

            // ADDED THIS, bug in original code
            if (target >= max) {
                return (lastReturn = DocIdSetIterator.NO_MORE_DOCS);
            }

            return (lastReturn = target);
        }

        @Override
        public long cost() {
            return cost;
        }
    }
}
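With NotDocIdSet gone, negation moves to the query level. This commit's own Queries.not (further down in this diff) expresses it as a BooleanQuery that requires all docs and prohibits the wrapped query; a small grounded sketch of the same shape, with an illustrative term:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class NotSketch {
    // Same shape as Queries.not(Query) in this commit: MUST match-all, MUST_NOT q.
    static Query not(Query q) {
        BooleanQuery bq = new BooleanQuery();
        bq.add(new MatchAllDocsQuery(), Occur.MUST);
        bq.add(q, Occur.MUST_NOT);
        return bq;
    }

    public static void main(String[] args) {
        System.out.println(not(new TermQuery(new Term("user", "kimchy"))));
    }
}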
@@ -1,263 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.docset;

import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.RamUsageEstimator;
import org.elasticsearch.common.lucene.search.XDocIdSetIterator;

import java.io.IOException;

/**
 *
 */
public class OrDocIdSet extends DocIdSet {

    private final DocIdSet[] sets;

    public OrDocIdSet(DocIdSet[] sets) {
        this.sets = sets;
    }

    @Override
    public boolean isCacheable() {
        for (DocIdSet set : sets) {
            if (!set.isCacheable()) {
                return false;
            }
        }
        return true;
    }

    @Override
    public long ramBytesUsed() {
        long ramBytesUsed = RamUsageEstimator.NUM_BYTES_OBJECT_REF + RamUsageEstimator.NUM_BYTES_ARRAY_HEADER;
        for (DocIdSet set : sets) {
            ramBytesUsed += RamUsageEstimator.NUM_BYTES_OBJECT_REF + set.ramBytesUsed();
        }
        return ramBytesUsed;
    }

    @Override
    public Bits bits() throws IOException {
        Bits[] bits = new Bits[sets.length];
        for (int i = 0; i < sets.length; i++) {
            bits[i] = sets[i].bits();
            if (bits[i] == null) {
                return null;
            }
        }
        return new OrBits(bits);
    }

    @Override
    public DocIdSetIterator iterator() throws IOException {
        return new IteratorBasedIterator(sets);
    }

    /** A disjunction between several {@link Bits} instances with short-circuit logic. */
    public static class OrBits implements Bits {

        private final Bits[] bits;

        public OrBits(Bits[] bits) {
            this.bits = bits;
        }

        @Override
        public boolean get(int index) {
            for (Bits bit : bits) {
                if (bit.get(index)) {
                    return true;
                }
            }
            return false;
        }

        @Override
        public int length() {
            return bits[0].length();
        }
    }

    static class IteratorBasedIterator extends XDocIdSetIterator {

        final class Item {
            public final DocIdSetIterator iter;
            public int doc;

            public Item(DocIdSetIterator iter) {
                this.iter = iter;
                this.doc = -1;
            }
        }

        private int _curDoc;
        private final Item[] _heap;
        private int _size;
        private final long cost;
        private final boolean broken;

        IteratorBasedIterator(DocIdSet[] sets) throws IOException {
            _curDoc = -1;
            _heap = new Item[sets.length];
            _size = 0;
            long cost = 0;
            boolean broken = false;
            for (DocIdSet set : sets) {
                DocIdSetIterator iterator = set.iterator();
                broken |= DocIdSets.isBroken(iterator);
                if (iterator != null) {
                    _heap[_size++] = new Item(iterator);
                    cost += iterator.cost();
                }
            }
            this.cost = cost;
            this.broken = broken;
            if (_size == 0) _curDoc = DocIdSetIterator.NO_MORE_DOCS;
        }

        @Override
        public boolean isBroken() {
            return broken;
        }

        @Override
        public final int docID() {
            return _curDoc;
        }

        @Override
        public final int nextDoc() throws IOException {
            if (_curDoc == DocIdSetIterator.NO_MORE_DOCS) return DocIdSetIterator.NO_MORE_DOCS;

            Item top = _heap[0];
            while (true) {
                DocIdSetIterator topIter = top.iter;
                int docid;
                if ((docid = topIter.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
                    top.doc = docid;
                    heapAdjust();
                } else {
                    heapRemoveRoot();
                    if (_size == 0) return (_curDoc = DocIdSetIterator.NO_MORE_DOCS);
                }
                top = _heap[0];
                int topDoc = top.doc;
                if (topDoc > _curDoc) {
                    return (_curDoc = topDoc);
                }
            }
        }

        @Override
        public final int advance(int target) throws IOException {
            if (_curDoc == DocIdSetIterator.NO_MORE_DOCS) return DocIdSetIterator.NO_MORE_DOCS;

            if (target <= _curDoc) target = _curDoc + 1;

            Item top = _heap[0];
            while (true) {
                DocIdSetIterator topIter = top.iter;
                int docid;
                if ((docid = topIter.advance(target)) != DocIdSetIterator.NO_MORE_DOCS) {
                    top.doc = docid;
                    heapAdjust();
                } else {
                    heapRemoveRoot();
                    if (_size == 0) return (_curDoc = DocIdSetIterator.NO_MORE_DOCS);
                }
                top = _heap[0];
                int topDoc = top.doc;
                if (topDoc >= target) {
                    return (_curDoc = topDoc);
                }
            }
        }

        // Organize subScorers into a min heap with scorers generating the earliest document on top.
        /*
        private final void heapify() {
            int size = _size;
            for (int i=(size>>1)-1; i>=0; i--)
                heapAdjust(i);
        }
        */
        /* The subtree of subScorers at root is a min heap except possibly for its root element.
         * Bubble the root down as required to make the subtree a heap.
         */

        private final void heapAdjust() {
            final Item[] heap = _heap;
            final Item top = heap[0];
            final int doc = top.doc;
            final int size = _size;
            int i = 0;

            while (true) {
                int lchild = (i << 1) + 1;
                if (lchild >= size) break;

                Item left = heap[lchild];
                int ldoc = left.doc;

                int rchild = lchild + 1;
                if (rchild < size) {
                    Item right = heap[rchild];
                    int rdoc = right.doc;

                    if (rdoc <= ldoc) {
                        if (doc <= rdoc) break;

                        heap[i] = right;
                        i = rchild;
                        continue;
                    }
                }

                if (doc <= ldoc) break;

                heap[i] = left;
                i = lchild;
            }
            heap[i] = top;
        }

        // Remove the root Scorer from subScorers and re-establish it as a heap

        private void heapRemoveRoot() {
            _size--;
            if (_size > 0) {
                Item tmp = _heap[0];
                _heap[0] = _heap[_size];
                _heap[_size] = tmp; // keep the finished iterator at the end for debugging
                heapAdjust();
            }
        }

        @Override
        public long cost() {
            return cost;
        }

    }
}
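OrDocIdSet maintains its own array-based min-heap keyed by each sub-iterator's current doc. The same disjunction can be sketched with java.util.PriorityQueue, trading the hand-tuned heapAdjust above for the standard library (a conceptual illustration, not a drop-in replacement):

import java.io.IOException;
import java.util.Comparator;
import java.util.PriorityQueue;

import org.apache.lucene.search.DocIdSetIterator;

public class DisjunctionSketch {

    // Build the heap with every sub-iterator advanced to its first doc.
    static PriorityQueue<DocIdSetIterator> heapOf(DocIdSetIterator[] iterators) throws IOException {
        PriorityQueue<DocIdSetIterator> heap = new PriorityQueue<>(Math.max(1, iterators.length),
                new Comparator<DocIdSetIterator>() { // ordered by current doc id, smallest on top
                    @Override
                    public int compare(DocIdSetIterator a, DocIdSetIterator b) {
                        return Integer.compare(a.docID(), b.docID());
                    }
                });
        for (DocIdSetIterator it : iterators) {
            if (it.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
                heap.add(it);
            }
        }
        return heap;
    }

    // Emit the smallest current doc once, advancing every iterator positioned on it.
    static int nextDoc(PriorityQueue<DocIdSetIterator> heap) throws IOException {
        DocIdSetIterator top = heap.peek();
        if (top == null) {
            return DocIdSetIterator.NO_MORE_DOCS;
        }
        final int doc = top.docID();
        while (top != null && top.docID() == doc) {
            heap.poll();
            if (top.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
                heap.add(top); // re-insert at its new position
            }
            top = heap.peek();
        }
        return doc;
    }
}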
@@ -1,99 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;
import org.elasticsearch.common.lucene.docset.AndDocIdSet;
import org.elasticsearch.common.lucene.docset.DocIdSets;

import java.io.IOException;
import java.util.List;

/**
 *
 */
public class AndFilter extends Filter {

    private final List<? extends Filter> filters;

    public AndFilter(List<? extends Filter> filters) {
        this.filters = filters;
    }

    public List<? extends Filter> filters() {
        return filters;
    }

    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        if (filters.size() == 1) {
            return filters.get(0).getDocIdSet(context, acceptDocs);
        }
        DocIdSet[] sets = new DocIdSet[filters.size()];
        for (int i = 0; i < filters.size(); i++) {
            DocIdSet set = filters.get(i).getDocIdSet(context, null);
            if (DocIdSets.isEmpty(set)) { // none matching for this filter, we AND, so return EMPTY
                return null;
            }
            sets[i] = set;
        }
        return BitsFilteredDocIdSet.wrap(new AndDocIdSet(sets), acceptDocs);
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 31 * hash + (null == filters ? 0 : filters.hashCode());
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;

        if ((obj == null) || (obj.getClass() != this.getClass()))
            return false;

        AndFilter other = (AndFilter) obj;
        return equalFilters(filters, other.filters);
    }

    @Override
    public String toString(String field) {
        StringBuilder builder = new StringBuilder();
        for (Filter filter : filters) {
            if (builder.length() > 0) {
                builder.append(' ');
            }
            builder.append('+');
            builder.append(filter);
        }
        return builder.toString();
    }

    private boolean equalFilters(List<? extends Filter> filters1, List<? extends Filter> filters2) {
        return (filters1 == filters2) || ((filters1 != null) && filters1.equals(filters2));
    }
}
@@ -1,66 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;
import org.elasticsearch.common.lucene.docset.AllDocIdSet;

import java.io.IOException;

/**
 * A filter that matches on all docs.
 */
public class MatchAllDocsFilter extends Filter {

    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        return BitsFilteredDocIdSet.wrap(new AllDocIdSet(context.reader().maxDoc()), acceptDocs);
    }

    @Override
    public int hashCode() {
        return this.getClass().hashCode();
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;

        if (obj == null) {
            return false;
        }

        if (obj.getClass() == this.getClass()) {
            return true;
        }

        return false;
    }

    @Override
    public String toString(String field) {
        return "*:*";
    }
}
@@ -1,64 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;

import java.io.IOException;

/**
 * A filter that matches no docs.
 */
public class MatchNoDocsFilter extends Filter {

    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        return null;
    }

    @Override
    public int hashCode() {
        return this.getClass().hashCode();
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;

        if (obj == null) {
            return false;
        }

        if (obj.getClass() == this.getClass()) {
            return true;
        }

        return false;
    }

    @Override
    public String toString(String field) {
        return "MatchNoDocsFilter";
    }
}
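MatchNoDocsFilter works by returning a null DocIdSet, which Lucene treats as "no documents". Its query-level replacement in this commit is simply an empty BooleanQuery (see Queries.newMatchNoDocsQuery further down); a one-method sketch of the same contract:

import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class MatchNoDocsSketch {
    // An empty BooleanQuery has no clause that could match, so it matches
    // nothing -- the same contract as the deleted filter's `return null` above.
    static Query newMatchNoDocsQuery() {
        return new BooleanQuery();
    }
}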
@@ -1,78 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;
import org.elasticsearch.common.lucene.docset.AllDocIdSet;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.common.lucene.docset.NotDocIdSet;

import java.io.IOException;

/**
 *
 */
public class NotFilter extends Filter {

    private final Filter filter;

    public NotFilter(Filter filter) {
        this.filter = filter;
    }

    public Filter filter() {
        return filter;
    }

    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        DocIdSet set = filter.getDocIdSet(context, null);
        DocIdSet notSet;
        if (DocIdSets.isEmpty(set)) {
            notSet = new AllDocIdSet(context.reader().maxDoc());
        } else {
            notSet = new NotDocIdSet(set, context.reader().maxDoc());
        }
        return BitsFilteredDocIdSet.wrap(notSet, acceptDocs);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        NotFilter notFilter = (NotFilter) o;
        return !(filter != null ? !filter.equals(notFilter.filter) : notFilter.filter != null);
    }

    @Override
    public String toString(String field) {
        return "NotFilter(" + filter + ")";
    }

    @Override
    public int hashCode() {
        return filter != null ? filter.hashCode() : 0;
    }
}
@@ -1,108 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.common.lucene.docset.OrDocIdSet;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 *
 */
public class OrFilter extends Filter {

    private final List<? extends Filter> filters;

    public OrFilter(List<? extends Filter> filters) {
        this.filters = filters;
    }

    public List<? extends Filter> filters() {
        return filters;
    }

    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        if (filters.size() == 1) {
            return filters.get(0).getDocIdSet(context, acceptDocs);
        }
        List<DocIdSet> sets = new ArrayList<>(filters.size());
        for (int i = 0; i < filters.size(); i++) {
            DocIdSet set = filters.get(i).getDocIdSet(context, null);
            if (DocIdSets.isEmpty(set)) { // none matching for this filter, continue
                continue;
            }
            sets.add(set);
        }
        if (sets.size() == 0) {
            return null;
        }
        DocIdSet set;
        if (sets.size() == 1) {
            set = sets.get(0);
        } else {
            set = new OrDocIdSet(sets.toArray(new DocIdSet[sets.size()]));
        }
        return BitsFilteredDocIdSet.wrap(set, acceptDocs);
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 31 * hash + (null == filters ? 0 : filters.hashCode());
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;

        if ((obj == null) || (obj.getClass() != this.getClass()))
            return false;

        OrFilter other = (OrFilter) obj;
        return equalFilters(filters, other.filters);
    }

    @Override
    public String toString(String field) {
        StringBuilder builder = new StringBuilder();
        for (Filter filter : filters) {
            if (builder.length() > 0) {
                builder.append(' ');
            }
            builder.append(filter);
        }
        return builder.toString();
    }

    private boolean equalFilters(List<? extends Filter> filters1, List<? extends Filter> filters2) {
        return (filters1 == filters2) || ((filters1 != null) && filters1.equals(filters2));
    }
}
@@ -19,7 +19,15 @@

 package org.elasticsearch.common.lucene.search;

-import org.apache.lucene.search.*;
+import org.apache.lucene.search.BooleanClause;
+import org.apache.lucene.search.BooleanClause.Occur;
+import org.apache.lucene.search.BooleanQuery;
+import org.apache.lucene.search.ConstantScoreQuery;
+import org.apache.lucene.search.Filter;
+import org.apache.lucene.search.MatchAllDocsQuery;
+import org.apache.lucene.search.MatchNoDocsQuery;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.QueryWrapperFilter;
 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.common.SuppressForbidden;
 import org.elasticsearch.index.query.QueryParseContext;
@@ -33,17 +41,8 @@ import java.util.regex.Pattern;
  */
 public class Queries {

-    /**
-     * A match all docs filter. Note, requires no caching!.
-     */
-    public final static Filter MATCH_ALL_FILTER = new MatchAllDocsFilter();
-    public final static Filter MATCH_NO_FILTER = new MatchNoDocsFilter();
-
     public static Query newMatchAllQuery() {
-        // We don't use MatchAllDocsQuery, its slower than the one below ... (much slower)
-        // NEVER cache this XConstantScore Query it's not immutable and based on #3521
-        // some code might set a boost on this query.
-        return new ConstantScoreQuery(MATCH_ALL_FILTER);
+        return new MatchAllDocsQuery();
     }

     /** Return a query that matches no document. */
@@ -51,6 +50,22 @@ public class Queries {
         return new BooleanQuery();
     }

+    public static Filter newMatchAllFilter() {
+        return wrap(newMatchAllQuery());
+    }
+
+    public static Filter newMatchNoDocsFilter() {
+        return wrap(newMatchNoDocsQuery());
+    }
+
+    /** Return a query that matches all documents but those that match the given query. */
+    public static Query not(Query q) {
+        BooleanQuery bq = new BooleanQuery();
+        bq.add(new MatchAllDocsQuery(), Occur.MUST);
+        bq.add(q, Occur.MUST_NOT);
+        return bq;
+    }
+
     public static boolean isNegativeQuery(Query q) {
         if (!(q instanceof BooleanQuery)) {
             return false;
@@ -76,10 +91,11 @@ public class Queries {

     public static boolean isConstantMatchAllQuery(Query query) {
         if (query instanceof ConstantScoreQuery) {
-            ConstantScoreQuery scoreQuery = (ConstantScoreQuery) query;
-            if (scoreQuery.getQuery() instanceof MatchAllDocsFilter || scoreQuery.getQuery() instanceof MatchAllDocsQuery) {
-                return true;
-            }
+            return isConstantMatchAllQuery(((ConstantScoreQuery) query).getQuery());
+        } else if (query instanceof QueryWrapperFilter) {
+            return isConstantMatchAllQuery(((QueryWrapperFilter) query).getQuery());
+        } else if (query instanceof MatchAllDocsQuery) {
+            return true;
         }
         return false;
     }
@@ -151,10 +167,15 @@ public class Queries {
      */
     @SuppressForbidden(reason = "QueryWrapperFilter cachability")
     public static Filter wrap(Query query, QueryParseContext context) {
-        if (context.requireCustomQueryWrappingFilter() || CustomQueryWrappingFilter.shouldUseCustomQueryWrappingFilter(query)) {
+        if ((context != null && context.requireCustomQueryWrappingFilter()) || CustomQueryWrappingFilter.shouldUseCustomQueryWrappingFilter(query)) {
             return new CustomQueryWrappingFilter(query);
         } else {
             return new QueryWrapperFilter(query);
         }
     }

+    /** Wrap as a {@link Filter}. */
+    public static Filter wrap(Query query) {
+        return wrap(query, null);
+    }
 }
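Putting the new helpers together, a small usage sketch of the Queries API after this change (term and field are illustrative):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.elasticsearch.common.lucene.search.Queries;

public class QueriesUsageSketch {
    public static void main(String[] args) {
        Query q = new TermQuery(new Term("user", "kimchy"));
        Filter f = Queries.wrap(q);                            // plain QueryWrapperFilter (no parse context)
        Filter all = Queries.newMatchAllFilter();              // match-all, now query-backed
        Query none = Queries.not(Queries.newMatchAllQuery());  // matches no document
        System.out.println(f + " " + all + " " + none);
    }
}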
@@ -1,110 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.common.lucene.search;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.MultiTermQueryWrapperFilter;
import org.apache.lucene.search.RegexpQuery;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.automaton.Operations;
import org.apache.lucene.util.automaton.RegExp;

import java.io.IOException;

/**
 * A lazy regexp filter which only builds the automaton on the first call to {@link #getDocIdSet(LeafReaderContext, Bits)}.
 * It is not thread safe (so can't be applied on multiple segments concurrently)
 */
public class RegexpFilter extends Filter {

    private final Term term;
    private final int flags;

    // use delegation here to support efficient implementation of equals & hashcode for this
    // filter (as it will be used as the filter cache key)
    private final InternalFilter filter;

    public RegexpFilter(Term term) {
        this(term, RegExp.ALL);
    }

    public RegexpFilter(Term term, int flags) {
        this(term, flags, Operations.DEFAULT_MAX_DETERMINIZED_STATES);
    }

    public RegexpFilter(Term term, int flags, int maxDeterminizedStates) {
        filter = new InternalFilter(term, flags, maxDeterminizedStates);
        this.term = term;
        this.flags = flags;
    }

    public String field() {
        return term.field();
    }

    public String regexp() {
        return term.text();
    }

    public int flags() {
        return flags;
    }

    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        return filter.getDocIdSet(context, acceptDocs);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        org.elasticsearch.common.lucene.search.RegexpFilter that = (org.elasticsearch.common.lucene.search.RegexpFilter) o;

        if (flags != that.flags) return false;
        if (term != null ? !term.equals(that.term) : that.term != null) return false;

        return true;
    }

    @Override
    public int hashCode() {
        int result = term != null ? term.hashCode() : 0;
        result = 31 * result + flags;
        return result;
    }

    @Override
    public String toString(String field) {
        // todo should we also show the flags?
        return term.field() + ":" + term.text();
    }

    static class InternalFilter extends MultiTermQueryWrapperFilter<RegexpQuery> {

        public InternalFilter(Term term, int flags, int maxDeterminizedStates) {
            super(new RegexpQuery(term, flags, maxDeterminizedStates));
        }
    }

}
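After this commit the same behaviour comes from wrapping a RegexpQuery directly; the query builds its automaton at construction time. A minimal sketch (field and pattern are illustrative):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.RegexpQuery;

public class RegexpFilterSketch {
    // RegexpQuery compiles the automaton up front; QueryWrapperFilter gives
    // the Filter-typed view that callers of the deleted class expected.
    static Filter regexpFilter(String field, String regexp) {
        return new QueryWrapperFilter(new RegexpQuery(new Term(field, regexp)));
    }
}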
@@ -1,377 +0,0 @@
package org.elasticsearch.common.lucene.search;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.FilterClause;
import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.BitDocIdSet;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.CollectionUtil;
import org.elasticsearch.common.lucene.docset.AllDocIdSet;
import org.elasticsearch.common.lucene.docset.AndDocIdSet;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.common.lucene.docset.NotDocIdSet;
import org.elasticsearch.common.lucene.docset.OrDocIdSet.OrBits;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;

/**
 * Similar to {@link org.apache.lucene.queries.BooleanFilter}.
 * <p/>
 * Our own variant mainly differs by the fact that we pass the acceptDocs down to the filters
 * and don't filter based on them at the end. Our logic is a bit different, and we filter based on that
 * at the top level filter chain.
 */
public class XBooleanFilter extends Filter implements Iterable<FilterClause> {

    private static final Comparator<DocIdSetIterator> COST_DESCENDING = new Comparator<DocIdSetIterator>() {
        @Override
        public int compare(DocIdSetIterator o1, DocIdSetIterator o2) {
            return Long.compare(o2.cost(), o1.cost());
        }
    };
    private static final Comparator<DocIdSetIterator> COST_ASCENDING = new Comparator<DocIdSetIterator>() {
        @Override
        public int compare(DocIdSetIterator o1, DocIdSetIterator o2) {
            return Long.compare(o1.cost(), o2.cost());
        }
    };

    final List<FilterClause> clauses = new ArrayList<>();

    /**
     * Returns a DocIdSetIterator representing the Boolean composition
     * of the filters that have been added.
     */
    @Override
    public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
        final int maxDoc = context.reader().maxDoc();

        // the 0-clauses case is ambiguous because an empty OR filter should return nothing
        // while an empty AND filter should return all docs, so we handle this case explicitly
        if (clauses.isEmpty()) {
            return null;
        }

        // optimize single case...
        if (clauses.size() == 1) {
            FilterClause clause = clauses.get(0);
            DocIdSet set = clause.getFilter().getDocIdSet(context, acceptDocs);
            if (clause.getOccur() == Occur.MUST_NOT) {
                if (DocIdSets.isEmpty(set)) {
                    return new AllDocIdSet(maxDoc);
                } else {
                    return new NotDocIdSet(set, maxDoc);
                }
            }
            // SHOULD or MUST, just return the set...
            if (DocIdSets.isEmpty(set)) {
                return null;
            }
            return set;
        }

        // We have several clauses, try to organize things to make it easier to process
        List<DocIdSetIterator> shouldIterators = new ArrayList<>();
        List<Bits> shouldBits = new ArrayList<>();
        boolean hasShouldClauses = false;

        List<DocIdSetIterator> requiredIterators = new ArrayList<>();
        List<DocIdSetIterator> excludedIterators = new ArrayList<>();

        List<Bits> requiredBits = new ArrayList<>();
        List<Bits> excludedBits = new ArrayList<>();

        for (FilterClause clause : clauses) {
            DocIdSet set = clause.getFilter().getDocIdSet(context, null);
            DocIdSetIterator it = null;
            Bits bits = null;
            if (DocIdSets.isEmpty(set) == false) {
                it = set.iterator();
                if (it != null) {
                    bits = set.bits();
                }
            }

            switch (clause.getOccur()) {
            case SHOULD:
                hasShouldClauses = true;
                if (it == null) {
                    // continue, but we recorded that there is at least one should clause
                    // so that if all iterators are null we know that nothing matches this
                    // filter since at least one SHOULD clause needs to match
                } else if (bits != null && DocIdSets.isBroken(it)) {
                    shouldBits.add(bits);
                } else {
                    shouldIterators.add(it);
                }
                break;
            case MUST:
                if (it == null) {
                    // no documents matched a clause that is compulsory, then nothing matches at all
                    return null;
                } else if (bits != null && DocIdSets.isBroken(it)) {
                    requiredBits.add(bits);
                } else {
                    requiredIterators.add(it);
                }
                break;
            case MUST_NOT:
                if (it == null) {
                    // ignore
                } else if (bits != null && DocIdSets.isBroken(it)) {
                    excludedBits.add(bits);
                } else {
                    excludedIterators.add(it);
                }
                break;
            default:
                throw new AssertionError();
            }
        }

        // Since BooleanFilter requires that at least one SHOULD clause matches,
        // transform the SHOULD clauses into a MUST clause

        if (hasShouldClauses) {
            if (shouldIterators.isEmpty() && shouldBits.isEmpty()) {
                // we had should clauses, but they all produced empty sets
                // yet BooleanFilter requires that at least one clause matches
                // so it means we do not match anything
                return null;
            } else if (shouldIterators.size() == 1 && shouldBits.isEmpty()) {
                requiredIterators.add(shouldIterators.get(0));
            } else {
                // apply high-cardinality should clauses first
                CollectionUtil.timSort(shouldIterators, COST_DESCENDING);

                BitDocIdSet.Builder shouldBuilder = null;
                for (DocIdSetIterator it : shouldIterators) {
                    if (shouldBuilder == null) {
                        shouldBuilder = new BitDocIdSet.Builder(maxDoc);
                    }
                    shouldBuilder.or(it);
                }

                if (shouldBuilder != null && shouldBits.isEmpty() == false) {
                    // we have both iterators and bits, there is no way to compute
                    // the union efficiently, so we just transform the iterators into
                    // bits
                    // add first since these are fast bits
                    shouldBits.add(0, shouldBuilder.build().bits());
                    shouldBuilder = null;
                }

                if (shouldBuilder == null) {
                    // only bits
                    assert shouldBits.size() >= 1;
                    if (shouldBits.size() == 1) {
                        requiredBits.add(shouldBits.get(0));
                    } else {
                        requiredBits.add(new OrBits(shouldBits.toArray(new Bits[shouldBits.size()])));
                    }
                } else {
                    assert shouldBits.isEmpty();
                    // only iterators, we can add the merged iterator to the list of required iterators
                    requiredIterators.add(shouldBuilder.build().iterator());
                }
            }
        } else {
            assert shouldIterators.isEmpty();
            assert shouldBits.isEmpty();
        }

        // From now on, we don't have to care about SHOULD clauses anymore since we upgraded
        // them to required clauses (if necessary)

        // cheap iterators first to make intersection faster
        CollectionUtil.timSort(requiredIterators, COST_ASCENDING);
        CollectionUtil.timSort(excludedIterators, COST_ASCENDING);

        // Intersect iterators
        BitDocIdSet.Builder res = null;
        for (DocIdSetIterator iterator : requiredIterators) {
            if (res == null) {
                res = new BitDocIdSet.Builder(maxDoc);
                res.or(iterator);
            } else {
                res.and(iterator);
            }
        }
        for (DocIdSetIterator iterator : excludedIterators) {
            if (res == null) {
                res = new BitDocIdSet.Builder(maxDoc, true);
            }
            res.andNot(iterator);
        }

        // Transform the excluded bits into required bits
        if (excludedBits.isEmpty() == false) {
            Bits excluded;
            if (excludedBits.size() == 1) {
                excluded = excludedBits.get(0);
            } else {
                excluded = new OrBits(excludedBits.toArray(new Bits[excludedBits.size()]));
            }
            requiredBits.add(new NotDocIdSet.NotBits(excluded));
        }

        // The only thing left to do is to intersect 'res' with 'requiredBits'

        // the main doc id set that will drive iteration
        DocIdSet main;
        if (res == null) {
            main = new AllDocIdSet(maxDoc);
        } else {
            main = res.build();
        }

        // apply accepted docs and compute the bits to filter with
        // accepted docs are added first since they are fast and will help not computing anything on deleted docs
        if (acceptDocs != null) {
            requiredBits.add(0, acceptDocs);
        }
        // the random-access filter that we will apply to 'main'
        Bits filter;
        if (requiredBits.isEmpty()) {
            filter = null;
        } else if (requiredBits.size() == 1) {
            filter = requiredBits.get(0);
        } else {
            filter = new AndDocIdSet.AndBits(requiredBits.toArray(new Bits[requiredBits.size()]));
        }

        return BitsFilteredDocIdSet.wrap(main, filter);
    }

    /**
     * Adds a new FilterClause to the Boolean Filter container
     *
     * @param filterClause A FilterClause object containing a Filter and an Occur parameter
     */
    public void add(FilterClause filterClause) {
        clauses.add(filterClause);
    }

    public final void add(Filter filter, Occur occur) {
        add(new FilterClause(filter, occur));
    }

    /**
     * Returns the list of clauses
     */
    public List<FilterClause> clauses() {
        return clauses;
    }

    /**
     * Returns an iterator on the clauses in this query. It implements the {@link Iterable} interface to
     * make it possible to do:
* <pre class="prettyprint">for (FilterClause clause : booleanFilter) {}</pre>
|
||||
*/
|
||||
@Override
|
||||
public final Iterator<FilterClause> iterator() {
|
||||
return clauses().iterator();
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(Object obj) {
|
||||
if (this == obj) {
|
||||
return true;
|
||||
}
|
||||
|
||||
if ((obj == null) || (obj.getClass() != this.getClass())) {
|
||||
return false;
|
||||
}
|
||||
|
||||
final XBooleanFilter other = (XBooleanFilter) obj;
|
||||
return clauses.equals(other.clauses);
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return 657153718 ^ clauses.hashCode();
|
||||
}
|
||||
|
||||
/**
|
||||
* Prints a user-readable version of this Filter.
|
||||
*/
|
||||
@Override
|
||||
public String toString(String field) {
|
||||
final StringBuilder buffer = new StringBuilder("BooleanFilter(");
|
||||
final int minLen = buffer.length();
|
||||
for (final FilterClause c : clauses) {
|
||||
if (buffer.length() > minLen) {
|
||||
buffer.append(' ');
|
||||
}
|
||||
buffer.append(c);
|
||||
}
|
||||
return buffer.append(')').toString();
|
||||
}
|
||||
|
||||
static class ResultClause {
|
||||
|
||||
public final DocIdSet docIdSet;
|
||||
public final Bits bits;
|
||||
public final FilterClause clause;
|
||||
|
||||
DocIdSetIterator docIdSetIterator;
|
||||
|
||||
ResultClause(DocIdSet docIdSet, Bits bits, FilterClause clause) {
|
||||
this.docIdSet = docIdSet;
|
||||
this.bits = bits;
|
||||
this.clause = clause;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return An iterator, but caches it for subsequent usage. Don't use if iterator is consumed in one invocation.
|
||||
*/
|
||||
DocIdSetIterator iterator() throws IOException {
|
||||
if (docIdSetIterator != null) {
|
||||
return docIdSetIterator;
|
||||
} else {
|
||||
return docIdSetIterator = docIdSet.iterator();
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
static boolean iteratorMatch(DocIdSetIterator docIdSetIterator, int target) throws IOException {
|
||||
assert docIdSetIterator != null;
|
||||
int current = docIdSetIterator.docID();
|
||||
if (current == DocIdSetIterator.NO_MORE_DOCS || target < current) {
|
||||
return false;
|
||||
} else {
|
||||
if (current == target) {
|
||||
return true;
|
||||
} else {
|
||||
return docIdSetIterator.advance(target) == target;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
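The class removed above is superseded by a plain BooleanQuery wrapped in a QueryWrapperFilter, leaving the iterator/bits bookkeeping to Lucene. A minimal sketch of the equivalent construction, assuming Lucene 5.1 APIs (field and term values are illustrative):

    BooleanQuery bq = new BooleanQuery();
    bq.add(new TermQuery(new Term("status", "published")), BooleanClause.Occur.MUST);
    bq.add(new TermQuery(new Term("status", "hidden")), BooleanClause.Occur.MUST_NOT);
    Filter equivalent = new QueryWrapperFilter(bq); // scores are discarded; matching is identical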
@ -1,40 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.search.DocIdSetIterator;
import org.elasticsearch.common.lucene.docset.DocIdSets;

/**
 * Extension of {@link DocIdSetIterator} that allows to know if iteration is
 * implemented efficiently.
 */
public abstract class XDocIdSetIterator extends DocIdSetIterator {

    /**
     * Return <tt>true</tt> if this iterator cannot both
     * {@link DocIdSetIterator#nextDoc} and {@link DocIdSetIterator#advance}
     * in sub-linear time.
     *
     * Do not call this method directly, use {@link DocIdSets#isBroken}.
     */
    public abstract boolean isBroken();

}
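With XDocIdSetIterator gone, slow filters are driven by Lucene's two-phase iteration rather than the isBroken flag. A sketch of how a consumer uses it, assuming the Lucene 5.1 Scorer#asTwoPhaseIterator API:

    TwoPhaseIterator twoPhase = scorer.asTwoPhaseIterator();
    if (twoPhase != null) {
        DocIdSetIterator approximation = twoPhase.approximation();
        for (int doc = approximation.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = approximation.nextDoc()) {
            if (twoPhase.matches()) { // the costly per-document check runs last
                // doc is a real match
            }
        }
    }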
@ -19,13 +19,13 @@

package org.elasticsearch.index.aliases;

import org.apache.lucene.queries.FilterClause;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.compress.CompressedString;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.common.xcontent.XContentFactory;
@ -95,7 +95,7 @@ public class IndexAliasesService extends AbstractIndexComponent implements Itera
            return indexAlias.parsedFilter();
        } else {
            // we need to bench here a bit, to see maybe it makes sense to use OrFilter
            XBooleanFilter combined = new XBooleanFilter();
            BooleanQuery combined = new BooleanQuery();
            for (String alias : aliases) {
                IndexAlias indexAlias = alias(alias);
                if (indexAlias == null) {
@ -103,19 +103,13 @@ public class IndexAliasesService extends AbstractIndexComponent implements Itera
                    throw new InvalidAliasNameException(index, aliases[0], "Unknown alias name was passed to alias Filter");
                }
                if (indexAlias.parsedFilter() != null) {
                    combined.add(new FilterClause(indexAlias.parsedFilter(), BooleanClause.Occur.SHOULD));
                    combined.add(indexAlias.parsedFilter(), BooleanClause.Occur.SHOULD);
                } else {
                    // The filter might be null only if filter was removed after filteringAliases was called
                    return null;
                }
            }
            if (combined.clauses().size() == 0) {
                return null;
            }
            if (combined.clauses().size() == 1) {
                return combined.clauses().get(0).getFilter();
            }
            return combined;
            return Queries.wrap(combined);
        }
    }

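The new code path leans on the Queries.wrap helper; its assumed shape is simply a QueryWrapperFilter around the given query (shown for illustration, the real implementation lives in org.elasticsearch.common.lucene.search.Queries):

    public static Filter wrap(Query query) {
        return new QueryWrapperFilter(query);
    }

The explicit single-clause unwrapping could be dropped because a BooleanQuery with one non-prohibited clause rewrites to the wrapped query itself.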
@ -1,103 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.index.cache.filter;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.UsageTrackingFilterCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.AbstractIndexComponent;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.settings.IndexSettings;

import java.io.IOException;

/**
 * This class is a wrapper around {@link UsageTrackingFilterCachingPolicy}
 * which wires parameters through index settings and makes sure to not
 * cache {@link DocIdSet}s which have a {@link DocIdSets#isBroken(DocIdSetIterator) broken}
 * iterator.
 */
public class AutoFilterCachingPolicy extends AbstractIndexComponent implements FilterCachingPolicy {

    // These settings don't have the purpose of being documented. They are only here so that
    // if anyone ever hits an issue with elasticsearch that is due to the value of one of these
    // parameters, then it might be possible to temporarily work around the issue without having
    // to wait for a new release

    // number of times a filter that is expensive to compute should be seen before the doc id sets are cached
    public static final String MIN_FREQUENCY_COSTLY = "index.cache.filter.policy.min_frequency.costly";
    // number of times a filter that produces cacheable filters should be seen before the doc id sets are cached
    public static final String MIN_FREQUENCY_CACHEABLE = "index.cache.filter.policy.min_frequency.cacheable";
    // same for filters that produce doc id sets that are not directly cacheable
    public static final String MIN_FREQUENCY_OTHER = "index.cache.filter.policy.min_frequency.other";
    // sources of segments that should be cached
    public static final String MIN_SEGMENT_SIZE_RATIO = "index.cache.filter.policy.min_segment_size_ratio";
    // size of the history to keep for filters. A filter will be cached if it has been seen more than a given
    // number of times (depending on the filter, the segment and the produced DocIdSet) in the most
    // ${history_size} recently used filters
    public static final String HISTORY_SIZE = "index.cache.filter.policy.history_size";

    public static Settings AGGRESSIVE_CACHING_SETTINGS = ImmutableSettings.builder()
            .put(MIN_FREQUENCY_CACHEABLE, 1)
            .put(MIN_FREQUENCY_COSTLY, 1)
            .put(MIN_FREQUENCY_OTHER, 1)
            .put(MIN_SEGMENT_SIZE_RATIO, 0.000000001f)
            .build();

    private final FilterCachingPolicy in;

    @Inject
    public AutoFilterCachingPolicy(Index index, @IndexSettings Settings indexSettings) {
        super(index, indexSettings);
        final int historySize = indexSettings.getAsInt(HISTORY_SIZE, 1000);
        // cache aggressively filters that produce sets that are already cacheable,
        // ie. if the filter has been used twice or more among the most 1000 recently
        // used filters
        final int minFrequencyCacheable = indexSettings.getAsInt(MIN_FREQUENCY_CACHEABLE, 2);
        // cache aggressively filters whose getDocIdSet method is costly
        final int minFrequencyCostly = indexSettings.getAsInt(MIN_FREQUENCY_COSTLY, 2);
        // be a bit less aggressive when the produced doc id sets are not cacheable
        final int minFrequencyOther = indexSettings.getAsInt(MIN_FREQUENCY_OTHER, 5);
        final float minSegmentSizeRatio = indexSettings.getAsFloat(MIN_SEGMENT_SIZE_RATIO, 0.01f);
        in = new UsageTrackingFilterCachingPolicy(minSegmentSizeRatio, historySize, minFrequencyCostly, minFrequencyCacheable, minFrequencyOther);
    }

    @Override
    public void onUse(Filter filter) {
        in.onUse(filter);
    }

    @Override
    public boolean shouldCache(Filter filter, LeafReaderContext context, DocIdSet set) throws IOException {
        if (set != null && DocIdSets.isBroken(set.iterator())) {
            // O(maxDoc) to cache, no thanks.
            return false;
        }

        return in.shouldCache(filter, context, set);
    }

}
@ -20,7 +20,7 @@
package org.elasticsearch.index.cache.filter;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.index.IndexComponent;
@ -48,7 +48,7 @@ public interface FilterCache extends IndexComponent, Closeable {

    String type();

    Filter cache(Filter filterToCache, @Nullable HashedBytesRef cacheKey, FilterCachingPolicy policy);
    Filter cache(Filter filterToCache, @Nullable HashedBytesRef cacheKey, QueryCachingPolicy policy);

    void clear(Object reader);

@ -19,7 +19,8 @@

package org.elasticsearch.index.cache.filter;

import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.common.inject.Scopes;
import org.elasticsearch.common.settings.Settings;
@ -32,6 +33,8 @@ public class FilterCacheModule extends AbstractModule {

    public static final class FilterCacheSettings {
        public static final String FILTER_CACHE_TYPE = "index.cache.filter.type";
        // for test purposes only
        public static final String FILTER_CACHE_EVERYTHING = "index.cache.filter.everything";
    }

    private final Settings settings;
@ -48,7 +51,10 @@ public class FilterCacheModule extends AbstractModule {
        // the filter cache is a node-level thing, however we want the most popular filters
        // to be computed on a per-index basis, that is why we don't use the SINGLETON
        // scope below
        bind(FilterCachingPolicy.class)
                .to(AutoFilterCachingPolicy.class);
        if (settings.getAsBoolean(FilterCacheSettings.FILTER_CACHE_EVERYTHING, false)) {
            bind(QueryCachingPolicy.class).toInstance(QueryCachingPolicy.ALWAYS_CACHE);
        } else {
            bind(QueryCachingPolicy.class).toInstance(new UsageTrackingQueryCachingPolicy());
        }
    }
}
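For reference, QueryCachingPolicy in Lucene 5.x is a two-method interface; a trivial cache-everything policy, equivalent in spirit to the ALWAYS_CACHE instance bound above, would look like this (illustrative only):

    QueryCachingPolicy cacheAll = new QueryCachingPolicy() {
        @Override
        public void onUse(Query query) {
            // no usage tracking needed when everything is cached
        }
        @Override
        public boolean shouldCache(Query query, LeafReaderContext context) throws IOException {
            return true;
        }
    };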
@ -20,7 +20,7 @@
package org.elasticsearch.index.cache.filter.none;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
@ -58,7 +58,7 @@ public class NoneFilterCache extends AbstractIndexComponent implements FilterCac
    }

    @Override
    public Filter cache(Filter filterToCache, @Nullable HashedBytesRef cacheKey, FilterCachingPolicy policy) {
    public Filter cache(Filter filterToCache, @Nullable HashedBytesRef cacheKey, QueryCachingPolicy policy) {
        return filterToCache;
    }

@ -29,7 +29,7 @@ import org.apache.lucene.index.SegmentReader;
import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.util.Bits;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Nullable;
@ -128,7 +128,7 @@ public class WeightedFilterCache extends AbstractIndexComponent implements Filte
    }

    @Override
    public Filter cache(Filter filterToCache, @Nullable HashedBytesRef cacheKey, FilterCachingPolicy cachePolicy) {
    public Filter cache(Filter filterToCache, @Nullable HashedBytesRef cacheKey, QueryCachingPolicy cachePolicy) {
        if (filterToCache == null) {
            return null;
        }
@ -148,10 +148,10 @@ public class WeightedFilterCache extends AbstractIndexComponent implements Filte

        private final Filter filter;
        private final Object filterCacheKey;
        private final FilterCachingPolicy cachePolicy;
        private final QueryCachingPolicy cachePolicy;
        private final WeightedFilterCache cache;

        FilterCacheFilterWrapper(Filter filter, Object cacheKey, FilterCachingPolicy cachePolicy, WeightedFilterCache cache) {
        FilterCacheFilterWrapper(Filter filter, Object cacheKey, QueryCachingPolicy cachePolicy, WeightedFilterCache cache) {
            this.filter = filter;
            this.filterCacheKey = cacheKey != null ? cacheKey : filter;
            this.cachePolicy = cachePolicy;
@ -172,7 +172,7 @@ public class WeightedFilterCache extends AbstractIndexComponent implements Filte
                ret = cacheValue;
            } else {
                final DocIdSet uncached = filter.getDocIdSet(context, null);
                if (cachePolicy.shouldCache(filter, context, uncached)) {
                if (cachePolicy.shouldCache(filter, context)) {
                    if (!cache.seenReaders.containsKey(context.reader().getCoreCacheKey())) {
                        Boolean previous = cache.seenReaders.putIfAbsent(context.reader().getCoreCacheKey(), Boolean.TRUE);
                        if (previous == null) {
@ -31,11 +31,13 @@ import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.DelegatingAnalyzerWrapper;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.FilterClause;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
@ -47,9 +49,7 @@ import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.FileSystemUtils;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.io.Streams;
import org.elasticsearch.common.lucene.search.AndFilter;
import org.elasticsearch.common.lucene.search.NotFilter;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
@ -446,18 +446,21 @@ public class MapperService extends AbstractIndexComponent {
                }
            }
        }
        Filter excludePercolatorType = null;
        Filter percolatorType = null;
        if (filterPercolateType) {
            excludePercolatorType = new NotFilter(documentMapper(PercolatorService.TYPE_NAME).typeFilter());
            percolatorType = documentMapper(PercolatorService.TYPE_NAME).typeFilter();
        }

        if (types == null || types.length == 0) {
            if (hasNested && filterPercolateType) {
                return new AndFilter(ImmutableList.of(excludePercolatorType, NonNestedDocsFilter.INSTANCE));
                BooleanQuery bq = new BooleanQuery();
                bq.add(percolatorType, Occur.MUST_NOT);
                bq.add(NonNestedDocsFilter.INSTANCE, Occur.MUST);
                return Queries.wrap(bq);
            } else if (hasNested) {
                return NonNestedDocsFilter.INSTANCE;
            } else if (filterPercolateType) {
                return excludePercolatorType;
                return Queries.wrap(Queries.not(percolatorType));
            } else {
                return null;
            }
@ -466,9 +469,12 @@
        // since they have different types (starting with __)
        if (types.length == 1) {
            DocumentMapper docMapper = documentMapper(types[0]);
            Filter filter = docMapper != null ? docMapper.typeFilter() : new TermFilter(new Term(types[0]));
            if (hasNested) {
                return new AndFilter(ImmutableList.of(filter, NonNestedDocsFilter.INSTANCE));
            Filter filter = docMapper != null ? docMapper.typeFilter() : Queries.wrap(new TermQuery(new Term(TypeFieldMapper.NAME, types[0])));
            if (filterPercolateType) {
                BooleanQuery bq = new BooleanQuery();
                bq.add(percolatorType, Occur.MUST_NOT);
                bq.add(filter, Occur.MUST);
                return Queries.wrap(bq);
            } else {
                return filter;
            }
@ -493,31 +499,34 @@
            for (int i = 0; i < typesBytes.length; i++) {
                typesBytes[i] = new BytesRef(types[i]);
            }
            TermsFilter termsFilter = new TermsFilter(TypeFieldMapper.NAME, typesBytes);
            TermsQuery termsFilter = new TermsQuery(TypeFieldMapper.NAME, typesBytes);
            if (filterPercolateType) {
                return new AndFilter(ImmutableList.of(excludePercolatorType, termsFilter));
                BooleanQuery bq = new BooleanQuery();
                bq.add(percolatorType, Occur.MUST_NOT);
                bq.add(termsFilter, Occur.MUST);
                return Queries.wrap(bq);
            } else {
                return termsFilter;
                return Queries.wrap(termsFilter);
            }
        } else {
            // Current bool filter requires that at least one should clause matches, even with a must clause.
            XBooleanFilter bool = new XBooleanFilter();
            BooleanQuery bool = new BooleanQuery();
            for (String type : types) {
                DocumentMapper docMapper = documentMapper(type);
                if (docMapper == null) {
                    bool.add(new FilterClause(new TermFilter(new Term(TypeFieldMapper.NAME, type)), BooleanClause.Occur.SHOULD));
                    bool.add(new TermQuery(new Term(TypeFieldMapper.NAME, type)), BooleanClause.Occur.SHOULD);
                } else {
                    bool.add(new FilterClause(docMapper.typeFilter(), BooleanClause.Occur.SHOULD));
                    bool.add(docMapper.typeFilter(), BooleanClause.Occur.SHOULD);
                }
            }
            if (filterPercolateType) {
                bool.add(excludePercolatorType, BooleanClause.Occur.MUST);
                bool.add(percolatorType, BooleanClause.Occur.MUST_NOT);
            }
            if (hasNested) {
                bool.add(NonNestedDocsFilter.INSTANCE, BooleanClause.Occur.MUST);
            }

            return bool;
            return Queries.wrap(bool);
        }
    }

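One subtlety above: a BooleanQuery that contains only MUST_NOT clauses matches nothing, which is why the pure-exclusion branch goes through Queries.not instead of a bare prohibited clause. The assumed shape of that helper pairs the exclusion with a match-all clause (shown for illustration only):

    public static Query not(Query q) {
        BooleanQuery bq = new BooleanQuery();
        bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
        bq.add(q, BooleanClause.Occur.MUST_NOT);
        return bq;
    }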
@ -29,17 +29,14 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RegexpQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TermRangeFilter;
import org.apache.lucene.search.TermRangeQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
@ -49,8 +46,7 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.search.MatchNoDocsFilter;
import org.elasticsearch.common.lucene.search.RegexpFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
@ -496,14 +492,14 @@ public abstract class AbstractFieldMapper<T> implements FieldMapper<T> {

    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        return new TermFilter(names().createIndexNameTerm(indexedValueForSearch(value)));
        return Queries.wrap(new TermQuery(names().createIndexNameTerm(indexedValueForSearch(value))));
    }

    @Override
    public Filter termsFilter(List values, @Nullable QueryParseContext context) {
        switch (values.size()) {
            case 0:
                return new MatchNoDocsFilter();
                return Queries.newMatchNoDocsFilter();
            case 1:
                // When there is a single term, it's important to return a term filter so that
                // it can return a DocIdSet that is directly backed by a postings list, instead
@ -515,7 +511,7 @@
                for (int i = 0; i < bytesRefs.length; i++) {
                    bytesRefs[i] = indexedValueForSearch(values.get(i));
                }
                return new TermsFilter(names.indexName(), bytesRefs);
                return Queries.wrap(new TermsQuery(names.indexName(), bytesRefs));

        }
    }
@ -545,10 +541,10 @@

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return new TermRangeFilter(names.indexName(),
        return Queries.wrap(new TermRangeQuery(names.indexName(),
                lowerTerm == null ? null : indexedValueForSearch(lowerTerm),
                upperTerm == null ? null : indexedValueForSearch(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
@ -567,7 +563,7 @@

    @Override
    public Filter prefixFilter(Object value, @Nullable QueryParseContext context) {
        return new PrefixFilter(names().createIndexNameTerm(indexedValueForSearch(value)));
        return Queries.wrap(new PrefixQuery(names().createIndexNameTerm(indexedValueForSearch(value))));
    }

    @Override
@ -581,7 +577,7 @@

    @Override
    public Filter regexpFilter(Object value, int flags, int maxDeterminizedStates, @Nullable QueryParseContext parseContext) {
        return new RegexpFilter(names().createIndexNameTerm(indexedValueForSearch(value)), flags, maxDeterminizedStates);
        return Queries.wrap(new RegexpQuery(names().createIndexNameTerm(indexedValueForSearch(value)), flags, maxDeterminizedStates));
    }

    @Override
@ -23,14 +23,15 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.Booleans;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
@ -205,7 +206,7 @@ public class BooleanFieldMapper extends AbstractFieldMapper<Boolean> {
        if (nullValue == null) {
            return null;
        }
        return new TermFilter(names().createIndexNameTerm(nullValue ? Values.TRUE : Values.FALSE));
        return Queries.wrap(new TermQuery(names().createIndexNameTerm(nullValue ? Values.TRUE : Values.FALSE)));
    }

    @Override
@ -24,7 +24,6 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -34,6 +33,7 @@ import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -218,16 +218,16 @@ public class ByteFieldMapper extends NumberFieldMapper<Byte> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        int iValue = parseValueAsInt(value);
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
                iValue, iValue, true, true);
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                iValue, iValue, true, true));
    }

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                lowerTerm == null ? null : parseValueAsInt(lowerTerm),
                upperTerm == null ? null : parseValueAsInt(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
@ -243,10 +243,10 @@ public class ByteFieldMapper extends NumberFieldMapper<Byte> {
        if (nullValue == null) {
            return null;
        }
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                nullValue.intValue(),
                nullValue.intValue(),
                true, true);
                true, true));
    }

    @Override
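The same mechanical rewrite repeats for every numeric mapper that follows: build the equivalent NumericRangeQuery and wrap it so it can still be handed out where a Filter is expected. Shown standalone with illustrative field name and bounds:

    // Before (deprecated in Lucene 5.1):
    //   Filter f = NumericRangeFilter.newIntRange("age", 4, 18, 65, true, true);
    // After:
    Filter f = new QueryWrapperFilter(
            NumericRangeQuery.newIntRange("age", 4, 18, 65, true, true));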
@ -24,7 +24,6 @@ import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -40,6 +39,7 @@ import org.elasticsearch.common.joda.DateMathParser;
import org.elasticsearch.common.joda.FormatDateTimeFormatter;
import org.elasticsearch.common.joda.Joda;
import org.elasticsearch.common.lucene.search.NoCacheQuery;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.lucene.search.ResolvableFilter;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
@ -326,8 +326,8 @@ public class DateFieldMapper extends NumberFieldMapper<Long> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        final long lValue = parseToMilliseconds(value);
        return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
                lValue, lValue, true, true);
        return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
                lValue, lValue, true, true));
    }

    @Override
@ -405,9 +405,9 @@ public class DateFieldMapper extends NumberFieldMapper<Long> {
            if (fieldData != null) {
                filter = NumericRangeFieldDataFilter.newLongRange(fieldData, lowerVal,upperVal, includeLower, includeUpper);
            } else {
                filter = NumericRangeFilter.newLongRange(
                filter = Queries.wrap(NumericRangeQuery.newLongRange(
                        names.indexName(), precisionStep, lowerVal, upperVal, includeLower, includeUpper
                );
                ));
            }

            return filter;
@ -419,10 +419,10 @@ public class DateFieldMapper extends NumberFieldMapper<Long> {
            return null;
        }
        long value = parseStringValue(nullValue);
        return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
                value,
                value,
                true, true);
                true, true));
    }

@ -28,7 +28,6 @@ import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.DocValuesType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -38,6 +37,7 @@ import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.util.ByteUtils;
@ -209,20 +209,20 @@ public class DoubleFieldMapper extends NumberFieldMapper<Double> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        double dValue = parseDoubleValue(value);
        return NumericRangeFilter.newDoubleRange(names.indexName(), precisionStep,
                dValue, dValue, true, true);
        return Queries.wrap(NumericRangeQuery.newDoubleRange(names.indexName(), precisionStep,
                dValue, dValue, true, true));
    }

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFilter.newDoubleRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newDoubleRange(names.indexName(), precisionStep,
                lowerTerm == null ? null : parseDoubleValue(lowerTerm),
                upperTerm == null ? null : parseDoubleValue(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    public Filter rangeFilter(Double lowerTerm, Double upperTerm, boolean includeLower, boolean includeUpper) {
        return NumericRangeFilter.newDoubleRange(names.indexName(), precisionStep, lowerTerm, upperTerm, includeLower, includeUpper);
        return Queries.wrap(NumericRangeQuery.newDoubleRange(names.indexName(), precisionStep, lowerTerm, upperTerm, includeLower, includeUpper));
    }

    @Override
@ -238,10 +238,10 @@ public class DoubleFieldMapper extends NumberFieldMapper<Double> {
        if (nullValue == null) {
            return null;
        }
        return NumericRangeFilter.newDoubleRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newDoubleRange(names.indexName(), precisionStep,
                nullValue,
                nullValue,
                true, true);
                true, true));
    }

    @Override
@ -28,7 +28,6 @@ import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.DocValuesType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -39,6 +38,7 @@ import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.util.ByteUtils;
@ -219,16 +219,16 @@ public class FloatFieldMapper extends NumberFieldMapper<Float> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        float fValue = parseValue(value);
        return NumericRangeFilter.newFloatRange(names.indexName(), precisionStep,
                fValue, fValue, true, true);
        return Queries.wrap(NumericRangeQuery.newFloatRange(names.indexName(), precisionStep,
                fValue, fValue, true, true));
    }

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFilter.newFloatRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newFloatRange(names.indexName(), precisionStep,
                lowerTerm == null ? null : parseValue(lowerTerm),
                upperTerm == null ? null : parseValue(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
@ -244,10 +244,10 @@ public class FloatFieldMapper extends NumberFieldMapper<Float> {
        if (nullValue == null) {
            return null;
        }
        return NumericRangeFilter.newFloatRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newFloatRange(names.indexName(), precisionStep,
                nullValue,
                nullValue,
                true, true);
                true, true));
    }

    @Override
@ -25,7 +25,6 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -36,6 +35,7 @@ import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -205,8 +205,8 @@ public class IntegerFieldMapper extends NumberFieldMapper<Integer> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        int iValue = parseValue(value);
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
                iValue, iValue, true, true);
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                iValue, iValue, true, true));
    }

    @Override
@ -219,10 +219,10 @@ public class IntegerFieldMapper extends NumberFieldMapper<Integer> {

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                lowerTerm == null ? null : parseValue(lowerTerm),
                upperTerm == null ? null : parseValue(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
@ -238,10 +238,10 @@ public class IntegerFieldMapper extends NumberFieldMapper<Integer> {
        if (nullValue == null) {
            return null;
        }
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                nullValue,
                nullValue,
                true, true);
                true, true));
    }

    @Override
@ -25,7 +25,6 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -36,6 +35,7 @@ import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -195,8 +195,8 @@ public class LongFieldMapper extends NumberFieldMapper<Long> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        long iValue = parseLongValue(value);
        return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
                iValue, iValue, true, true);
        return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
                iValue, iValue, true, true));
    }

    @Override
@ -209,10 +209,10 @@ public class LongFieldMapper extends NumberFieldMapper<Long> {

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
                lowerTerm == null ? null : parseLongValue(lowerTerm),
                upperTerm == null ? null : parseLongValue(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
@ -228,10 +228,10 @@ public class LongFieldMapper extends NumberFieldMapper<Long> {
        if (nullValue == null) {
            return null;
        }
        return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
                nullValue,
                nullValue,
                true, true);
                true, true));
    }

    @Override
@ -25,7 +25,6 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@ -36,6 +35,7 @@ import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -219,24 +219,24 @@ public class ShortFieldMapper extends NumberFieldMapper<Short> {
    @Override
    public Filter termFilter(Object value, @Nullable QueryParseContext context) {
        int iValue = parseValueAsInt(value);
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
                iValue, iValue, true, true);
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                iValue, iValue, true, true));
    }

    @Override
    public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                lowerTerm == null ? null : parseValueAsInt(lowerTerm),
                upperTerm == null ? null : parseValueAsInt(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
    public Filter rangeFilter(QueryParseContext parseContext, Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
        return NumericRangeFieldDataFilter.newShortRange((IndexNumericFieldData) parseContext.getForField(this),
        return Queries.wrap(NumericRangeFieldDataFilter.newShortRange((IndexNumericFieldData) parseContext.getForField(this),
                lowerTerm == null ? null : parseValue(lowerTerm),
                upperTerm == null ? null : parseValue(upperTerm),
                includeLower, includeUpper);
                includeLower, includeUpper));
    }

    @Override
@ -244,10 +244,10 @@ public class ShortFieldMapper extends NumberFieldMapper<Short> {
        if (nullValue == null) {
            return null;
        }
        return NumericRangeFilter.newIntRange(names.indexName(), precisionStep,
        return Queries.wrap(NumericRangeQuery.newIntRange(names.indexName(), precisionStep,
                nullValue.intValue(),
                nullValue.intValue(),
                true, true);
                true, true));
    }

    @Override
@ -26,13 +26,12 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.RegexpQuery;
@ -42,8 +41,7 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.search.RegexpFilter;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
@ -202,7 +200,7 @@ public class IdFieldMapper extends AbstractFieldMapper<String> implements Intern
        if (fieldType.indexOptions() != IndexOptions.NONE || context == null) {
            return super.termFilter(value, context);
        }
        return new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), value));
        return Queries.wrap(new TermsQuery(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), value)));
    }

    @Override
@ -210,7 +208,7 @@ public class IdFieldMapper extends AbstractFieldMapper<String> implements Intern
        if (fieldType.indexOptions() != IndexOptions.NONE || context == null) {
            return super.termsFilter(values, context);
        }
        return new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), values));
        return Queries.wrap(new TermsQuery(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), values)));
    }

    @Override
@ -219,13 +217,6 @@ public class IdFieldMapper extends AbstractFieldMapper<String> implements Intern
            return super.prefixQuery(value, method, context);
        }
        Collection<String> queryTypes = context.queryTypes();
        if (queryTypes.size() == 1) {
            PrefixQuery prefixQuery = new PrefixQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(Iterables.getFirst(queryTypes, null), BytesRefs.toBytesRef(value))));
            if (method != null) {
                prefixQuery.setRewriteMethod(method);
            }
            return prefixQuery;
        }
        BooleanQuery query = new BooleanQuery();
        for (String queryType : queryTypes) {
            PrefixQuery prefixQuery = new PrefixQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(queryType, BytesRefs.toBytesRef(value))));
@ -243,14 +234,11 @@ public class IdFieldMapper extends AbstractFieldMapper<String> implements Intern
            return super.prefixFilter(value, context);
        }
        Collection<String> queryTypes = context.queryTypes();
        if (queryTypes.size() == 1) {
            return new PrefixFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(Iterables.getFirst(queryTypes, null), BytesRefs.toBytesRef(value))));
        }
        XBooleanFilter filter = new XBooleanFilter();
        BooleanQuery filter = new BooleanQuery();
        for (String queryType : queryTypes) {
            filter.add(new PrefixFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(queryType, BytesRefs.toBytesRef(value)))), BooleanClause.Occur.SHOULD);
            filter.add(new PrefixQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(queryType, BytesRefs.toBytesRef(value)))), BooleanClause.Occur.SHOULD);
        }
        return filter;
        return Queries.wrap(filter);
    }

    @Override
@ -284,16 +272,12 @@ public class IdFieldMapper extends AbstractFieldMapper<String> implements Intern
            return super.regexpFilter(value, flags, maxDeterminizedStates, context);
        }
        Collection<String> queryTypes = context.queryTypes();
        if (queryTypes.size() == 1) {
            return new RegexpFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(Iterables.getFirst(queryTypes, null), BytesRefs.toBytesRef(value))),
                    flags, maxDeterminizedStates);
        }
        XBooleanFilter filter = new XBooleanFilter();
        BooleanQuery filter = new BooleanQuery();
        for (String queryType : queryTypes) {
            filter.add(new RegexpFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(queryType, BytesRefs.toBytesRef(value))),
            filter.add(new RegexpQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(queryType, BytesRefs.toBytesRef(value))),
                    flags, maxDeterminizedStates), BooleanClause.Occur.SHOULD);
        }
        return filter;
        return Queries.wrap(filter);
    }

    @Override
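For the _uid lookups above, the TermsQuery that replaces TermsFilter takes the field name plus the term bytes and rewrites to an efficient disjunction. An illustrative standalone use (the uid values are made up):

    Filter byUid = new QueryWrapperFilter(new TermsQuery(UidFieldMapper.NAME,
            new BytesRef("my_type#1"), new BytesRef("my_type#2")));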
@@ -24,11 +24,11 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.Version;
import org.elasticsearch.common.Nullable;
@@ -275,7 +275,7 @@ public class ParentFieldMapper extends AbstractFieldMapper<Uid> implements Inter
}
BytesRef bValue = BytesRefs.toBytesRef(value);
if (Uid.hasDelimiter(bValue)) {
return new TermFilter(new Term(names.indexName(), bValue));
return Queries.wrap(new TermQuery(new Term(names.indexName(), bValue)));
}

List<String> types = new ArrayList<>(context.mapperService().types().size());
@@ -286,16 +286,16 @@ public class ParentFieldMapper extends AbstractFieldMapper<Uid> implements Inter
}

if (types.isEmpty()) {
return Queries.MATCH_NO_FILTER;
return Queries.newMatchNoDocsFilter();
} else if (types.size() == 1) {
return new TermFilter(new Term(names.indexName(), Uid.createUidAsBytes(types.get(0), bValue)));
return Queries.wrap(new TermQuery(new Term(names.indexName(), Uid.createUidAsBytes(types.get(0), bValue))));
} else {
// we use all non child types, cause we don't know if its exact or not...
List<BytesRef> typesValues = new ArrayList<>(types.size());
for (String type : context.mapperService().types()) {
typesValues.add(Uid.createUidAsBytes(type, bValue));
}
return new TermsFilter(names.indexName(), typesValues);
return Queries.wrap(new TermsQuery(names.indexName(), typesValues));
}
}

@@ -328,7 +328,7 @@ public class ParentFieldMapper extends AbstractFieldMapper<Uid> implements Inter
}
}
}
return new TermsFilter(names.indexName(), bValues);
return Queries.wrap(new TermsQuery(names.indexName(), bValues));
}

/**
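For the multi-type branches above, TermsQuery (org.apache.lucene.queries) accepts the whole list of encoded uid values at once, replacing TermsFilter one for one. A small sketch of the standalone equivalent, with a made-up field name and values:

    import java.util.Arrays;
    import java.util.List;

    import org.apache.lucene.queries.TermsQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.QueryWrapperFilter;
    import org.apache.lucene.util.BytesRef;

    class TermsFilterMigration {
        // Before: new TermsFilter(field, values)
        // After: a TermsQuery over the same values, wrapped back into a Filter.
        static Filter anyOf(String field, List<BytesRef> values) {
            return new QueryWrapperFilter(new TermsQuery(field, values));
        }
    }

    // e.g. anyOf("_parent", Arrays.asList(new BytesRef("type#1"), new BytesRef("type#2")))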
@@ -24,16 +24,17 @@ import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.SortedSetDocValuesField;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.Version;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.fielddata.FieldDataType;
@@ -138,9 +139,9 @@ public class TypeFieldMapper extends AbstractFieldMapper<String> implements Inte
@Override
public Filter termFilter(Object value, @Nullable QueryParseContext context) {
if (fieldType.indexOptions() == IndexOptions.NONE) {
return new PrefixFilter(new Term(UidFieldMapper.NAME, Uid.typePrefixAsBytes(BytesRefs.toBytesRef(value))));
return Queries.wrap(new PrefixQuery(new Term(UidFieldMapper.NAME, Uid.typePrefixAsBytes(BytesRefs.toBytesRef(value)))));
}
return new TermFilter(names().createIndexNameTerm(BytesRefs.toBytesRef(value)));
return Queries.wrap(new TermQuery(names().createIndexNameTerm(BytesRefs.toBytesRef(value))));
}

@Override
@@ -26,7 +26,6 @@ import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.NumericRangeFilter;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
@@ -37,6 +36,7 @@ import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Numbers;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.xcontent.XContentBuilder;
@@ -254,10 +254,10 @@ public class IpFieldMapper extends NumberFieldMapper<Long> {

@Override
public Filter rangeFilter(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable QueryParseContext context) {
return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
lowerTerm == null ? null : parseValue(lowerTerm),
upperTerm == null ? null : parseValue(upperTerm),
includeLower, includeUpper);
includeLower, includeUpper));
}

@Override
@@ -274,10 +274,10 @@ public class IpFieldMapper extends NumberFieldMapper<Long> {
return null;
}
final long value = ipToLong(nullValue);
return NumericRangeFilter.newLongRange(names.indexName(), precisionStep,
return Queries.wrap(NumericRangeQuery.newLongRange(names.indexName(), precisionStep,
value,
value,
true, true);
true, true));
}

@Override
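NumericRangeQuery keeps the exact factory signature NumericRangeFilter had, so the two IpFieldMapper hunks above are mechanical: same arguments, one extra wrap. A sketch with an invented field name and precision step:

    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.NumericRangeQuery;
    import org.apache.lucene.search.QueryWrapperFilter;

    class IpRangeMigration {
        private static final int PRECISION_STEP = 16; // assumption, not the mapper's actual setting

        // Before: NumericRangeFilter.newLongRange(field, step, from, to, true, true)
        // After: the query factory plus a wrap; null bounds mean an open end.
        static Filter ipRange(String field, Long fromIp, Long toIp) {
            return new QueryWrapperFilter(
                    NumericRangeQuery.newLongRange(field, PRECISION_STEP, fromIp, toIp, true, true));
        }
    }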
@@ -24,8 +24,8 @@ import com.google.common.collect.Iterables;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchIllegalStateException;
import org.elasticsearch.ElasticsearchParseException;
@@ -34,6 +34,7 @@ import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.CopyOnWriteHashMap;
import org.elasticsearch.common.joda.FormatDateTimeFormatter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
@@ -387,7 +388,7 @@ public class ObjectMapper implements Mapper, AllFieldMapper.IncludeInAll, Clonea
}
this.nestedTypePathAsString = "__" + fullPath;
this.nestedTypePathAsBytes = new BytesRef(nestedTypePathAsString);
this.nestedTypeFilter = new TermFilter(new Term(TypeFieldMapper.NAME, nestedTypePathAsBytes));
this.nestedTypeFilter = Queries.wrap(new TermQuery(new Term(TypeFieldMapper.NAME, nestedTypePathAsBytes)));
}

@Override
@@ -20,14 +20,15 @@
package org.elasticsearch.index.percolator;

import org.apache.lucene.index.Term;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.CloseableThreadLocal;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.common.xcontent.XContentBuilder;
@@ -49,8 +50,8 @@ import org.elasticsearch.index.query.QueryParseContext;
import org.elasticsearch.index.query.QueryParsingException;
import org.elasticsearch.index.settings.IndexSettings;
import org.elasticsearch.index.shard.AbstractIndexShardComponent;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.indices.IndicesLifecycle;
import org.elasticsearch.percolator.PercolatorService;

@@ -281,7 +282,7 @@ public class PercolatorQueriesRegistry extends AbstractIndexShardComponent imple
try (Engine.Searcher searcher = shard.acquireSearcher("percolator_load_queries", true)) {
Query query = new ConstantScoreQuery(
indexCache.filter().cache(
new TermFilter(new Term(TypeFieldMapper.NAME, PercolatorService.TYPE_NAME)),
Queries.wrap(new TermQuery(new Term(TypeFieldMapper.NAME, PercolatorService.TYPE_NAME))),
null,
queryParserService.autoFilterCachePolicy()
)
@@ -27,9 +27,9 @@ import java.util.ArrayList;

/**
* A filter that matches documents matching boolean combinations of other filters.
*
*
* @deprecated Use {@link BoolFilterBuilder} instead
*/
@Deprecated
public class AndFilterBuilder extends BaseFilterBuilder {

private ArrayList<FilterBuilder> filters = Lists.newArrayList();
@@ -19,11 +19,13 @@

package org.elasticsearch.index.query;

import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.AndFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
@@ -54,7 +56,7 @@ public class AndFilterParser implements FilterParser {
ArrayList<Filter> filters = newArrayList();
boolean filtersFound = false;

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;

String filterName = null;
@@ -114,7 +116,11 @@ public class AndFilterParser implements FilterParser {
}

// no need to cache this one
Filter filter = new AndFilter(filters);
BooleanQuery boolQuery = new BooleanQuery();
for (Filter filter : filters) {
    boolQuery.add(filter, Occur.MUST);
}
Filter filter = Queries.wrap(boolQuery);
if (cache != null) {
filter = parseContext.cacheFilter(filter, cacheKey, cache);
}
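This is where the deprecation of `and` (and, further down, `or`) becomes concrete: the parser no longer builds a dedicated AndFilter/OrFilter, it combines the parsed clauses into a plain BooleanQuery. A sketch of both combinations, assuming a couple of already-parsed filters (Filter extends Query in Lucene 5.x, so clauses can be added directly):

    import java.util.List;

    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.QueryWrapperFilter;

    class AndOrMigration {
        // `and` filter: every clause is a MUST.
        static Filter and(List<Filter> filters) {
            BooleanQuery bq = new BooleanQuery();
            for (Filter f : filters) {
                bq.add(f, Occur.MUST);
            }
            return new QueryWrapperFilter(bq);
        }

        // `or` filter: every clause is a SHOULD; with no required
        // clauses present, at least one SHOULD has to match.
        static Filter or(List<Filter> filters) {
            BooleanQuery bq = new BooleanQuery();
            for (Filter f : filters) {
                bq.add(f, Occur.SHOULD);
            }
            return new QueryWrapperFilter(bq);
        }
    }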
@@ -19,13 +19,13 @@

package org.elasticsearch.index.query;

import org.apache.lucene.queries.FilterClause;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
@@ -50,9 +50,9 @@ public class BoolFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

XBooleanFilter boolFilter = new XBooleanFilter();
BooleanQuery boolFilter = new BooleanQuery();

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;

String filterName = null;
@@ -69,19 +69,20 @@ public class BoolFilterParser implements FilterParser {
hasAnyFilter = true;
Filter filter = parseContext.parseInnerFilter();
if (filter != null) {
boolFilter.add(new FilterClause(filter, BooleanClause.Occur.MUST));
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.FILTER));
}
} else if ("must_not".equals(currentFieldName) || "mustNot".equals(currentFieldName)) {
hasAnyFilter = true;
Filter filter = parseContext.parseInnerFilter();
if (filter != null) {
boolFilter.add(new FilterClause(filter, BooleanClause.Occur.MUST_NOT));
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.MUST_NOT));
}
} else if ("should".equals(currentFieldName)) {
hasAnyFilter = true;
Filter filter = parseContext.parseInnerFilter();
if (filter != null) {
boolFilter.add(new FilterClause(filter, BooleanClause.Occur.SHOULD));
boolFilter.setMinimumNumberShouldMatch(1);
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.SHOULD));
}
} else {
throw new QueryParsingException(parseContext.index(), "[bool] filter does not support [" + currentFieldName + "]");
@@ -92,7 +93,7 @@ public class BoolFilterParser implements FilterParser {
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
Filter filter = parseContext.parseInnerFilter();
if (filter != null) {
boolFilter.add(new FilterClause(filter, BooleanClause.Occur.MUST));
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.MUST));
}
}
} else if ("must_not".equals(currentFieldName) || "mustNot".equals(currentFieldName)) {
@@ -100,7 +101,7 @@ public class BoolFilterParser implements FilterParser {
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
Filter filter = parseContext.parseInnerFilter();
if (filter != null) {
boolFilter.add(new FilterClause(filter, BooleanClause.Occur.MUST_NOT));
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.MUST_NOT));
}
}
} else if ("should".equals(currentFieldName)) {
@@ -108,7 +109,8 @@ public class BoolFilterParser implements FilterParser {
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
Filter filter = parseContext.parseInnerFilter();
if (filter != null) {
boolFilter.add(new FilterClause(filter, BooleanClause.Occur.SHOULD));
boolFilter.setMinimumNumberShouldMatch(1);
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.SHOULD));
}
}
} else {
@@ -136,7 +138,7 @@ public class BoolFilterParser implements FilterParser {
return null;
}

Filter filter = boolFilter;
Filter filter = Queries.wrap(boolFilter);
if (cache != null) {
filter = parseContext.cacheFilter(filter, cacheKey, cache);
}
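Two details of the rewritten BoolFilterParser are worth calling out: `must` clauses now use the new Occur.FILTER (the clause has to match, but contributes nothing to the score, which is all a filter needs), and every `should` clause is accompanied by setMinimumNumberShouldMatch(1), because once a BooleanQuery contains required clauses its SHOULD clauses become optional, unlike in XBooleanFilter. A condensed sketch with hypothetical pre-parsed clauses:

    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.QueryWrapperFilter;

    class BoolFilterMigration {
        static Filter bool(Query must, Query mustNot, Query should) {
            BooleanQuery bq = new BooleanQuery();
            // FILTER: required for matching, excluded from scoring.
            bq.add(new BooleanClause(must, BooleanClause.Occur.FILTER));
            bq.add(new BooleanClause(mustNot, BooleanClause.Occur.MUST_NOT));
            // With a required clause present, SHOULD is optional unless we insist.
            bq.setMinimumNumberShouldMatch(1);
            bq.add(new BooleanClause(should, BooleanClause.Occur.SHOULD));
            return new QueryWrapperFilter(bq);
        }
    }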
@@ -21,8 +21,8 @@ package org.elasticsearch.index.query;

import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
@@ -55,7 +55,7 @@ public class ConstantScoreQueryParser implements QueryParser {
Query query = null;
boolean queryFound = false;
float boost = 1.0f;
FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;

String currentFieldName = null;
@@ -19,16 +19,17 @@

package org.elasticsearch.index.query;

import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermRangeFilter;
import org.apache.lucene.search.TermRangeQuery;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.FieldMappers;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.internal.FieldNamesFieldMapper;
@@ -95,17 +96,17 @@ public class ExistsFilterParser implements FilterParser {
List<String> fields = parseContext.simpleMatchToIndexNames(fieldPattern);
if (fields.isEmpty()) {
// no fields exists, so we should not match anything
return Queries.MATCH_NO_FILTER;
return Queries.newMatchNoDocsFilter();
}
MapperService.SmartNameFieldMappers nonNullFieldMappers = null;

XBooleanFilter boolFilter = new XBooleanFilter();
BooleanQuery boolFilter = new BooleanQuery();
for (String field : fields) {
MapperService.SmartNameFieldMappers smartNameFieldMappers = parseContext.smartFieldMappers(field);
if (smartNameFieldMappers != null) {
nonNullFieldMappers = smartNameFieldMappers;
}
Filter filter = null;
Query filter = null;
if (fieldNamesMapper!= null && fieldNamesMapper.enabled()) {
final String f;
if (smartNameFieldMappers != null && smartNameFieldMappers.hasMapper()) {
@@ -120,14 +121,15 @@ public class ExistsFilterParser implements FilterParser {
filter = smartNameFieldMappers.mapper().rangeFilter(null, null, true, true, parseContext);
}
if (filter == null) {
filter = new TermRangeFilter(field, null, null, true, true);
filter = new TermRangeQuery(field, null, null, true, true);
}
boolFilter.add(filter, BooleanClause.Occur.SHOULD);
}

Filter filter = Queries.wrap(boolFilter);
// we always cache this one, really does not change... (exists)
// its ok to cache under the fieldName cacheKey, since its per segment and the mapping applies to this data on this segment...
Filter filter = parseContext.cacheFilter(boolFilter, new HashedBytesRef("$exists$" + fieldPattern), parseContext.autoFilterCachePolicy());
filter = parseContext.cacheFilter(filter, new HashedBytesRef("$exists$" + fieldPattern), parseContext.autoFilterCachePolicy());

if (filterName != null) {
parseContext.addNamedFilter(filterName, filter);
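The fallback in the exists filter is a fully open-ended term range: a TermRangeQuery with null bounds matches every document that has at least one indexed term in the field. A minimal sketch, with an invented field name:

    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermRangeQuery;

    class ExistsFallback {
        // null lower and upper bounds: any term in the field qualifies,
        // so a match means "this document has a value for the field".
        static Query exists(String field) {
            return new TermRangeQuery(field, null, null, true, true);
        }
    }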
@@ -20,8 +20,8 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.Queries;
@@ -52,7 +52,7 @@ public class FQueryFilterParser implements FilterParser {

Query query = null;
boolean queryFound = false;
FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;

String filterName = null;
@@ -524,10 +524,18 @@ public abstract class FilterBuilders {
return new BoolFilterBuilder();
}

/**
* @deprecated Use {@link #boolFilter()} instead
*/
@Deprecated
public static AndFilterBuilder andFilter(FilterBuilder... filters) {
return new AndFilterBuilder(filters);
}

/**
* @deprecated Use {@link #boolFilter()} instead
*/
@Deprecated
public static OrFilterBuilder orFilter(FilterBuilder... filters) {
return new OrFilterBuilder(filters);
}
@@ -19,20 +19,13 @@

package org.elasticsearch.index.query;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.FilteredQuery.FilterStrategy;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Weight;
import org.apache.lucene.util.Bits;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;

@@ -45,70 +38,6 @@ public class FilteredQueryParser implements QueryParser {

public static final String NAME = "filtered";

public static final FilterStrategy ALWAYS_RANDOM_ACCESS_FILTER_STRATEGY = new CustomRandomAccessFilterStrategy(0);

public static final CustomRandomAccessFilterStrategy CUSTOM_FILTER_STRATEGY = new CustomRandomAccessFilterStrategy();

/**
* Extends {@link org.apache.lucene.search.FilteredQuery.RandomAccessFilterStrategy}.
* <p/>
* Adds a threshold value, which defaults to -1. When set to -1, it will check if the filter docSet is
* *not* a fast docSet, and if not, it will use {@link FilteredQuery#QUERY_FIRST_FILTER_STRATEGY} (since
* the assumption is that its a "slow" filter and better computed only on whatever matched the query).
* <p/>
* If the threshold value is 0, it always tries to pass "down" the filter as acceptDocs, and it the filter
* can't be represented as Bits (never really), then it uses {@link FilteredQuery#LEAP_FROG_QUERY_FIRST_STRATEGY}.
* <p/>
* If the above conditions are not met, then it reverts to the {@link FilteredQuery.RandomAccessFilterStrategy} logic,
* with the threshold used to control {@link #useRandomAccess(org.apache.lucene.util.Bits, int)}.
*/
public static class CustomRandomAccessFilterStrategy extends FilteredQuery.RandomAccessFilterStrategy {

private final int threshold;

public CustomRandomAccessFilterStrategy() {
this.threshold = -1;
}

public CustomRandomAccessFilterStrategy(int threshold) {
this.threshold = threshold;
}

@Override
public Scorer filteredScorer(LeafReaderContext context, Weight weight, DocIdSet docIdSet) throws IOException {
// CHANGE: If threshold is 0, always pass down the accept docs, don't pay the price of calling nextDoc even...
final Bits filterAcceptDocs = docIdSet.bits();
if (threshold == 0) {
if (filterAcceptDocs != null) {
return weight.scorer(context, filterAcceptDocs);
} else {
return FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY.filteredScorer(context, weight, docIdSet);
}
}

// CHANGE: handle "default" value
if (threshold == -1) {
// default value, don't iterate on only apply filter after query if its not a "fast" docIdSet
// TODO: is there a way we could avoid creating an iterator here?
if (filterAcceptDocs != null && DocIdSets.isBroken(docIdSet.iterator())) {
return FilteredQuery.QUERY_FIRST_FILTER_STRATEGY.filteredScorer(context, weight, docIdSet);
}
}

return super.filteredScorer(context, weight, docIdSet);
}

@Override
protected boolean useRandomAccess(Bits bits, long filterCost) {
int multiplier = threshold;
if (threshold == -1) {
// default
multiplier = 100;
}
return filterCost * multiplier > bits.length();
}
}

@Inject
public FilteredQueryParser() {
}
@@ -126,13 +55,13 @@ public class FilteredQueryParser implements QueryParser {
Filter filter = null;
boolean filterFound = false;
float boost = 1.0f;
FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String queryName = null;

String currentFieldName = null;
XContentParser.Token token;
FilteredQuery.FilterStrategy filterStrategy = CUSTOM_FILTER_STRATEGY;
FilteredQuery.FilterStrategy filterStrategy = FilteredQuery.RANDOM_ACCESS_FILTER_STRATEGY;

while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
@@ -152,15 +81,13 @@ public class FilteredQueryParser implements QueryParser {
if ("query_first".equals(value) || "queryFirst".equals(value)) {
filterStrategy = FilteredQuery.QUERY_FIRST_FILTER_STRATEGY;
} else if ("random_access_always".equals(value) || "randomAccessAlways".equals(value)) {
filterStrategy = ALWAYS_RANDOM_ACCESS_FILTER_STRATEGY;
filterStrategy = FilteredQuery.RANDOM_ACCESS_FILTER_STRATEGY;
} else if ("leap_frog".equals(value) || "leapFrog".equals(value)) {
filterStrategy = FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY;
} else if (value.startsWith("random_access_")) {
int threshold = Integer.parseInt(value.substring("random_access_".length()));
filterStrategy = new CustomRandomAccessFilterStrategy(threshold);
filterStrategy = FilteredQuery.RANDOM_ACCESS_FILTER_STRATEGY;
} else if (value.startsWith("randomAccess")) {
int threshold = Integer.parseInt(value.substring("randomAccess".length()));
filterStrategy = new CustomRandomAccessFilterStrategy(threshold);
filterStrategy = FilteredQuery.RANDOM_ACCESS_FILTER_STRATEGY;
} else if ("leap_frog_query_first".equals(value) || "leapFrogQueryFirst".equals(value)) {
filterStrategy = FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY;
} else if ("leap_frog_filter_first".equals(value) || "leapFrogFilterFirst".equals(value)) {
@@ -197,7 +124,7 @@ public class FilteredQueryParser implements QueryParser {
return query;
}
}
if (filter == Queries.MATCH_ALL_FILTER) {
if (Queries.isConstantMatchAllQuery(filter)) {
// this is an instance of match all filter, just execute the query
return query;
}
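All of the CustomRandomAccessFilterStrategy machinery deleted above existed to avoid random access into "broken" (slow, iterator-only) DocIdSets; with Lucene 5.1's two-phase iteration the scorer itself exposes a cheap approximation, so the stock RANDOM_ACCESS_FILTER_STRATEGY is enough and the DocIdSets.isBroken probe disappears. What remains of the parser's strategy handling reduces to something like this sketch (the constants are the Lucene 5.1 ones used in the hunk; the surrounding method is hypothetical):

    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.FilteredQuery;
    import org.apache.lucene.search.Query;

    class StrategySelection {
        static Query filtered(Query query, Filter filter, String strategy) {
            FilteredQuery.FilterStrategy fs = FilteredQuery.RANDOM_ACCESS_FILTER_STRATEGY; // new default
            if ("query_first".equals(strategy)) {
                fs = FilteredQuery.QUERY_FIRST_FILTER_STRATEGY;
            } else if ("leap_frog".equals(strategy)) {
                fs = FilteredQuery.LEAP_FROG_QUERY_FIRST_STRATEGY;
            }
            return new FilteredQuery(query, filter, fs);
        }
    }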
@@ -20,7 +20,7 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.geo.GeoUtils;
@@ -72,7 +72,7 @@ public class GeoBoundingBoxFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String fieldName = null;

@@ -20,7 +20,7 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.geo.GeoDistance;
import org.elasticsearch.common.geo.GeoHashUtils;
import org.elasticsearch.common.geo.GeoPoint;
@@ -64,7 +64,7 @@ public class GeoDistanceFilterParser implements FilterParser {

XContentParser.Token token;

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String filterName = null;
String currentFieldName = null;
@@ -20,7 +20,7 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.geo.GeoDistance;
import org.elasticsearch.common.geo.GeoHashUtils;
import org.elasticsearch.common.geo.GeoPoint;
@@ -64,7 +64,7 @@ public class GeoDistanceRangeFilterParser implements FilterParser {

XContentParser.Token token;

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String filterName = null;
String currentFieldName = null;
@@ -22,7 +22,7 @@ package org.elasticsearch.index.query;
import com.google.common.collect.Lists;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.geo.GeoUtils;
import org.elasticsearch.common.inject.Inject;
@@ -68,7 +68,7 @@ public class GeoPolygonFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String fieldName = null;
@@ -22,8 +22,9 @@ package org.elasticsearch.index.query;
import com.spatial4j.core.shape.Shape;

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.spatial.prefix.PrefixTreeStrategy;
import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;
import org.elasticsearch.common.geo.ShapeRelation;
@@ -31,7 +32,7 @@ import org.elasticsearch.common.geo.builders.ShapeBuilder;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.inject.internal.Nullable;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MapperService;
@@ -84,7 +85,7 @@ public class GeoShapeFilterParser implements FilterParser {
ShapeRelation shapeRelation = ShapeRelation.INTERSECTS;
String strategyName = null;
ShapeBuilder shape = null;
FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String filterName = null;

@@ -183,12 +184,12 @@ public class GeoShapeFilterParser implements FilterParser {
if (strategy instanceof RecursivePrefixTreeStrategy && shapeRelation == ShapeRelation.DISJOINT) {
// this strategy doesn't support disjoint anymore: but it did before, including creating lucene fieldcache (!)
// in this case, execute disjoint as exists && !intersects
XBooleanFilter bool = new XBooleanFilter();
BooleanQuery bool = new BooleanQuery();
Filter exists = ExistsFilterParser.newFilter(parseContext, fieldName, null);
Filter intersects = strategy.makeFilter(GeoShapeQueryParser.getArgs(shape, ShapeRelation.INTERSECTS));
bool.add(exists, BooleanClause.Occur.MUST);
bool.add(intersects, BooleanClause.Occur.MUST_NOT);
filter = bool;
filter = Queries.wrap(bool);
} else {
filter = strategy.makeFilter(GeoShapeQueryParser.getArgs(shape, shapeRelation));
}
@@ -20,6 +20,7 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
@@ -33,7 +34,6 @@ import org.elasticsearch.common.Strings;
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.common.geo.builders.ShapeBuilder;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MapperService;
@@ -161,7 +161,7 @@ public class GeoShapeQueryParser implements QueryParser {
if (strategy instanceof RecursivePrefixTreeStrategy && shapeRelation == ShapeRelation.DISJOINT) {
// this strategy doesn't support disjoint anymore: but it did before, including creating lucene fieldcache (!)
// in this case, execute disjoint as exists && !intersects
XBooleanFilter bool = new XBooleanFilter();
BooleanQuery bool = new BooleanQuery();
Filter exists = ExistsFilterParser.newFilter(parseContext, fieldName, null);
Filter intersects = strategy.makeFilter(getArgs(shape, ShapeRelation.INTERSECTS));
bool.add(exists, BooleanClause.Occur.MUST);
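The disjoint rewrite in both geo shape parsers is a compact example of expressing set difference with the new building blocks: documents that have the field (MUST) minus documents whose shape intersects (MUST_NOT). A standalone sketch with placeholder filters:

    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.QueryWrapperFilter;

    class DisjointMigration {
        // disjoint == exists && !intersects
        static Filter disjoint(Filter exists, Filter intersects) {
            BooleanQuery bool = new BooleanQuery();
            bool.add(exists, Occur.MUST);
            bool.add(intersects, Occur.MUST_NOT);
            return new QueryWrapperFilter(bool);
        }
    }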
@@ -20,7 +20,7 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.Nullable;
@@ -215,7 +215,7 @@ public class GeohashCellFilter {
String geohash = null;
int levels = -1;
boolean neighbors = false;
FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
@@ -19,14 +19,15 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.search.NotFilter;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;
import org.elasticsearch.index.mapper.DocumentMapper;
@@ -181,14 +182,14 @@ public class HasParentQueryParser implements QueryParser {
parentFilter = documentMapper.typeFilter();
}
} else {
XBooleanFilter parentsFilter = new XBooleanFilter();
BooleanQuery parentsFilter = new BooleanQuery();
for (String parentTypeStr : parentTypes) {
DocumentMapper documentMapper = parseContext.mapperService().documentMapper(parentTypeStr);
if (documentMapper != null) {
parentsFilter.add(documentMapper.typeFilter(), BooleanClause.Occur.SHOULD);
}
}
parentFilter = parentsFilter;
parentFilter = Queries.wrap(parentsFilter);
}

if (parentFilter == null) {
@@ -197,7 +198,7 @@ public class HasParentQueryParser implements QueryParser {

// wrap the query with type query
innerQuery = new FilteredQuery(innerQuery, parseContext.cacheFilter(parentDocMapper.typeFilter(), null, parseContext.autoFilterCachePolicy()));
Filter childrenFilter = parseContext.cacheFilter(new NotFilter(parentFilter), null, parseContext.autoFilterCachePolicy());
Filter childrenFilter = parseContext.cacheFilter(Queries.wrap(Queries.not(parentFilter)), null, parseContext.autoFilterCachePolicy());
if (score) {
return new ParentQuery(parentChildIndexFieldData, innerQuery, parentDocMapper.type(), childrenFilter);
} else {
@@ -21,7 +21,8 @@ package org.elasticsearch.index.query;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import org.apache.lucene.queries.TermsFilter;

import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.inject.Inject;
@@ -99,7 +100,7 @@ public class IdsFilterParser implements FilterParser {
}

if (ids.isEmpty()) {
return Queries.MATCH_NO_FILTER;
return Queries.newMatchNoDocsFilter();
}

if (types == null || types.isEmpty()) {
@@ -108,7 +109,7 @@ public class IdsFilterParser implements FilterParser {
types = parseContext.mapperService().types();
}

TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));
Filter filter = Queries.wrap(new TermsQuery(UidFieldMapper.NAME, Uid.createTypeUids(types, ids)));
if (filterName != null) {
parseContext.addNamedFilter(filterName, filter);
}
@@ -21,8 +21,8 @@ package org.elasticsearch.index.query;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.search.ConstantScoreQuery;

import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.inject.Inject;
@@ -121,9 +121,7 @@ public class IdsQueryParser implements QueryParser {
types = parseContext.mapperService().types();
}

TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));
// no need for constant score filter, since we don't cache the filter, and it always takes deletes into account
ConstantScoreQuery query = new ConstantScoreQuery(filter);
TermsQuery query = new TermsQuery(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));
query.setBoost(boost);
if (queryName != null) {
parseContext.addNamedQuery(queryName, query);
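Since TermsQuery already rewrites to a constant-score query on its own, the ids query drops the ConstantScoreQuery wrapper entirely and sets the boost on the TermsQuery directly. A sketch with invented uid values ("_uid" is the field the uid mappers above encode type#id pairs into):

    import org.apache.lucene.queries.TermsQuery;
    import org.apache.lucene.util.BytesRef;

    class IdsQuerySketch {
        static TermsQuery idsQuery(float boost, BytesRef... encodedUids) {
            TermsQuery query = new TermsQuery("_uid", encodedUids);
            query.setBoost(boost); // scores stay constant, just scaled
            return query;
        }
    }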
@@ -22,8 +22,8 @@ package org.elasticsearch.index.query;
import com.google.common.collect.ImmutableMap;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.util.CloseableThreadLocal;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.Version;
@@ -94,7 +94,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {

final BitsetFilterCache bitsetFilterCache;

final FilterCachingPolicy autoFilterCachePolicy;
final QueryCachingPolicy autoFilterCachePolicy;

private final Map<String, QueryParser> queryParsers;

@@ -111,7 +111,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
ScriptService scriptService, AnalysisService analysisService,
MapperService mapperService, IndexCache indexCache, IndexFieldDataService fieldDataService,
BitsetFilterCache bitsetFilterCache,
FilterCachingPolicy autoFilterCachePolicy,
QueryCachingPolicy autoFilterCachePolicy,
@Nullable SimilarityService similarityService,
@Nullable Map<String, QueryParserFactory> namedQueryParsers,
@Nullable Map<String, FilterParserFactory> namedFilterParsers) {
@@ -185,7 +185,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
return this.defaultField;
}

public FilterCachingPolicy autoFilterCachePolicy() {
public QueryCachingPolicy autoFilterCachePolicy() {
return autoFilterCachePolicy;
}
@@ -56,7 +56,7 @@ public class IndicesFilterParser implements FilterParser {
XContentParser parser = parseContext.parser();

Filter filter = null;
Filter noMatchFilter = Queries.MATCH_ALL_FILTER;
Filter noMatchFilter = Queries.newMatchAllFilter();
boolean filterFound = false;
boolean indicesFound = false;
boolean currentIndexMatchesIndices = false;
@@ -113,9 +113,9 @@ public class IndicesFilterParser implements FilterParser {
} else if ("no_match_filter".equals(currentFieldName)) {
String type = parser.text();
if ("all".equals(type)) {
noMatchFilter = Queries.MATCH_ALL_FILTER;
noMatchFilter = Queries.newMatchAllFilter();
} else if ("none".equals(type)) {
noMatchFilter = Queries.MATCH_NO_FILTER;
noMatchFilter = Queries.newMatchNoDocsFilter();
}
} else if ("_name".equals(currentFieldName)) {
filterName = parser.text();
@@ -63,6 +63,6 @@ public class LimitFilterParser implements FilterParser {
}

// this filter is deprecated and parses to a filter that matches everything
return Queries.MATCH_ALL_FILTER;
return Queries.newMatchAllFilter();
}
}
@@ -51,6 +51,6 @@ public class MatchAllFilterParser implements FilterParser {
while (((token = parser.nextToken()) != XContentParser.Token.END_OBJECT && token != XContentParser.Token.END_ARRAY)) {
}
return Queries.MATCH_ALL_FILTER;
return Queries.newMatchAllFilter();
}
}
@@ -19,15 +19,15 @@

package org.elasticsearch.index.query;

import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.TermRangeFilter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermRangeQuery;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.NotFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.FieldMappers;
import org.elasticsearch.index.mapper.MapperService;
@@ -107,7 +107,7 @@ public class MissingFilterParser implements FilterParser {
if (fields.isEmpty()) {
if (existence) {
// if we ask for existence of fields, and we found none, then we should match on all
return Queries.MATCH_ALL_FILTER;
return Queries.newMatchAllFilter();
}
return null;
}
@@ -118,13 +118,13 @@ public class MissingFilterParser implements FilterParser {
MapperService.SmartNameFieldMappers nonNullFieldMappers = null;

if (existence) {
XBooleanFilter boolFilter = new XBooleanFilter();
BooleanQuery boolFilter = new BooleanQuery();
for (String field : fields) {
MapperService.SmartNameFieldMappers smartNameFieldMappers = parseContext.smartFieldMappers(field);
if (smartNameFieldMappers != null) {
nonNullFieldMappers = smartNameFieldMappers;
}
Filter filter = null;
Query filter = null;
if (fieldNamesMapper != null && fieldNamesMapper.enabled()) {
final String f;
if (smartNameFieldMappers != null && smartNameFieldMappers.hasMapper()) {
@@ -139,15 +139,16 @@ public class MissingFilterParser implements FilterParser {
filter = smartNameFieldMappers.mapper().rangeFilter(null, null, true, true, parseContext);
}
if (filter == null) {
filter = new TermRangeFilter(field, null, null, true, true);
filter = new TermRangeQuery(field, null, null, true, true);
}
boolFilter.add(filter, BooleanClause.Occur.SHOULD);
}

// we always cache this one, really does not change... (exists)
// its ok to cache under the fieldName cacheKey, since its per segment and the mapping applies to this data on this segment...
existenceFilter = parseContext.cacheFilter(boolFilter, new HashedBytesRef("$exists$" + fieldPattern), parseContext.autoFilterCachePolicy());
existenceFilter = new NotFilter(existenceFilter);
existenceFilter = Queries.wrap(boolFilter);
existenceFilter = parseContext.cacheFilter(existenceFilter, new HashedBytesRef("$exists$" + fieldPattern), parseContext.autoFilterCachePolicy());
existenceFilter = Queries.wrap(Queries.not(existenceFilter));
// cache the not filter as well, so it will be faster
existenceFilter = parseContext.cacheFilter(existenceFilter, new HashedBytesRef("$missing$" + fieldPattern), parseContext.autoFilterCachePolicy());
}
@@ -168,11 +169,11 @@ public class MissingFilterParser implements FilterParser {
Filter filter;
if (nullFilter != null) {
if (existenceFilter != null) {
XBooleanFilter combined = new XBooleanFilter();
BooleanQuery combined = new BooleanQuery();
combined.add(existenceFilter, BooleanClause.Occur.SHOULD);
combined.add(nullFilter, BooleanClause.Occur.SHOULD);
// cache the not filter as well, so it will be faster
filter = parseContext.cacheFilter(combined, null, parseContext.autoFilterCachePolicy());
filter = parseContext.cacheFilter(Queries.wrap(combined), null, parseContext.autoFilterCachePolicy());
} else {
filter = nullFilter;
}
@@ -21,11 +21,11 @@ package org.elasticsearch.index.query;

import com.google.common.collect.Lists;
import com.google.common.collect.Sets;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
@@ -350,8 +350,7 @@ public class MoreLikeThisQueryParser implements QueryParser {
uids.add(createUidAsBytes(item.type(), item.id()));
}
if (!uids.isEmpty()) {
TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, uids.toArray(new BytesRef[0]));
ConstantScoreQuery query = new ConstantScoreQuery(filter);
TermsQuery query = new TermsQuery(UidFieldMapper.NAME, uids.toArray(new BytesRef[0]));
boolQuery.add(query, BooleanClause.Occur.MUST_NOT);
}
}
@@ -20,9 +20,10 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.QueryWrapperFilter;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.NotFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
@@ -92,7 +93,7 @@ public class NotFilterParser implements FilterParser {
return null;
}

Filter notFilter = new NotFilter(filter);
Filter notFilter = Queries.wrap(Queries.not(filter));
if (cache) {
notFilter = parseContext.cacheFilter(notFilter, cacheKey, parseContext.autoFilterCachePolicy());
}
@@ -27,9 +27,9 @@ import java.util.ArrayList;

/**
* A filter that matches documents matching boolean combinations of other filters.
*
*
* @deprecated Use {@link BoolFilterBuilder} instead
*/
@Deprecated
public class OrFilterBuilder extends BaseFilterBuilder {

private ArrayList<FilterBuilder> filters = Lists.newArrayList();
@@ -19,11 +19,13 @@

package org.elasticsearch.index.query;

import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.OrFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
@@ -54,7 +56,7 @@ public class OrFilterParser implements FilterParser {
ArrayList<Filter> filters = newArrayList();
boolean filtersFound = false;

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;

String filterName = null;
@@ -113,7 +115,11 @@ public class OrFilterParser implements FilterParser {
}

// no need to cache this one
Filter filter = new OrFilter(filters);
BooleanQuery boolQuery = new BooleanQuery();
for (Filter filter : filters) {
    boolQuery.add(filter, Occur.SHOULD);
}
Filter filter = Queries.wrap(boolQuery);
if (cache != null) {
filter = parseContext.cacheFilter(filter, cacheKey, cache);
}
@@ -21,11 +21,12 @@ package org.elasticsearch.index.query;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.QueryCachingPolicy;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.MapperService;

@@ -51,7 +52,7 @@ public class PrefixFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String fieldName = null;
Object value = null;
@@ -87,7 +88,7 @@ public class PrefixFilterParser implements FilterParser {
filter = smartNameFieldMappers.mapper().prefixFilter(value, parseContext);
}
if (filter == null) {
filter = new PrefixFilter(new Term(fieldName, BytesRefs.toBytesRef(value)));
filter = Queries.wrap(new PrefixQuery(new Term(fieldName, BytesRefs.toBytesRef(value))));
}

if (cache != null) {
@@ -27,8 +27,8 @@ import org.apache.lucene.queryparser.classic.MapperQueryParser;
import org.apache.lucene.queryparser.classic.QueryParserSettings;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.join.BitDocIdSetFilter;
import org.apache.lucene.search.similarities.Similarity;
import org.apache.lucene.util.Bits;
@@ -190,11 +190,11 @@ public class QueryParseContext {
return indexQueryParser.defaultField();
}

public FilterCachingPolicy autoFilterCachePolicy() {
public QueryCachingPolicy autoFilterCachePolicy() {
return indexQueryParser.autoFilterCachePolicy();
}

public FilterCachingPolicy parseFilterCachePolicy() throws IOException {
public QueryCachingPolicy parseFilterCachePolicy() throws IOException {
final String text = parser.textOrNull();
if (text == null || text.equals("auto")) {
return autoFilterCachePolicy();
@@ -202,7 +202,7 @@ public class QueryParseContext {
// cache without conditions on how many times the filter has been
// used or what the produced DocIdSet looks like, but ONLY on large
// segments to not pollute the cache
return FilterCachingPolicy.CacheOnLargeSegments.DEFAULT;
return QueryCachingPolicy.CacheOnLargeSegments.DEFAULT;
} else {
return null;
}
@@ -221,7 +221,7 @@ public class QueryParseContext {
return indexQueryParser.bitsetFilterCache.getBitDocIdSetFilter(filter);
}

public Filter cacheFilter(Filter filter, final @Nullable HashedBytesRef cacheKey, final FilterCachingPolicy cachePolicy) {
public Filter cacheFilter(Filter filter, final @Nullable HashedBytesRef cacheKey, final QueryCachingPolicy cachePolicy) {
if (filter == null) {
return null;
}
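The rename from FilterCachingPolicy to QueryCachingPolicy is mostly mechanical, but parseFilterCachePolicy shows the three cases the DSL exposes: `auto` (the injected default policy), an unconditional cache on large segments, and no caching at all. A reduced sketch of that decision, using only the constants visible in the hunk above (the `"true"` trigger for the middle branch is an assumption, since the condition itself is outside the hunk):

    import org.apache.lucene.search.QueryCachingPolicy;

    class CachePolicySelection {
        // "auto" -> the node's default policy; presumably "true" -> always
        // cache, but only on large segments; otherwise -> null, i.e. do not
        // cache this filter.
        static QueryCachingPolicy select(String text, QueryCachingPolicy autoPolicy) {
            if (text == null || "auto".equals(text)) {
                return autoPolicy;
            } else if ("true".equals(text)) { // assumed trigger, elided in the hunk
                return QueryCachingPolicy.CacheOnLargeSegments.DEFAULT;
            }
            return null;
        }
    }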
@@ -20,13 +20,14 @@
package org.elasticsearch.index.query;

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.TermRangeFilter;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.TermRangeQuery;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.joda.DateMathParser;
import org.elasticsearch.common.joda.Joda;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MapperService;
@@ -56,7 +57,7 @@ public class RangeFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String fieldName = null;
Object from = null;
@@ -167,7 +168,7 @@ public class RangeFilterParser implements FilterParser {
}

if (filter == null) {
filter = new TermRangeFilter(fieldName, BytesRefs.toBytesRef(from), BytesRefs.toBytesRef(to), includeLower, includeUpper);
filter = Queries.wrap(new TermRangeQuery(fieldName, BytesRefs.toBytesRef(from), BytesRefs.toBytesRef(to), includeLower, includeUpper));
}

if (cache != null) {
@@ -21,12 +21,13 @@ package org.elasticsearch.index.query;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilterCachingPolicy;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.RegexpQuery;
import org.apache.lucene.util.automaton.Operations;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.lucene.search.RegexpFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.MapperService;

@@ -52,7 +53,7 @@ public class RegexpFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String fieldName = null;
String secondaryFieldName = null;
@@ -117,7 +118,7 @@ public class RegexpFilterParser implements FilterParser {
filter = smartNameFieldMappers.mapper().regexpFilter(value, flagsValue, maxDeterminizedStates, parseContext);
}
if (filter == null) {
filter = new RegexpFilter(new Term(fieldName, BytesRefs.toBytesRef(value)), flagsValue, maxDeterminizedStates);
filter = Queries.wrap(new RegexpQuery(new Term(fieldName, BytesRefs.toBytesRef(value)), flagsValue, maxDeterminizedStates));
}

if (cache != null) {
@ -24,15 +24,16 @@ import org.apache.lucene.search.BitsFilteredDocIdSet;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocValuesDocIdSet;
import org.apache.lucene.search.Filter;
-import org.apache.lucene.search.FilterCachingPolicy;
+import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.util.Bits;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.script.LeafSearchScript;
import org.elasticsearch.script.ScriptContext;
import org.elasticsearch.script.ScriptParameterParser;
import org.elasticsearch.script.*;
import org.elasticsearch.script.ScriptParameterParser.ScriptParameterValue;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.script.SearchScript;
@ -66,7 +67,7 @@ public class ScriptFilterParser implements FilterParser {

XContentParser.Token token;

-FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
+QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
// also, when caching, since its isCacheable is false, will result in loading all bit set...
String script = null;

@ -20,12 +20,13 @@
package org.elasticsearch.index.query;

import org.apache.lucene.index.Term;
-import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.Filter;
-import org.apache.lucene.search.FilterCachingPolicy;
+import org.apache.lucene.search.QueryCachingPolicy;
+import org.apache.lucene.search.TermQuery;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.HashedBytesRef;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.MapperService;

@ -51,7 +52,7 @@ public class TermFilterParser implements FilterParser {
public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {
XContentParser parser = parseContext.parser();

-FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
+QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
HashedBytesRef cacheKey = null;
String fieldName = null;
Object value = null;
@ -112,7 +113,7 @@ public class TermFilterParser implements FilterParser {
filter = smartNameFieldMappers.mapper().termFilter(value, parseContext);
}
if (filter == null) {
-filter = new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(value)));
+filter = Queries.wrap(new TermQuery(new Term(fieldName, BytesRefs.toBytesRef(value))));
}

if (cache != null) {

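For readers following along outside the Elasticsearch codebase: the term-filter replacement in the parser above can be reproduced against the stock Lucene 5.x API. A minimal sketch, assuming Queries.wrap delegates to Lucene's QueryWrapperFilter (the class and method names below are illustrative, not part of the commit):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.QueryWrapperFilter;
    import org.apache.lucene.search.TermQuery;

    public class TermFilterSketch {
        // Adapt a TermQuery to the legacy Filter interface; the wrapped query
        // decides which documents match, the wrapper strips scoring.
        public static Filter termFilter(String field, String value) {
            return new QueryWrapperFilter(new TermQuery(new Term(field, value)));
        }
    }
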
@ -118,7 +118,9 @@ public class TermsFilterBuilder extends BaseFilterBuilder {
/**
 * Sets the execution mode for the terms filter. Can be either "plain", "bool"
 * "and". Defaults to "plain".
+ * @deprecated elasticsearch now makes better decisions on its own
 */
+@Deprecated
public TermsFilterBuilder execution(String execution) {
this.execution = execution;
return this;

@ -21,12 +21,9 @@ package org.elasticsearch.index.query;

import com.google.common.collect.Lists;

import org.apache.lucene.index.Term;
-import org.apache.lucene.queries.TermFilter;
-import org.apache.lucene.queries.TermsFilter;
-import org.apache.lucene.search.BooleanClause;
+import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.Filter;
-import org.apache.lucene.search.FilterCachingPolicy;
+import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
@ -34,10 +31,7 @@ import org.elasticsearch.client.Client;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.BytesRefs;
import org.elasticsearch.common.lucene.HashedBytesRef;
-import org.elasticsearch.common.lucene.search.AndFilter;
-import org.elasticsearch.common.lucene.search.OrFilter;
import org.elasticsearch.common.lucene.search.Queries;
-import org.elasticsearch.common.lucene.search.XBooleanFilter;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.index.mapper.FieldMapper;
@ -55,15 +49,8 @@ public class TermsFilterParser implements FilterParser {
public static final String NAME = "terms";
private Client client;

+@Deprecated
public static final String EXECUTION_KEY = "execution";
-public static final String EXECUTION_VALUE_PLAIN = "plain";
-public static final String EXECUTION_VALUE_FIELDDATA = "fielddata";
-public static final String EXECUTION_VALUE_BOOL = "bool";
-public static final String EXECUTION_VALUE_BOOL_NOCACHE = "bool_nocache";
-public static final String EXECUTION_VALUE_AND = "and";
-public static final String EXECUTION_VALUE_AND_NOCACHE = "and_nocache";
-public static final String EXECUTION_VALUE_OR = "or";
-public static final String EXECUTION_VALUE_OR_NOCACHE = "or_nocache";

@Inject
public TermsFilterParser() {
@ -84,7 +71,7 @@ public class TermsFilterParser implements FilterParser {
XContentParser parser = parseContext.parser();

MapperService.SmartNameFieldMappers smartNameFieldMappers;
-FilterCachingPolicy cache = parseContext.autoFilterCachePolicy();
+QueryCachingPolicy cache = parseContext.autoFilterCachePolicy();
String filterName = null;
String currentFieldName = null;

@ -96,7 +83,6 @@ public class TermsFilterParser implements FilterParser {

HashedBytesRef cacheKey = null;
XContentParser.Token token;
-String execution = EXECUTION_VALUE_PLAIN;
List<Object> terms = Lists.newArrayList();
String fieldName = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
@ -147,7 +133,7 @@ public class TermsFilterParser implements FilterParser {
}
} else if (token.isValue()) {
if (EXECUTION_KEY.equals(currentFieldName)) {
-execution = parser.text();
+// ignore
} else if ("_name".equals(currentFieldName)) {
filterName = parser.text();
} else if ("_cache".equals(currentFieldName)) {
@ -183,111 +169,27 @@ public class TermsFilterParser implements FilterParser {
}

if (terms.isEmpty()) {
-return Queries.MATCH_NO_FILTER;
+return Queries.newMatchNoDocsFilter();
}

-Filter filter;
-if (EXECUTION_VALUE_PLAIN.equals(execution)) {
-if (fieldMapper != null) {
-filter = fieldMapper.termsFilter(terms, parseContext);
-} else {
-BytesRef[] filterValues = new BytesRef[terms.size()];
-for (int i = 0; i < filterValues.length; i++) {
-filterValues[i] = BytesRefs.toBytesRef(terms.get(i));
-}
-filter = new TermsFilter(fieldName, filterValues);
-}
-} else if (EXECUTION_VALUE_FIELDDATA.equals(execution)) {
-// if there are no mappings, then nothing has been indexing yet against this shard, so we can return
-// no match (but not cached!), since the FieldDataTermsFilter relies on a mapping...
-if (fieldMapper == null) {
-return Queries.MATCH_NO_FILTER;
-}
-
-filter = fieldMapper.fieldDataTermsFilter(terms, parseContext);
-} else if (EXECUTION_VALUE_BOOL.equals(execution)) {
-XBooleanFilter boolFiler = new XBooleanFilter();
-if (fieldMapper != null) {
-for (Object term : terms) {
-boolFiler.add(parseContext.cacheFilter(fieldMapper.termFilter(term, parseContext), null, parseContext.autoFilterCachePolicy()), BooleanClause.Occur.SHOULD);
-}
-} else {
-for (Object term : terms) {
-boolFiler.add(parseContext.cacheFilter(new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(term))), null, parseContext.autoFilterCachePolicy()), BooleanClause.Occur.SHOULD);
-}
-}
-filter = boolFiler;
-} else if (EXECUTION_VALUE_BOOL_NOCACHE.equals(execution)) {
-XBooleanFilter boolFiler = new XBooleanFilter();
-if (fieldMapper != null) {
-for (Object term : terms) {
-boolFiler.add(fieldMapper.termFilter(term, parseContext), BooleanClause.Occur.SHOULD);
-}
-} else {
-for (Object term : terms) {
-boolFiler.add(new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(term))), BooleanClause.Occur.SHOULD);
-}
-}
-filter = boolFiler;
-} else if (EXECUTION_VALUE_AND.equals(execution)) {
-List<Filter> filters = Lists.newArrayList();
-if (fieldMapper != null) {
-for (Object term : terms) {
-filters.add(parseContext.cacheFilter(fieldMapper.termFilter(term, parseContext), null, parseContext.autoFilterCachePolicy()));
-}
-} else {
-for (Object term : terms) {
-filters.add(parseContext.cacheFilter(new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(term))), null, parseContext.autoFilterCachePolicy()));
-}
-}
-filter = new AndFilter(filters);
-} else if (EXECUTION_VALUE_AND_NOCACHE.equals(execution)) {
-List<Filter> filters = Lists.newArrayList();
-if (fieldMapper != null) {
-for (Object term : terms) {
-filters.add(fieldMapper.termFilter(term, parseContext));
-}
-} else {
-for (Object term : terms) {
-filters.add(new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(term))));
-}
-}
-filter = new AndFilter(filters);
-} else if (EXECUTION_VALUE_OR.equals(execution)) {
-List<Filter> filters = Lists.newArrayList();
-if (fieldMapper != null) {
-for (Object term : terms) {
-filters.add(parseContext.cacheFilter(fieldMapper.termFilter(term, parseContext), null, parseContext.autoFilterCachePolicy()));
-}
-} else {
-for (Object term : terms) {
-filters.add(parseContext.cacheFilter(new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(term))), null, parseContext.autoFilterCachePolicy()));
-}
-}
-filter = new OrFilter(filters);
-} else if (EXECUTION_VALUE_OR_NOCACHE.equals(execution)) {
-List<Filter> filters = Lists.newArrayList();
-if (fieldMapper != null) {
-for (Object term : terms) {
-filters.add(fieldMapper.termFilter(term, parseContext));
-}
-} else {
-for (Object term : terms) {
-filters.add(new TermFilter(new Term(fieldName, BytesRefs.toBytesRef(term))));
-}
-}
-filter = new OrFilter(filters);
-} else {
-throw new QueryParsingException(parseContext.index(), "terms filter execution value [" + execution + "] not supported");
-}
+Filter filter;
+if (fieldMapper != null) {
+filter = fieldMapper.termsFilter(terms, parseContext);
+} else {
+BytesRef[] filterValues = new BytesRef[terms.size()];
+for (int i = 0; i < filterValues.length; i++) {
+filterValues[i] = BytesRefs.toBytesRef(terms.get(i));
+}
+filter = Queries.wrap(new TermsQuery(fieldName, filterValues));
+}

if (cache != null) {
filter = parseContext.cacheFilter(filter, cacheKey, cache);
}

if (filterName != null) {
parseContext.addNamedFilter(filterName, filter);
}
return filter;
}
}

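The single code path that replaces the old execution modes above builds one multi-term query for all values. A self-contained sketch of that shape against the stock Lucene 5.x API (the helper class and method names are illustrative, and QueryWrapperFilter stands in for the internal Queries.wrap helper):

    import org.apache.lucene.queries.TermsQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.QueryWrapperFilter;
    import org.apache.lucene.util.BytesRef;

    public class TermsFilterSketch {
        // One TermsQuery over all values replaces the per-term clauses of the
        // removed "bool"/"and"/"or" execution modes.
        public static Filter termsFilter(String field, String... values) {
            BytesRef[] terms = new BytesRef[values.length];
            for (int i = 0; i < values.length; i++) {
                terms[i] = new BytesRef(values[i]);
            }
            return new QueryWrapperFilter(new TermsQuery(field, terms));
        }
    }
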
@ -20,10 +20,11 @@
package org.elasticsearch.index.query;

import org.apache.lucene.index.Term;
-import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.search.Filter;
+import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.inject.Inject;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.internal.TypeFieldMapper;
@ -67,7 +68,7 @@ public class TypeFilterParser implements FilterParser {
//LUCENE 4 UPGRADE document mapper should use bytesref as well?
DocumentMapper documentMapper = parseContext.mapperService().documentMapper(type.utf8ToString());
if (documentMapper == null) {
-filter = new TermFilter(new Term(TypeFieldMapper.NAME, type));
+filter = Queries.wrap(new TermQuery(new Term(TypeFieldMapper.NAME, type)));
} else {
filter = documentMapper.typeFilter();
}

@ -21,6 +21,7 @@ package org.elasticsearch.index.query.functionscore;

import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableMap.Builder;

+import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
@ -29,7 +30,6 @@ import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.Inject;
-import org.elasticsearch.common.lucene.search.MatchAllDocsFilter;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.lucene.search.function.CombineFunction;
import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;
@ -165,7 +165,7 @@ public class FunctionScoreQueryParser implements QueryParser {
}
// handle cases where only one score function and no filter was
// provided. In this case we create a FunctionScoreQuery.
-if (filterFunctions.size() == 0 || filterFunctions.size() == 1 && (filterFunctions.get(0).filter == null || filterFunctions.get(0).filter instanceof MatchAllDocsFilter)) {
+if (filterFunctions.size() == 0 || filterFunctions.size() == 1 && (filterFunctions.get(0).filter == null || Queries.isConstantMatchAllQuery(filterFunctions.get(0).filter))) {
ScoreFunction function = filterFunctions.size() == 0 ? null : filterFunctions.get(0).function;
FunctionScoreQuery theQuery = new FunctionScoreQuery(query, function, minScore);
if (combineFunction != null) {
@ -227,7 +227,7 @@ public class FunctionScoreQueryParser implements QueryParser {
}
}
if (filter == null) {
-filter = Queries.MATCH_ALL_FILTER;
+filter = Queries.newMatchAllFilter();
}
if (scoreFunction == null) {
throw new ElasticsearchParseException("function_score: One entry in functions list is missing a function.");

@ -26,7 +26,6 @@ import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.Query;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.Nullable;
-import org.elasticsearch.common.lucene.search.AndFilter;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.query.QueryParseContext;

@ -24,6 +24,7 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BitsFilteredDocIdSet;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.CollectionTerminatedException;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
@ -105,7 +106,7 @@ public class ChildrenConstantScoreQuery extends Query {
final long valueCount;
List<LeafReaderContext> leaves = searcher.getIndexReader().leaves();
if (globalIfd == null || leaves.isEmpty()) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
} else {
AtomicParentChildFieldData afd = globalIfd.load(leaves.get(0));
SortedDocValues globalValues = afd.getOrdinalsValues(parentType);
@ -113,7 +114,7 @@ public class ChildrenConstantScoreQuery extends Query {
}

if (valueCount == 0) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}

Query childQuery = rewrittenChildQuery;
@ -124,7 +125,7 @@ public class ChildrenConstantScoreQuery extends Query {

final long remaining = collector.foundParents();
if (remaining == 0) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}

Filter shortCircuitFilter = null;

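Why an empty BooleanQuery can stand in for the old match-no-docs query in the early-exit branches above: a BooleanQuery with no clauses matches no documents, so returning its Weight short-circuits the search the same way. A minimal illustration (the helper name is hypothetical):

    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;

    public class MatchNothingSketch {
        // No clauses means no matching documents in Lucene 5.x, where
        // BooleanQuery was still mutable and directly instantiable.
        public static Query matchNone() {
            return new BooleanQuery();
        }
    }
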
@ -23,6 +23,7 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BitsFilteredDocIdSet;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.CollectionTerminatedException;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
@ -170,7 +171,7 @@ public class ChildrenQuery extends Query {
IndexParentChildFieldData globalIfd = ifd.loadGlobal(searcher.getIndexReader());
if (globalIfd == null) {
// No docs of the specified type exist on this shard
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}
IndexSearcher indexSearcher = new IndexSearcher(searcher.getIndexReader());
indexSearcher.setSimilarity(searcher.getSimilarity());
@ -215,7 +216,7 @@ public class ChildrenQuery extends Query {
indexSearcher.search(childQuery, collector);
numFoundParents = collector.foundParents();
if (numFoundParents == 0) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}
abort = false;
} finally {

@ -85,7 +85,7 @@ public class ParentConstantScoreQuery extends Query {
final long maxOrd;
List<LeafReaderContext> leaves = searcher.getIndexReader().leaves();
if (globalIfd == null || leaves.isEmpty()) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
} else {
AtomicParentChildFieldData afd = globalIfd.load(leaves.get(0));
SortedDocValues globalValues = afd.getOrdinalsValues(parentType);
@ -93,7 +93,7 @@ public class ParentConstantScoreQuery extends Query {
}

if (maxOrd == 0) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}

final Query parentQuery = rewrittenParentQuery;
@ -103,7 +103,7 @@ public class ParentConstantScoreQuery extends Query {
indexSearcher.search(parentQuery, collector);

if (collector.parentCount() == 0) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}

return new ChildrenWeight(this, childrenFilter, collector, globalIfd);

@ -24,10 +24,12 @@ import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
-import org.apache.lucene.queries.TermFilter;
+import org.apache.lucene.search.BooleanClause.Occur;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
+import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.join.BitDocIdSetFilter;
import org.apache.lucene.util.BitDocIdSet;
import org.apache.lucene.util.BitSet;
@ -38,7 +40,7 @@ import org.apache.lucene.util.FixedBitSet;
import org.apache.lucene.util.LongBitSet;
import org.apache.lucene.util.SparseFixedBitSet;
import org.elasticsearch.common.lease.Releasables;
-import org.elasticsearch.common.lucene.search.AndFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.util.BytesRefHash;
import org.elasticsearch.common.util.LongHash;
import org.elasticsearch.index.mapper.Uid;
@ -46,8 +48,6 @@ import org.elasticsearch.index.mapper.internal.UidFieldMapper;
import org.elasticsearch.search.internal.SearchContext;

import java.io.IOException;
-import java.util.Arrays;
-import java.util.List;

/**
 * Advantages of using this filter over Lucene's TermsFilter in the parent child context:
@ -63,13 +63,12 @@ final class ParentIdsFilter extends Filter {
if (numFoundParents == 1) {
BytesRef id = globalValues.lookupOrd((int) parentOrds.nextSetBit(0));
if (nonNestedDocsFilter != null) {
-List<Filter> filters = Arrays.asList(
-new TermFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id))),
-nonNestedDocsFilter
-);
-return new AndFilter(filters);
+BooleanQuery bq = new BooleanQuery();
+bq.add(new TermQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id))), Occur.MUST);
+bq.add(nonNestedDocsFilter, Occur.MUST);
+return Queries.wrap(bq);
} else {
-return new TermFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id)));
+return Queries.wrap(new TermQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id))));
}
} else {
BytesRefHash parentIds = null;
@ -96,13 +95,12 @@ final class ParentIdsFilter extends Filter {
if (numFoundParents == 1) {
BytesRef id = globalValues.lookupOrd((int) parentIdxs.get(0));
if (nonNestedDocsFilter != null) {
-List<Filter> filters = Arrays.asList(
-new TermFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id))),
-nonNestedDocsFilter
-);
-return new AndFilter(filters);
+BooleanQuery bq = new BooleanQuery();
+bq.add(new TermQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id))), Occur.MUST);
+bq.add(nonNestedDocsFilter, Occur.MUST);
+return Queries.wrap(bq);
} else {
-return new TermFilter(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id)));
+return Queries.wrap(new TermQuery(new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(parentType, id))));
}
} else {
BytesRefHash parentIds = null;

@ -126,7 +126,7 @@ public class ParentQuery extends Query {
IndexParentChildFieldData globalIfd = parentChildIndexFieldData.loadGlobal(searcher.getIndexReader());
if (globalIfd == null) {
// No docs of the specified type exist on this shard
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}

try {
@ -138,7 +138,7 @@ public class ParentQuery extends Query {
indexSearcher.setSimilarity(searcher.getSimilarity());
indexSearcher.search(parentQuery, collector);
if (collector.parentCount() == 0) {
-return Queries.newMatchNoDocsQuery().createWeight(searcher, needsScores);
+return new BooleanQuery().createWeight(searcher, needsScores);
}
childWeight = new ChildWeight(this, parentQuery.createWeight(searcher, needsScores), childrenFilter, collector, globalIfd);
releaseCollectorResource = false;

@ -20,10 +20,11 @@
package org.elasticsearch.index.search.geo;

import org.apache.lucene.search.BooleanClause.Occur;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.geo.GeoPoint;
-import org.elasticsearch.common.lucene.search.XBooleanFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;

/**
@ -43,17 +44,18 @@ public class IndexedGeoBoundingBoxFilter {
}

private static Filter westGeoBoundingBoxFilter(GeoPoint topLeft, GeoPoint bottomRight, GeoPointFieldMapper fieldMapper) {
-XBooleanFilter filter = new XBooleanFilter();
+BooleanQuery filter = new BooleanQuery();
+filter.setMinimumNumberShouldMatch(1);
filter.add(fieldMapper.lonMapper().rangeFilter(null, bottomRight.lon(), true, true), Occur.SHOULD);
filter.add(fieldMapper.lonMapper().rangeFilter(topLeft.lon(), null, true, true), Occur.SHOULD);
filter.add(fieldMapper.latMapper().rangeFilter(bottomRight.lat(), topLeft.lat(), true, true), Occur.MUST);
-return filter;
+return Queries.wrap(filter);
}

private static Filter eastGeoBoundingBoxFilter(GeoPoint topLeft, GeoPoint bottomRight, GeoPointFieldMapper fieldMapper) {
-XBooleanFilter filter = new XBooleanFilter();
+BooleanQuery filter = new BooleanQuery();
filter.add(fieldMapper.lonMapper().rangeFilter(topLeft.lon(), bottomRight.lon(), true, true), Occur.MUST);
filter.add(fieldMapper.latMapper().rangeFilter(bottomRight.lat(), topLeft.lat(), true, true), Occur.MUST);
-return filter;
+return Queries.wrap(filter);
}
}

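The west variant above handles a bounding box that crosses the date line: two SHOULD clauses plus setMinimumNumberShouldMatch(1) express "lon <= right OR lon >= left", while the latitude range is a plain MUST clause. The same shape in isolation, using stock NumericRangeQuery instead of the mapper-provided range filters (the field names and the helper class are illustrative, not from the commit):

    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.NumericRangeQuery;
    import org.apache.lucene.search.Query;

    public class DateLineBoxSketch {
        // Box crossing the date line: (lon <= right OR lon >= left) AND lat in [bottom, top].
        public static Query crossing(double left, double right, double bottom, double top) {
            BooleanQuery bq = new BooleanQuery();
            bq.setMinimumNumberShouldMatch(1); // at least one lon range must match
            bq.add(NumericRangeQuery.newDoubleRange("lon", null, right, true, true), Occur.SHOULD);
            bq.add(NumericRangeQuery.newDoubleRange("lon", left, null, true, true), Occur.SHOULD);
            bq.add(NumericRangeQuery.newDoubleRange("lat", bottom, top, true, true), Occur.MUST);
            return bq;
        }
    }
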
@ -23,10 +23,12 @@ import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
-import org.apache.lucene.search.PrefixFilter;
+import org.apache.lucene.search.PrefixQuery;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.BytesRef;
-import org.elasticsearch.common.lucene.search.NotFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.index.mapper.internal.TypeFieldMapper;

import java.io.IOException;
@ -38,16 +40,21 @@ import java.io.IOException;
 * A nested document is a sub document that belongs to a root document.
 * Nested documents share the unique id and type and optionally the _source with root documents.
 */
-public class NonNestedDocsFilter extends Filter {
+public final class NonNestedDocsFilter extends Filter {

public static final NonNestedDocsFilter INSTANCE = new NonNestedDocsFilter();

-private final Filter filter = new NotFilter(nestedFilter());
+private final Filter filter = Queries.wrap(Queries.not(nestedFilter()));
private final int hashCode = filter.hashCode();

private NonNestedDocsFilter() {
}

+@Override
+public Query clone() {
+return INSTANCE;
+}
+
@Override
public DocIdSet getDocIdSet(LeafReaderContext context, Bits acceptDocs) throws IOException {
return filter.getDocIdSet(context, acceptDocs);
@ -72,6 +79,6 @@ public class NonNestedDocsFilter extends Filter {
 * @return a filter that returns all nested documents.
 */
private static Filter nestedFilter() {
-return new PrefixFilter(new Term(TypeFieldMapper.NAME, new BytesRef("__")));
+return Queries.wrap(new PrefixQuery(new Term(TypeFieldMapper.NAME, new BytesRef("__"))));
}
}

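Queries.not(...) above presumably rewrites negation as a query; the standard construction in Lucene 5.x is a BooleanQuery pairing a match-all MUST clause with the negated query as MUST_NOT. A standalone sketch of that shape (the helper class, the hard-coded field name, and the assumption about how the internal Queries.not helper works are all illustrative):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.MatchAllDocsQuery;
    import org.apache.lucene.search.PrefixQuery;
    import org.apache.lucene.search.QueryWrapperFilter;

    public class NonNestedSketch {
        // Match all docs except those whose _type starts with "__",
        // the marker prefix used for nested documents.
        public static Filter nonNestedFilter() {
            BooleanQuery not = new BooleanQuery();
            not.add(new MatchAllDocsQuery(), Occur.MUST);
            not.add(new PrefixQuery(new Term("_type", "__")), Occur.MUST_NOT);
            return new QueryWrapperFilter(not);
        }
    }
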
@ -25,6 +25,7 @@ import org.apache.lucene.index.ReaderUtil;
import org.apache.lucene.index.memory.ExtendedMemoryIndex;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.BooleanClause;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
@ -50,7 +51,7 @@ import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.lucene.Lucene;
-import org.elasticsearch.common.lucene.search.XBooleanFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.text.BytesText;
import org.elasticsearch.common.text.StringText;
@ -796,10 +797,10 @@ public class PercolatorService extends AbstractComponent {

final Filter filter;
if (context.aliasFilter() != null) {
-XBooleanFilter booleanFilter = new XBooleanFilter();
+BooleanQuery booleanFilter = new BooleanQuery();
booleanFilter.add(context.aliasFilter(), BooleanClause.Occur.MUST);
booleanFilter.add(percolatorTypeFilter, BooleanClause.Occur.MUST);
-filter = booleanFilter;
+filter = Queries.wrap(booleanFilter);
} else {
filter = percolatorTypeFilter;
}

@ -20,7 +20,6 @@ package org.elasticsearch.search.aggregations;

import com.google.common.collect.ImmutableMap;

-import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
@ -116,7 +115,7 @@ public class AggregationPhase implements SearchPhase {
// optimize the global collector based execution
if (!globals.isEmpty()) {
BucketCollector globalsCollector = BucketCollector.wrap(globals);
-Query query = new ConstantScoreQuery(Queries.MATCH_ALL_FILTER);
+Query query = Queries.newMatchAllQuery();
Filter searchFilter = context.searchFilter(context.types());
if (searchFilter != null) {
query = new FilteredQuery(query, searchFilter);

@ -18,7 +18,7 @@
 */
package org.elasticsearch.search.aggregations.bucket.filter;

-import org.elasticsearch.common.lucene.search.MatchAllDocsFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.query.ParsedFilter;
import org.elasticsearch.search.aggregations.Aggregator;
@ -41,7 +41,7 @@ public class FilterParser implements Aggregator.Parser {
public AggregatorFactory parse(String aggregationName, XContentParser parser, SearchContext context) throws IOException {
ParsedFilter filter = context.queryParserService().parseInnerFilter(parser);

-return new FilterAggregator.Factory(aggregationName, filter == null ? new MatchAllDocsFilter() : filter.filter());
+return new FilterAggregator.Factory(aggregationName, filter == null ? Queries.newMatchAllFilter() : filter.filter());
}

}

@ -19,7 +19,7 @@

package org.elasticsearch.search.aggregations.bucket.filters;

-import org.elasticsearch.common.lucene.search.MatchAllDocsFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.query.ParsedFilter;
import org.elasticsearch.search.SearchParseException;
@ -61,7 +61,7 @@ public class FiltersParser implements Aggregator.Parser {
key = parser.currentName();
} else {
ParsedFilter filter = context.queryParserService().parseInnerFilter(parser);
-filters.add(new FiltersAggregator.KeyedFilter(key, filter == null ? new MatchAllDocsFilter() : filter.filter()));
+filters.add(new FiltersAggregator.KeyedFilter(key, filter == null ? Queries.newMatchAllFilter() : filter.filter()));
}
}
} else {
@ -73,7 +73,7 @@ public class FiltersParser implements Aggregator.Parser {
int idx = 0;
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
ParsedFilter filter = context.queryParserService().parseInnerFilter(parser);
-filters.add(new FiltersAggregator.KeyedFilter(String.valueOf(idx), filter == null ? new MatchAllDocsFilter()
+filters.add(new FiltersAggregator.KeyedFilter(String.valueOf(idx), filter == null ? Queries.newMatchAllFilter()
: filter.filter()));
idx++;
}

@ -23,6 +23,7 @@ import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
-import org.apache.lucene.search.FilterCachingPolicy;
+import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.join.BitDocIdSetFilter;
import org.apache.lucene.util.BitDocIdSet;
import org.apache.lucene.util.BitSet;
@ -55,7 +56,7 @@ public class NestedAggregator extends SingleBucketAggregator {
private DocIdSetIterator childDocs;
private BitSet parentDocs;

-public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator, Map<String, Object> metaData, FilterCachingPolicy filterCachingPolicy) throws IOException {
+public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator, Map<String, Object> metaData, QueryCachingPolicy filterCachingPolicy) throws IOException {
super(name, factories, aggregationContext, parentAggregator, metaData);
childFilter = aggregationContext.searchContext().filterCache().cache(objectMapper.nestedTypeFilter(), null, filterCachingPolicy);
}
@ -142,12 +143,12 @@ public class NestedAggregator extends SingleBucketAggregator {
public static class Factory extends AggregatorFactory {

private final String path;
-private final FilterCachingPolicy filterCachingPolicy;
+private final QueryCachingPolicy queryCachingPolicy;

-public Factory(String name, String path, FilterCachingPolicy filterCachingPolicy) {
+public Factory(String name, String path, QueryCachingPolicy queryCachingPolicy) {
super(name, InternalNested.TYPE.name());
this.path = path;
-this.filterCachingPolicy = filterCachingPolicy;
+this.queryCachingPolicy = queryCachingPolicy;
}

@Override
@ -166,7 +167,7 @@ public class NestedAggregator extends SingleBucketAggregator {
if (!objectMapper.nested().isNested()) {
throw new AggregationExecutionException("[nested] nested path [" + path + "] is not nested");
}
-return new NestedAggregator(name, factories, objectMapper, context, parent, metaData, filterCachingPolicy);
+return new NestedAggregator(name, factories, objectMapper, context, parent, metaData, queryCachingPolicy);
}

private final static class Unmapped extends NonCollectingAggregator {

@ -24,12 +24,14 @@ import com.google.common.collect.ImmutableMap;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
-import org.apache.lucene.queries.TermFilter;
+import org.apache.lucene.search.BooleanClause.Occur;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
+import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopDocsCollector;
import org.apache.lucene.search.TopFieldCollector;
@ -40,7 +42,7 @@ import org.apache.lucene.util.BitSet;
import org.apache.lucene.util.Bits;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.common.lucene.Lucene;
-import org.elasticsearch.common.lucene.search.AndFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.index.fieldvisitor.SingleFieldsVisitor;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.Uid;
@ -55,7 +57,6 @@ import org.elasticsearch.search.internal.FilteredSearchContext;
import org.elasticsearch.search.internal.SearchContext;

import java.io.IOException;
-import java.util.Arrays;
import java.util.Map;

/**
@ -284,16 +285,16 @@ public final class InnerHitsContext {
term = (String) fieldsVisitor.fields().get(ParentFieldMapper.NAME).get(0);
}
}
-Filter filter = new TermFilter(new Term(field, term)); // Only include docs that have the current hit as parent
+Filter filter = Queries.wrap(new TermQuery(new Term(field, term))); // Only include docs that have the current hit as parent
Filter typeFilter = documentMapper.typeFilter(); // Only include docs that have this inner hits type.

+BooleanQuery filteredQuery = new BooleanQuery();
+filteredQuery.add(query, Occur.MUST);
+filteredQuery.add(filter, Occur.FILTER);
+filteredQuery.add(typeFilter, Occur.FILTER);
if (size() == 0) {
-TotalHitCountCollector collector = new TotalHitCountCollector();
-context.searcher().search(
-new FilteredQuery(query, new AndFilter(Arrays.asList(filter, typeFilter))),
-collector
-);
-return new TopDocs(collector.getTotalHits(), Lucene.EMPTY_SCORE_DOCS, 0);
+final int count = context.searcher().count(filteredQuery);
+return new TopDocs(count, Lucene.EMPTY_SCORE_DOCS, 0);
} else {
int topN = from() + size();
TopDocsCollector topDocsCollector;
@ -302,10 +303,7 @@ public final class InnerHitsContext {
} else {
topDocsCollector = TopScoreDocCollector.create(topN);
}
-context.searcher().search(
-new FilteredQuery(query, new AndFilter(Arrays.asList(filter, typeFilter))),
-topDocsCollector
-);
+context.searcher().search(filteredQuery, topDocsCollector);
return topDocsCollector.topDocs(from(), size());
}
}

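The rewritten inner-hits code relies on Occur.FILTER, new in Lucene 5.1: a FILTER clause must match but contributes nothing to the score, so one BooleanQuery can carry the scored query and its restrictions without the old FilteredQuery-plus-AndFilter wrapping. A minimal sketch of the pattern (the helper name is illustrative):

    import org.apache.lucene.search.BooleanClause.Occur;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;

    public class FilterClauseSketch {
        // MUST both matches and scores; FILTER restricts the result set
        // without affecting the score.
        public static Query restrict(Query scored, Query... restrictions) {
            BooleanQuery bq = new BooleanQuery();
            bq.add(scored, Occur.MUST);
            for (Query restriction : restrictions) {
                bq.add(restriction, Occur.FILTER);
            }
            return bq;
        }
    }

For example, restrict(userQuery, typeQuery, parentQuery) would reproduce the composition built inline above.
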
@ -23,6 +23,7 @@ import com.google.common.collect.Lists;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
+import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.Bits;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ExceptionsHelper;

@ -21,10 +21,14 @@ package org.elasticsearch.search.internal;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;

+import org.apache.lucene.search.BooleanClause.Occur;
+import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
+import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.util.Counter;
@ -33,11 +37,11 @@ import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.cache.recycler.PageCacheRecycler;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.lease.Releasables;
-import org.elasticsearch.common.lucene.search.AndFilter;
+import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.lucene.search.function.BoostScoreFunction;
import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.analysis.AnalysisService;
import org.elasticsearch.index.cache.bitset.BitsetFilterCache;
import org.elasticsearch.index.cache.filter.FilterCache;
@ -49,8 +53,6 @@ import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.query.IndexQueryParserService;
import org.elasticsearch.index.query.ParsedFilter;
import org.elasticsearch.index.query.ParsedQuery;
-import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.query.support.NestedScope;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.index.similarity.SimilarityService;
import org.elasticsearch.script.ScriptService;
@ -248,15 +250,17 @@ public class DefaultSearchContext extends SearchContext {
@Override
public Filter searchFilter(String[] types) {
Filter filter = mapperService().searchFilter(types);
-if (filter == null) {
-return aliasFilter;
-} else {
-filter = filterCache().cache(filter, null, indexService.queryParserService().autoFilterCachePolicy());
-if (aliasFilter != null) {
-return new AndFilter(ImmutableList.of(filter, aliasFilter));
-}
-return filter;
-}
+if (filter == null && aliasFilter == null) {
+return null;
+}
+BooleanQuery bq = new BooleanQuery();
+if (filter != null) {
+bq.add(filterCache().cache(filter, null, indexService.queryParserService().autoFilterCachePolicy()), Occur.MUST);
+}
+if (aliasFilter != null) {
+bq.add(aliasFilter, Occur.MUST);
+}
+return Queries.wrap(bq);
}

@Override

|
||||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.common.lucene.docset;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
import org.apache.lucene.index.LeafReaderContext;
|
||||
import org.apache.lucene.search.DocIdSet;
|
||||
import org.apache.lucene.search.DocIdSetIterator;
|
||||
import org.apache.lucene.search.Filter;
|
||||
import org.apache.lucene.util.Bits;
|
||||
import org.apache.lucene.util.RoaringDocIdSet;
|
||||
import org.elasticsearch.ElasticsearchIllegalArgumentException;
|
||||
import org.elasticsearch.cluster.metadata.IndexMetaData;
|
||||
import org.elasticsearch.common.settings.ImmutableSettings;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.unit.DistanceUnit;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.common.xcontent.json.JsonXContent;
|
||||
import org.elasticsearch.index.engine.Engine.Searcher;
|
||||
import org.elasticsearch.index.query.FilterBuilder;
|
||||
import org.elasticsearch.index.query.FilterBuilders;
|
||||
import org.elasticsearch.index.query.TermFilterBuilder;
|
||||
import org.elasticsearch.index.IndexService;
|
||||
import org.elasticsearch.test.ElasticsearchSingleNodeTest;
|
||||
|
||||
public class DocIdSetsTests extends ElasticsearchSingleNodeTest {
|
||||
|
||||
private static final Settings SINGLE_SHARD_SETTINGS = ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).build();
|
||||
|
||||
private void test(IndexService indexService, boolean broken, FilterBuilder filterBuilder) throws IOException {
|
||||
client().admin().indices().prepareRefresh("test").get();
|
||||
XContentBuilder builder = filterBuilder.toXContent(JsonXContent.contentBuilder(), ToXContent.EMPTY_PARAMS);
|
||||
XContentParser parser = JsonXContent.jsonXContent.createParser(builder.bytes());
|
||||
Filter filter = indexService.queryParserService().parseInnerFilter(parser).filter();
|
||||
try (Searcher searcher = indexService.shardSafe(0).acquireSearcher("test")) {
|
||||
final LeafReaderContext ctx = searcher.reader().leaves().get(0);
|
||||
DocIdSet set = filter.getDocIdSet(ctx, null);
|
||||
assertEquals(broken, DocIdSets.isBroken(set.iterator()));
|
||||
}
|
||||
}
|
||||
|
||||
public void testTermIsNotBroken() throws IOException {
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "l", "type=long");
|
||||
client().prepareIndex("test", "type").setSource("l", 7).get();
|
||||
TermFilterBuilder filter = FilterBuilders.termFilter("l", 7).cache(randomBoolean());
|
||||
test(indexService, false, filter);
|
||||
}
|
||||
|
||||
public void testDefaultGeoIsBroken() throws IOException {
|
||||
// Geo is slow by default :'(
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "gp", "type=geo_point");
|
||||
client().prepareIndex("test", "type").setSource("gp", "2,3").get();
|
||||
FilterBuilder filter = FilterBuilders.geoDistanceFilter("gp").distance(1000, DistanceUnit.KILOMETERS).point(3, 2);
|
||||
test(indexService, true, filter);
|
||||
|
||||
}
|
||||
|
||||
public void testIndexedGeoIsNotBroken() throws IOException {
|
||||
// Geo has a fast iterator when indexing lat,lon and using the "indexed" bbox optimization
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "gp", "type=geo_point,lat_lon=true");
|
||||
client().prepareIndex("test", "type").setSource("gp", "2,3").get();
|
||||
FilterBuilder filter = FilterBuilders.geoDistanceFilter("gp").distance(1000, DistanceUnit.KILOMETERS).point(3, 2).optimizeBbox("indexed");
|
||||
test(indexService, false, filter);
|
||||
}
|
||||
|
||||
public void testScriptIsBroken() throws IOException { // by nature unfortunately
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "l", "type=long");
|
||||
client().prepareIndex("test", "type").setSource("l", 7).get();
|
||||
FilterBuilder filter = FilterBuilders.scriptFilter("doc['l'].value < 8");
|
||||
test(indexService, true, filter);
|
||||
}
|
||||
|
||||
public void testCachedIsNotBroken() throws IOException {
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "l", "type=long");
|
||||
client().prepareIndex("test", "type").setSource("l", 7).get();
|
||||
// This filter is inherently slow but by caching it we pay the price at caching time, not iteration
|
||||
FilterBuilder filter = FilterBuilders.scriptFilter("doc['l'].value < 8").cache(true);
|
||||
test(indexService, false, filter);
|
||||
}
|
||||
|
||||
public void testOr() throws IOException {
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "l", "type=long");
|
||||
client().prepareIndex("test", "type").setSource("l", new long[] {7, 8}).get();
|
||||
// Or with fast clauses is fast
|
||||
FilterBuilder filter = FilterBuilders.orFilter(FilterBuilders.termFilter("l", 7), FilterBuilders.termFilter("l", 8));
|
||||
test(indexService, false, filter);
|
||||
// But if at least one clause is broken, it is broken
|
||||
filter = FilterBuilders.orFilter(FilterBuilders.termFilter("l", 7), FilterBuilders.scriptFilter("doc['l'].value < 8"));
|
||||
test(indexService, true, filter);
|
||||
}
|
||||
|
||||
public void testAnd() throws IOException {
|
||||
IndexService indexService = createIndex("test", SINGLE_SHARD_SETTINGS, "type", "l", "type=long");
|
||||
client().prepareIndex("test", "type").setSource("l", new long[] {7, 8}).get();
|
||||
// And with fast clauses is fast
|
||||
FilterBuilder filter = FilterBuilders.andFilter(FilterBuilders.termFilter("l", 7), FilterBuilders.termFilter("l", 8));
|
||||
test(indexService, false, filter);
|
||||
// If at least one clause is 'fast' and the other clauses supports random-access, it is still fast
|
||||
filter = FilterBuilders.andFilter(FilterBuilders.termFilter("l", 7).cache(randomBoolean()), FilterBuilders.scriptFilter("doc['l'].value < 8"));
|
||||
test(indexService, false, filter);
|
||||
// However if all clauses are broken, the and is broken
|
||||
filter = FilterBuilders.andFilter(FilterBuilders.scriptFilter("doc['l'].value > 5"), FilterBuilders.scriptFilter("doc['l'].value < 8"));
|
||||
test(indexService, true, filter);
|
||||
}
|
||||
|
||||
public void testAsSequentialAccessBits() throws IOException {
|
||||
final int maxDoc = randomIntBetween(5, 100);
|
||||
|
||||
// Null DocIdSet maps to empty bits
|
||||
Bits bits = DocIdSets.asSequentialAccessBits(100, null);
|
||||
for (int i = 0; i < maxDoc; ++i) {
|
||||
assertFalse(bits.get(i));
|
||||
}
|
||||
|
||||
// Empty set maps to empty bits
|
||||
bits = DocIdSets.asSequentialAccessBits(100, DocIdSet.EMPTY);
|
||||
for (int i = 0; i < maxDoc; ++i) {
|
||||
assertFalse(bits.get(i));
|
||||
}
|
||||
|
||||
RoaringDocIdSet.Builder b = new RoaringDocIdSet.Builder(maxDoc);
|
||||
for (int i = randomInt(maxDoc - 1); i < maxDoc; i += randomIntBetween(1, 10)) {
|
||||
b.add(i);
|
||||
}
|
||||
final RoaringDocIdSet set = b.build();
|
||||
// RoaringDocIdSet does not support random access
|
||||
assertNull(set.bits());
|
||||
|
||||
bits = DocIdSets.asSequentialAccessBits(100, set);
|
||||
bits.get(4);
|
||||
try {
|
||||
bits.get(2);
|
||||
fail("Should have thrown an exception because of out-of-order consumption");
|
||||
} catch (ElasticsearchIllegalArgumentException e) {
|
||||
// ok
|
||||
}
|
||||
|
||||
bits = DocIdSets.asSequentialAccessBits(100, set);
|
||||
DocIdSetIterator iterator = set.iterator();
|
||||
for (int i = randomInt(maxDoc - 1); i < maxDoc; i += randomIntBetween(1, 10)) {
|
||||
if (iterator.docID() < i) {
|
||||
iterator.advance(i);
|
||||
}
|
||||
|
||||
assertEquals(iterator.docID() == i, bits.get(i));
|
||||
}
|
||||
}
|
||||
}
|
@ -28,8 +28,14 @@ import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
-import org.apache.lucene.index.*;
-import org.apache.lucene.queries.TermsFilter;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.index.IndexableField;
+import org.apache.lucene.index.NoMergePolicy;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.queries.TermsQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.BytesRef;
@ -41,9 +47,18 @@ import org.junit.After;
import org.junit.Before;
import org.junit.Test;

-import java.util.*;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;

-import static com.carrotsearch.randomizedtesting.RandomizedTest.*;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.frequently;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.getRandom;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomAsciiOfLength;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomBoolean;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomInt;
+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomIntBetween;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.is;

@ -143,7 +158,7 @@ public class FreqTermsEnumTests extends ElasticsearchTestCase {
}
}
}
-filter = new TermsFilter(filterTerms);
+filter = Queries.wrap(new TermsQuery(filterTerms));
}

private void addFreqs(Document doc, Map<String, FreqHolder> reference) {
@ -175,9 +190,9 @@ public class FreqTermsEnumTests extends ElasticsearchTestCase {

@Test
public void testNonDeletedFreqs() throws Exception {
-assertAgainstReference(true, true, Queries.MATCH_ALL_FILTER, referenceNotDeleted);
-assertAgainstReference(true, false, Queries.MATCH_ALL_FILTER, referenceNotDeleted);
-assertAgainstReference(false, true, Queries.MATCH_ALL_FILTER, referenceNotDeleted);
+assertAgainstReference(true, true, Queries.newMatchAllFilter(), referenceNotDeleted);
+assertAgainstReference(true, false, Queries.newMatchAllFilter(), referenceNotDeleted);
+assertAgainstReference(false, true, Queries.newMatchAllFilter(), referenceNotDeleted);
}

@Test

@ -59,7 +59,7 @@ public class MatchAllDocsFilterTests extends ElasticsearchTestCase {
        IndexReader reader = DirectoryReader.open(indexWriter, true);
        IndexSearcher searcher = new IndexSearcher(reader);

        ConstantScoreQuery query = new ConstantScoreQuery(Queries.MATCH_ALL_FILTER);
        ConstantScoreQuery query = new ConstantScoreQuery(Queries.newMatchAllFilter());
        long count = Lucene.count(searcher, query);
        assertThat(count, equalTo(2l));
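Both hunks above swap the Queries.MATCH_ALL_FILTER constant for the Queries.newMatchAllFilter() factory. The helper's body is not shown in this diff, but judging from the commit's overall pattern it plausibly returns a freshly wrapped match-all query, roughly (this is an assumption, not the actual helper):

import org.apache.lucene.search.Filter;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.QueryWrapperFilter;

// assumed shape of the helper: a new wrapped match-all query per call
public static Filter newMatchAllFilter() {
    return new QueryWrapperFilter(new MatchAllDocsQuery());
}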
@ -1,118 +0,0 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.common.lucene.search;

import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.*;
import org.apache.lucene.queries.TermFilter;
import org.apache.lucene.queries.TermsFilter;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.BitSet;
import org.elasticsearch.common.lucene.docset.DocIdSets;
import org.elasticsearch.test.ElasticsearchTestCase;
import org.junit.Test;

import static org.hamcrest.Matchers.equalTo;

/**
 */
public class TermsFilterTests extends ElasticsearchTestCase {

    @Test
    public void testTermFilter() throws Exception {
        String fieldName = "field1";
        Directory rd = new RAMDirectory();
        IndexWriter w = new IndexWriter(rd, new IndexWriterConfig(new KeywordAnalyzer()));
        for (int i = 0; i < 100; i++) {
            Document doc = new Document();
            int term = i * 10; //terms are units of 10;
            doc.add(new Field(fieldName, "" + term, StringField.TYPE_NOT_STORED));
            doc.add(new Field("all", "xxx", StringField.TYPE_NOT_STORED));
            w.addDocument(doc);
            if ((i % 40) == 0) {
                w.commit();
            }
        }
        LeafReader reader = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(w, true));
        w.close();

        TermFilter tf = new TermFilter(new Term(fieldName, "19"));
        DocIdSet dis = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
        assertTrue(dis == null || dis.iterator() == null);

        tf = new TermFilter(new Term(fieldName, "20"));
        DocIdSet result = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
        BitSet bits = DocIdSets.toBitSet(result.iterator(), reader.maxDoc());
        assertThat(bits.cardinality(), equalTo(1));

        tf = new TermFilter(new Term("all", "xxx"));
        result = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
        bits = DocIdSets.toBitSet(result.iterator(), reader.maxDoc());
        assertThat(bits.cardinality(), equalTo(100));

        reader.close();
        rd.close();
    }

    @Test
    public void testTermsFilter() throws Exception {
        String fieldName = "field1";
        Directory rd = new RAMDirectory();
        IndexWriter w = new IndexWriter(rd, new IndexWriterConfig(new KeywordAnalyzer()));
        for (int i = 0; i < 100; i++) {
            Document doc = new Document();
            int term = i * 10; //terms are units of 10;
            doc.add(new Field(fieldName, "" + term, StringField.TYPE_NOT_STORED));
            doc.add(new Field("all", "xxx", StringField.TYPE_NOT_STORED));
            w.addDocument(doc);
            if ((i % 40) == 0) {
                w.commit();
            }
        }
        LeafReader reader = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(w, true));
        w.close();

        TermsFilter tf = new TermsFilter(new Term[]{new Term(fieldName, "19")});
        assertNull(tf.getDocIdSet(reader.getContext(), reader.getLiveDocs()));

        tf = new TermsFilter(new Term[]{new Term(fieldName, "19"), new Term(fieldName, "20")});
        DocIdSet result = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
        BitSet bits = DocIdSets.toBitSet(result.iterator(), reader.maxDoc());
        assertThat(bits.cardinality(), equalTo(1));

        tf = new TermsFilter(new Term[]{new Term(fieldName, "19"), new Term(fieldName, "20"), new Term(fieldName, "10")});
        result = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
        bits = DocIdSets.toBitSet(result.iterator(), reader.maxDoc());
        assertThat(bits.cardinality(), equalTo(2));

        tf = new TermsFilter(new Term[]{new Term(fieldName, "19"), new Term(fieldName, "20"), new Term(fieldName, "10"), new Term(fieldName, "00")});
        result = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
        bits = DocIdSets.toBitSet(result.iterator(), reader.maxDoc());
        assertThat(bits.cardinality(), equalTo(2));

        reader.close();
        rd.close();
    }
}
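TermsFilterTests is deleted outright because the classes it tests (TermFilter and TermsFilter) are gone from use. For reference, the same cardinality assertions could be rewritten against the replacement API roughly as follows (an illustrative fragment reusing the deleted test's reader and fieldName setup, not code from this commit):

// inside the deleted test's setup: reader, fieldName, etc. as above
Filter tf = new QueryWrapperFilter(
        new TermsQuery(new Term(fieldName, "19"), new Term(fieldName, "20")));
DocIdSet result = tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());
BitSet bits = DocIdSets.toBitSet(result.iterator(), reader.maxDoc());
assertThat(bits.cardinality(), equalTo(1)); // only "20" exists in the index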
Some files were not shown because too many files have changed in this diff.