Merge branch 'master' into ccr

Nhat Nguyen 2018-04-10 21:43:08 -04:00
commit 530eff79fe
53 changed files with 1215 additions and 537 deletions

View File

@ -72,7 +72,7 @@ include-tagged::{doc-tests}/SearchDocumentationIT.java[multi-search-execute-list
==== MultiSearchResponse
The `MultiSearchResponse` that is returned by executing the `multiSearch` method contains
a `MultiSearchResponse.Item` for each `SearchRequest` in the
`MultiSearchRequest`. Each `MultiSearchResponse.Item` contains an
exception in `getFailure` if the request failed or a
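A hedged sketch of consuming that structure (illustrative, not part of this commit; it assumes a `RestHighLevelClient` named `client` and uses only the item accessors described above):

["source","java"]
--------------------------------------------------
MultiSearchRequest request = new MultiSearchRequest();
request.add(new SearchRequest("index-one"));
request.add(new SearchRequest("index-two"));
MultiSearchResponse response = client.multiSearch(request);
for (MultiSearchResponse.Item item : response.getResponses()) {
    if (item.isFailure()) {
        Exception failure = item.getFailure(); // set when this request failed
        // handle or log the failure
    } else {
        SearchResponse searchResponse = item.getResponse();
        // process the hits for this SearchRequest
    }
}
--------------------------------------------------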

View File

@ -58,7 +58,7 @@ characters from the opening `/*` to the closing `*/` are ignored.
==== Keywords
Painless reserves the following keywords for built-in language features.
These keywords cannot be used in other contexts, such as identifiers.
[cols="^1,^1,^1,^1,^1"]
|====

View File

@ -114,7 +114,7 @@ POST /_search
=== Special Paths
Instead of pathing to a metric, `buckets_path` can use a special `"_count"` path. This instructs
the pipeline aggregation to use the document count as its input. For example, a moving average can be calculated on the document count of each bucket, instead of a specific metric:
[source,js]
--------------------------------------------------
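For readers driving this from Java, a hedged equivalent using the aggregation builders (illustrative only; the field names and the `PipelineAggregatorBuilders.movingAvg` helper are assumptions, not part of this commit):

["source","java"]
--------------------------------------------------
SearchSourceBuilder source = new SearchSourceBuilder()
    .size(0)
    .aggregation(AggregationBuilders.dateHistogram("my_date_histo")
        .field("timestamp")
        .dateHistogramInterval(DateHistogramInterval.DAY)
        // the special "_count" path points the moving average at each bucket's doc count
        .subAggregation(PipelineAggregatorBuilders.movingAvg("the_movavg", "_count")));
--------------------------------------------------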

View File

@ -9,6 +9,9 @@
`all_fields`, `locale`, `auto_generate_phrase_query` and
`lowercase_expanded_terms` deprecated in 6.x have been removed.
* Purely negative queries (only MUST_NOT clauses) now return a score of `0`
rather than `1`.
==== Adaptive replica selection enabled by default
Adaptive replica selection has been enabled by default. If you wish to return to

View File

@ -5,7 +5,7 @@ experimental[The ranking evaluation API is experimental and may be changed or re
The ranking evaluation API allows you to evaluate the quality of ranked search
results over a set of typical search queries. Given this set of queries and a
list of manually rated documents, the `_rank_eval` endpoint calculates and
returns typical information retrieval metrics like _mean reciprocal rank_,
_precision_ or _discounted cumulative gain_.

View File

@ -128,6 +128,15 @@ sudo dpkg -i elasticsearch-{version}.deb
endif::[]
ifdef::include-xpack[]
[[deb-enable-indices]]
==== Enable automatic creation of {xpack} indices
{xpack} will try to automatically create a number of indices within Elasticsearch.
include::{xes-repo-dir}/setup/xpack-indices.asciidoc[]
endif::include-xpack[]
include::init-systemd.asciidoc[]
[[deb-running-init]]

View File

@ -115,6 +115,15 @@ endif::[]
include::skip-set-kernel-parameters.asciidoc[]
ifdef::include-xpack[]
[[rpm-enable-indices]]
==== Enable automatic creation of {xpack} indices
{xpack} will try to automatically create a number of indices within {es}.
include::{xes-repo-dir}/setup/xpack-indices.asciidoc[]
endif::include-xpack[]
include::init-systemd.asciidoc[]
[[rpm-running-init]]

View File

@ -37,7 +37,7 @@ endif::[]
[[install-msi-gui]]
==== Install using the graphical user interface (GUI)
Double-click the downloaded `.msi` package to launch a GUI wizard that will guide you through the
installation process. You can view help on any step by clicking the `?` button, which reveals an
aside panel with additional information for each input:
@ -52,7 +52,7 @@ image::images/msi_installer/msi_installer_locations.png[]
Then select whether to install as a service or start Elasticsearch manually as needed. When
installing as a service, you can also decide which account to run the service under as well
as whether the service should be started after installation and when Windows is started or
restarted:
[[msi-installer-service]]
@ -73,14 +73,14 @@ part of the installation, with the option to configure a HTTPS proxy through whi
[[msi-installer-selected-plugins]]
image::images/msi_installer/msi_installer_selected_plugins.png[]
Upon choosing to install X-Pack plugin, an additional step allows a choice of the type of X-Pack
license to install, in addition to X-Pack Security configuration and built-in user configuration:
[[msi-installer-xpack]]
image::images/msi_installer/msi_installer_xpack.png[]
NOTE: X-Pack includes a choice of a Trial or Basic license for 30 days. After that, you can obtain one of the
https://www.elastic.co/subscriptions[available subscriptions] or {ref}/security-settings.html[disable Security].
The Basic license is free and includes the https://www.elastic.co/products/x-pack/monitoring[Monitoring] extension.
After clicking the install button, the installer will begin installation:
@ -105,7 +105,7 @@ then running:
msiexec.exe /i elasticsearch-{version}.msi /qn
--------------------------------------------
By default, msiexec does not wait for the installation process to complete, since it runs in the
Windows subsystem. To wait on the process to finish and ensure that `%ERRORLEVEL%` is set
accordingly, it is recommended to use `start /wait` to create a process and wait for it to exit
@ -114,8 +114,8 @@ accordingly, it is recommended to use `start /wait` to create a process and wait
start /wait msiexec.exe /i elasticsearch-{version}.msi /qn
--------------------------------------------
As with any MSI installation package, a log file for the installation process can be found
within the `%TEMP%` directory, with a randomly generated name adhering to the format
`MSI<random>.LOG`. The path to a log file can be supplied using the `/l` command line argument
["source","sh",subs="attributes,callouts"]
@ -139,126 +139,126 @@ All settings exposed within the GUI are also available as command line arguments
as _properties_ within Windows Installer documentation) that can be passed to msiexec:
[horizontal]
`INSTALLDIR`::
The installation directory. The final directory in the path **must**
be the version of Elasticsearch.
Defaults to ++%ProgramW6432%\Elastic\Elasticsearch{backslash}{version}++.
`DATADIRECTORY`::
The directory in which to store your data.
Defaults to `%ALLUSERSPROFILE%\Elastic\Elasticsearch\data`
`CONFIGDIRECTORY`::
The directory in which to store your configuration.
Defaults to `%ALLUSERSPROFILE%\Elastic\Elasticsearch\config`
`LOGSDIRECTORY`::
The directory in which to store your logs.
Defaults to `%ALLUSERSPROFILE%\Elastic\Elasticsearch\logs`
`PLACEWRITABLELOCATIONSINSAMEPATH`::
Whether the data, configuration and logs directories
should be created under the installation directory. Defaults to `false`
`INSTALLASSERVICE`::
Whether Elasticsearch is installed and configured as a Windows Service.
Defaults to `true`
`STARTAFTERINSTALL`::
Whether the Windows Service is started after installation finishes.
Defaults to `true`
`STARTWHENWINDOWSSTARTS`::
Whether the Windows Service is started when Windows is started.
Defaults to `true`
`USELOCALSYSTEM`::
Whether the Windows service runs under the LocalSystem Account.
Defaults to `true`
`USENETWORKSERVICE`::
Whether the Windows service runs under the NetworkService Account. Defaults
to `false`
`USEEXISTINGUSER`::
Whether the Windows service runs under a specified existing account. Defaults
to `false`
`USER`::
The username for the account under which the Windows service runs. Defaults to `""`
`PASSWORD`::
The password for the account under which the Windows service runs. Defaults to `""`
`CLUSTERNAME`::
The name of the cluster. Defaults to `elasticsearch`
`NODENAME`::
The name of the node. Defaults to `%COMPUTERNAME%`
`MASTERNODE`::
Whether Elasticsearch is configured as a master node. Defaults to `true`
`DATANODE`::
Whether Elasticsearch is configured as a data node. Defaults to `true`
`INGESTNODE`::
Whether Elasticsearch is configured as an ingest node. Defaults to `true`
`SELECTEDMEMORY`::
The amount of memory to allocate to the JVM heap for Elasticsearch.
Defaults to `2048` unless the target machine has less than 4GB in total, in which case
it defaults to 50% of total memory.
`LOCKMEMORY`::
Whether `bootstrap.memory_lock` should be used to try to lock the process
address space into RAM. Defaults to `false`
`UNICASTNODES`::
A comma separated list of hosts in the form `host:port` or `host` to be used for
unicast discovery. Defaults to `""`
`MINIMUMMASTERNODES`::
The minimum number of master-eligible nodes that must be visible
in order to form a cluster. Defaults to `""`
`NETWORKHOST`::
The hostname or IP address to bind the node to and _publish_ (advertise) this
host to other nodes in the cluster. Defaults to `""`
`HTTPPORT`::
The port to use for exposing Elasticsearch APIs over HTTP. Defaults to `9200`
`TRANSPORTPORT`::
The port to use for internal communication between nodes within the cluster.
Defaults to `9300`
`PLUGINS`::
A comma separated list of the plugins to download and install as part of the installation. Defaults to `""`
@ -294,7 +294,7 @@ as _properties_ within Windows Installer documentation) that can be passed to ms
used to bootstrap the cluster and persisted as the `bootstrap.password` setting in the keystore.
Defaults to a randomized value.
`SKIPSETTINGPASSWORDS`::
When installing X-Pack plugin with a `Trial` license and X-Pack Security enabled, whether the
installation should skip setting up the built-in users `elastic`, `kibana` and `logstash_system`.
@ -313,7 +313,7 @@ as _properties_ within Windows Installer documentation) that can be passed to ms
`LOGSTASHSYSTEMUSERPASSWORD`::
When installing X-Pack plugin with a `Trial` license and X-Pack Security enabled, the password
to use for the built-in user `logstash_system`. Defaults to `""`
To pass a value, simply append the property name and value using the format `<PROPERTYNAME>="<VALUE>"` to
the installation command. For example, to use a different installation directory to the default one and to install https://www.elastic.co/products/x-pack[X-Pack]:
@ -324,7 +324,16 @@ start /wait msiexec.exe /i elasticsearch-{version}.msi /qn INSTALLDIR="C:\Custom
--------------------------------------------
Consult the https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988(v=vs.85).aspx[Windows Installer SDK Command-Line Options]
for additional rules related to values containing quotation marks.
ifdef::include-xpack[]
[[msi-installer-enable-indices]]
==== Enable automatic creation of {xpack} indices
{xpack} will try to automatically create a number of indices within {es}.
include::{xes-repo-dir}/setup/xpack-indices.asciidoc[]
endif::include-xpack[]
[[msi-installer-command-line-running]]
==== Running Elasticsearch from the command line
@ -374,10 +383,10 @@ include::check-running.asciidoc[]
Elasticsearch can be installed as a service to run in the background or start
automatically at boot time without any user interaction. This can be achieved upon installation
using the following command line options:
* `INSTALLASSERVICE=true`
* `STARTAFTERINSTALL=true`
* `STARTWHENWINDOWSSTARTS=true`
Once installed, Elasticsearch will appear within the Services control panel:
@ -401,18 +410,18 @@ with PowerShell:
Get-Service Elasticsearch | Stop-Service | Start-Service
--------------------------------------------
Changes can be made to jvm.options and elasticsearch.yml configuration files to configure the
service after installation. Most changes (like JVM settings) will require a restart of the
service in order to take effect.
[[upgrade-msi-gui]]
==== Upgrade using the graphical user interface (GUI)
The `.msi` package supports upgrading an installed version of Elasticsearch to a newer
version of Elasticsearch. The upgrade process through the GUI handles upgrading all
installed plugins as well as retaining both your data and configuration.
Downloading and clicking on a newer version of the `.msi` package will launch the GUI wizard.
The first step will list the read only properties from the previous installation:
[[msi-installer-upgrade-notice]]
@ -423,7 +432,7 @@ The following configuration step allows certain configuration options to be chan
[[msi-installer-upgrade-configuration]]
image::images/msi_installer/msi_installer_upgrade_configuration.png[]
Finally, the plugins step allows currently installed plugins to be upgraded or removed, and
for plugins not currently installed, to be downloaded and installed:
[[msi-installer-upgrade-plugins]]
@ -432,25 +441,25 @@ image::images/msi_installer/msi_installer_upgrade_plugins.png[]
[[upgrade-msi-command-line]]
==== Upgrade using the command line
The `.msi` can also upgrade Elasticsearch using the command line.
[IMPORTANT]
===========================================
A command line upgrade requires passing the **same** command line properties as
used at first install time; the Windows Installer does not remember these properties.
For example, if you originally installed with the command line options `PLUGINS="x-pack"` and
`LOCKMEMORY="true"`, then you must pass these same values when performing an
upgrade from the command line.
The **exception** to this is `INSTALLDIR` (if originally specified), which must be a different directory to the
current installation.
If setting `INSTALLDIR`, the final directory in the path **must** be the version of Elasticsearch e.g.
++C:\Program Files\Elastic\Elasticsearch{backslash}{version}++
===========================================
The simplest upgrade, assuming Elasticsearch was installed using all defaults,
is achieved by first navigating to the download directory, then running:
["source","sh",subs="attributes,callouts"]
@ -471,7 +480,7 @@ start /wait msiexec.exe /i elasticsearch-{version}.msi /qn /l upgrade.log
The `.msi` package handles uninstallation of all directories and files added as part of installation.
WARNING: Uninstallation will remove **all** directories and their contents created as part of
installation, **including data within the data directory**. If you wish to retain your data upon
uninstallation, it is recommended that you make a copy of the data directory before uninstallation.
@ -505,4 +514,4 @@ be passed using the `/l` command line argument
start /wait msiexec.exe /x elasticsearch-{version}.msi /qn /l uninstall.log
--------------------------------------------
include::next-steps.asciidoc[]

View File

@ -70,6 +70,15 @@ cd elasticsearch-{version}/ <2>
endif::[]
ifdef::include-xpack[]
[[zip-targz-enable-indices]]
==== Enable automatic creation of {xpack} indices
{xpack} will try to automatically create a number of indices within {es}.
include::{xes-repo-dir}/setup/xpack-indices.asciidoc[]
endif::include-xpack[]
[[zip-targz-running]]
==== Running Elasticsearch from the command line
@ -197,4 +206,4 @@ directory so that you do not delete important data later on.
|=======================================================================
include::next-steps.asciidoc[]

View File

@ -42,6 +42,15 @@ cd c:\elasticsearch-{version}
endif::[]
ifdef::include-xpack[]
[[windows-enable-indices]]
==== Enable automatic creation of {xpack} indices
{xpack} will try to automatically create a number of indices within {es}.
include::{xes-repo-dir}/setup/xpack-indices.asciidoc[]
endif::include-xpack[]
[[windows-running]]
==== Running Elasticsearch from the command line
@ -268,4 +277,4 @@ directory so that you do not delete important data later on.
|=======================================================================
include::next-steps.asciidoc[]

View File

@ -57,7 +57,11 @@ final class PercolatorMatchedSlotSubFetchPhase implements FetchSubPhase {
@Override
public void hitsExecute(SearchContext context, SearchHit[] hits) throws IOException {
innerHitsExecute(context.query(), context.searcher(), hits);
}
static void innerHitsExecute(Query mainQuery, IndexSearcher indexSearcher, SearchHit[] hits) throws IOException {
List<PercolateQuery> percolateQueries = locatePercolatorQuery(mainQuery);
if (percolateQueries.isEmpty()) {
return;
}
@ -81,11 +85,15 @@ final class PercolatorMatchedSlotSubFetchPhase implements FetchSubPhase {
}
PercolateQuery.QueryStore queryStore = percolateQuery.getQueryStore();
List<LeafReaderContext> ctxs = indexSearcher.getIndexReader().leaves();
for (SearchHit hit : hits) {
LeafReaderContext ctx = ctxs.get(ReaderUtil.subIndex(hit.docId(), ctxs));
int segmentDocId = hit.docId() - ctx.docBase;
Query query = queryStore.getQueries(ctx).apply(segmentDocId);
if (query == null) {
// This is not a document with a percolator field.
continue;
}
TopDocs topDocs = percolatorIndexSearcher.search(query, memoryIndexMaxDoc, new Sort(SortField.FIELD_DOC));
if (topDocs.totalHits == 0) {

View File

@ -55,7 +55,6 @@ import org.elasticsearch.index.search.ESToParentBlockJoinQuery;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
@ -129,7 +128,7 @@ final class QueryAnalyzer {
* @param indexVersion The create version of the index containing the percolator queries.
*/
static Result analyze(Query query, Version indexVersion) {
Class<?> queryClass = query.getClass();
if (queryClass.isAnonymousClass()) {
// Sometimes queries have anonymous classes in that case we need the direct super class.
// (for example blended term query)
@ -311,165 +310,75 @@ final class QueryAnalyzer {
private static BiFunction<Query, Version, Result> booleanQuery() {
return (query, version) -> {
BooleanQuery bq = (BooleanQuery) query;
int minimumShouldMatch = bq.getMinimumNumberShouldMatch();
List<Query> requiredClauses = new ArrayList<>();
List<Query> optionalClauses = new ArrayList<>();
boolean hasProhibitedClauses = false;
for (BooleanClause clause : bq.clauses()) {
if (clause.isRequired()) {
requiredClauses.add(clause.getQuery());
} else if (clause.isProhibited()) {
hasProhibitedClauses = true;
} else {
assert clause.getOccur() == Occur.SHOULD;
optionalClauses.add(clause.getQuery());
}
}
if (minimumShouldMatch > optionalClauses.size()
|| (requiredClauses.isEmpty() && optionalClauses.isEmpty())) {
return new Result(false, Collections.emptySet(), 0);
}
if (requiredClauses.size() > 0) {
if (minimumShouldMatch > 0) {
// mix of required clauses and required optional clauses, we turn it into
// a pure conjunction by moving the optional clauses to a sub query to
// simplify logic
BooleanQuery.Builder minShouldMatchQuery = new BooleanQuery.Builder();
minShouldMatchQuery.setMinimumNumberShouldMatch(minimumShouldMatch);
for (Query q : optionalClauses) {
minShouldMatchQuery.add(q, Occur.SHOULD);
}
requiredClauses.add(minShouldMatchQuery.build());
optionalClauses.clear();
minimumShouldMatch = 0;
} else {
optionalClauses.clear(); // only matter for scoring, not matching
}
}
// Now we have either a pure conjunction or a pure disjunction, with at least one clause
Result result;
if (requiredClauses.size() > 0) {
assert optionalClauses.isEmpty();
assert minimumShouldMatch == 0;
result = handleConjunctionQuery(requiredClauses, version);
} else {
assert requiredClauses.isEmpty();
if (minimumShouldMatch == 0) {
// Lucene always requires one matching clause for disjunctions
minimumShouldMatch = 1;
}
result = handleDisjunctionQuery(optionalClauses, minimumShouldMatch, version);
}
if (hasProhibitedClauses) {
result = result.unverify();
}
return result;
};
}
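To make the normalization above concrete, a hedged sketch using Lucene's public query API (illustrative only; field and term names are made up):

// +must1 +must2 (should1 should2)~1 is rewritten into a pure conjunction:
BooleanQuery.Builder nested = new BooleanQuery.Builder();
nested.setMinimumNumberShouldMatch(1);
nested.add(new TermQuery(new Term("f", "should1")), Occur.SHOULD);
nested.add(new TermQuery(new Term("f", "should2")), Occur.SHOULD);
BooleanQuery.Builder pureConjunction = new BooleanQuery.Builder();
pureConjunction.add(new TermQuery(new Term("f", "must1")), Occur.MUST);
pureConjunction.add(new TermQuery(new Term("f", "must2")), Occur.MUST);
pureConjunction.add(nested.build(), Occur.MUST);
// handleConjunctionQuery then sees three required sub-queries, with the
// minimum_should_match constraint contained inside the nested disjunction.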
private static BiFunction<Query, Version, Result> disjunctionMaxQuery() {
return (query, version) -> {
List<Query> disjuncts = ((DisjunctionMaxQuery) query).getDisjuncts();
if (disjuncts.isEmpty()) {
return new Result(false, Collections.emptySet(), 0);
} else {
return handleDisjunctionQuery(disjuncts, 1, version);
}
};
}
@ -536,19 +445,148 @@ final class QueryAnalyzer {
};
}
private static Result handleConjunctionQuery(List<Query> conjunctions, Version version) {
UnsupportedQueryException uqe = null;
List<Result> results = new ArrayList<>(conjunctions.size());
boolean success = false;
for (Query query : conjunctions) {
try {
Result subResult = analyze(query, version);
if (subResult.isMatchNoDocs()) {
return subResult;
}
results.add(subResult);
success = true;
} catch (UnsupportedQueryException e) {
uqe = e;
}
}
if (success == false) {
// No clauses could be extracted
if (uqe != null) {
throw uqe;
} else {
// Empty conjunction
return new Result(true, Collections.emptySet(), 0);
}
}
Result result = handleConjunction(results, version);
if (uqe != null) {
result = result.unverify();
}
return result;
}
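A hedged illustration of the match-no-docs short-circuit above (QueryAnalyzer.analyze and Result are package-private, so this mirrors what the unit tests exercise rather than a public API):

// Any conjunction containing a match-no-docs clause is itself match-no-docs:
Query conjunction = new BooleanQuery.Builder()
    .add(new TermQuery(new Term("field", "value")), Occur.MUST)
    .add(new MatchNoDocsQuery(), Occur.MUST)
    .build();
Result result = QueryAnalyzer.analyze(conjunction, Version.CURRENT);
// result.isMatchNoDocs() is expected to be true: no extractions and
// matchAllDocs == false, so the query can never match a candidate document.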
private static Result handleConjunction(List<Result> conjunctions, Version version) {
if (conjunctions.isEmpty()) {
throw new IllegalArgumentException("Must have at least on conjunction sub result");
}
if (version.onOrAfter(Version.V_6_1_0)) {
for (Result subResult : conjunctions) {
if (subResult.isMatchNoDocs()) {
return subResult;
}
}
int msm = 0;
boolean verified = true;
boolean matchAllDocs = true;
boolean hasDuplicateTerms = false;
Set<QueryExtraction> extractions = new HashSet<>();
Set<String> seenRangeFields = new HashSet<>();
for (Result result : conjunctions) {
// In case that there are duplicate query extractions we need to be careful with incrementing msm,
// because that could lead to valid matches not becoming candidate matches:
// query: (field:val1 AND field:val2) AND (field:val2 AND field:val3)
// doc: field: val1 val2 val3
// So lets be protective and decrease the msm:
int resultMsm = result.minimumShouldMatch;
for (QueryExtraction queryExtraction : result.extractions) {
if (queryExtraction.range != null) {
// In case of range queries each extraction does not simply increment the minimum_should_match
// for that percolator query like for a term based extraction, so that can lead to more false
// positives for percolator queries with range queries than term based queries.
// This is because of the way number fields are extracted from the document to be percolated.
// Per field a single range is extracted and if a percolator query has two or more range queries
// on the same field, then the minimum should match can be higher than clauses in the CoveringQuery.
// Therefore right now the minimum should match is incremented once per number field when processing
// the percolator query at index time.
if (seenRangeFields.add(queryExtraction.range.fieldName)) {
resultMsm = 1;
} else {
resultMsm = 0;
}
}
if (extractions.contains(queryExtraction)) {
resultMsm = 0;
verified = false;
break;
}
}
msm += resultMsm;
if (result.verified == false
// If some inner extractions are optional, the result can't be verified
|| result.minimumShouldMatch < result.extractions.size()) {
verified = false;
}
matchAllDocs &= result.matchAllDocs;
extractions.addAll(result.extractions);
}
if (matchAllDocs) {
return new Result(matchAllDocs, verified);
} else {
return new Result(verified, extractions, hasDuplicateTerms ? 1 : msm);
}
} else {
Result bestClause = null;
for (Result result : conjunctions) {
bestClause = selectBestResult(result, bestClause);
}
return bestClause;
}
}
private static Result handleDisjunctionQuery(List<Query> disjunctions, int requiredShouldClauses, Version version) {
List<Result> subResults = new ArrayList<>();
for (Query query : disjunctions) {
// if either query fails extraction, we need to propagate as we could miss hits otherwise
Result subResult = analyze(query, version);
subResults.add(subResult);
}
return handleDisjunction(subResults, requiredShouldClauses, version);
}
private static Result handleDisjunction(List<Result> disjunctions, int requiredShouldClauses, Version version) {
// Keep track of the msm for each clause:
List<Integer> clauses = new ArrayList<>(disjunctions.size());
boolean verified;
if (version.before(Version.V_6_1_0)) {
verified &= requiredShouldClauses <= 1;
verified = requiredShouldClauses <= 1;
} else {
verified = true;
}
int numMatchAllClauses = 0;
boolean hasRangeExtractions = false;
// In case there are duplicate extracted terms / ranges then the msm should always be equal to the clause
// with the lowest msm, because at percolate time there is no way to know the number of repetitions per
// extracted term, and a field value from a percolator document may have more 'weight' than others.
// Example percolator query: value1 OR value2 OR value2 OR value3 OR value3 OR value3 OR value4 OR value5 (msm set to 3)
// In the above example query the extracted msm would be 3
// Example document1: value1 value2 value3
// With the msm and extracted terms this would match and is expected behaviour
// Example document2: value3
// This document should match too (value3 appears in 3 clauses), but with msm set to 3 and the fact
// that only distinct values are indexed in the extracted terms field, this document would
// never match.
boolean hasDuplicateTerms = false;
Set<QueryExtraction> terms = new HashSet<>();
for (int i = 0; i < disjunctions.size(); i++) {
Result subResult = disjunctions.get(i);
if (subResult.verified == false
// one of the sub queries requires more than one term to match, we can't
// verify it with a single top-level min_should_match
@ -564,35 +602,37 @@ final class QueryAnalyzer {
int resultMsm = subResult.minimumShouldMatch;
for (QueryExtraction extraction : subResult.extractions) {
if (terms.add(extraction) == false) {
verified = false;
hasDuplicateTerms = true;
}
}
if (hasRangeExtractions == false) {
hasRangeExtractions = subResult.extractions.stream().anyMatch(qe -> qe.range != null);
}
clauses.add(resultMsm);
}
boolean matchAllDocs = numMatchAllClauses > 0 && numMatchAllClauses >= requiredShouldClauses;
int msm = 0;
if (version.onOrAfter(Version.V_6_1_0) &&
// Having ranges would mean we need to juggle with the msm and that complicates this logic a lot,
// so for now lets not do it.
hasRangeExtractions == false) {
// Figure out what the combined msm is for this disjunction:
// (sum the lowest required clauses, otherwise we're too strict and queries may not match)
clauses = clauses.stream()
.filter(val -> val > 0)
.sorted()
.collect(Collectors.toList());
// When there are duplicated query extractions, percolator can no longer reliably determine msm across this disjunction
if (hasDuplicateTerms) {
// pick lowest msm:
msm = clauses.get(0);
} else {
int limit = Math.min(clauses.size(), Math.max(1, requiredShouldClauses));
for (int i = 0; i < limit; i++) {
msm += clauses.get(i);
}
}
} else {
@ -605,17 +645,6 @@ final class QueryAnalyzer {
}
}
/**
* Return an extraction for the conjunction of {@code result1} and {@code result2}
* by picking up clauses that look most restrictive and making it unverified if
@ -753,6 +782,10 @@ final class QueryAnalyzer {
return this;
}
}
boolean isMatchNoDocs() {
return matchAllDocs == false && extractions.isEmpty();
}
}
static class QueryExtraction {

View File

@ -38,9 +38,12 @@ import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexableField;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.MultiDocValues;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.NoMergePolicy;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queries.BlendedTermQuery;
import org.apache.lucene.queries.CommonTermsQuery;
@ -318,6 +321,103 @@ public class CandidateQueryTests extends ESSingleNodeTestCase {
return builder.build();
}
public void testDuel2() throws Exception {
List<String> stringValues = new ArrayList<>();
stringValues.add("value1");
stringValues.add("value2");
stringValues.add("value3");
MappedFieldType intFieldType = mapperService.documentMapper("type").mappers()
.getMapper("int_field").fieldType();
List<int[]> ranges = new ArrayList<>();
ranges.add(new int[]{-5, 5});
ranges.add(new int[]{0, 10});
ranges.add(new int[]{15, 50});
List<ParseContext.Document> documents = new ArrayList<>();
{
addQuery(new TermQuery(new Term("string_field", randomFrom(stringValues))), documents);
}
{
addQuery(new PhraseQuery(0, "string_field", stringValues.toArray(new String[0])), documents);
}
{
int[] range = randomFrom(ranges);
Query rangeQuery = intFieldType.rangeQuery(range[0], range[1], true, true, null, null, null, null);
addQuery(rangeQuery, documents);
}
{
int numBooleanQueries = randomIntBetween(1, 5);
for (int i = 0; i < numBooleanQueries; i++) {
Query randomBQ = randomBQ(1, stringValues, ranges, intFieldType);
addQuery(randomBQ, documents);
}
}
{
addQuery(new MatchNoDocsQuery(), documents);
}
{
addQuery(new MatchAllDocsQuery(), documents);
}
indexWriter.addDocuments(documents);
indexWriter.close();
directoryReader = DirectoryReader.open(directory);
IndexSearcher shardSearcher = newSearcher(directoryReader);
// Disable query cache, because ControlQuery cannot be cached...
shardSearcher.setQueryCache(null);
Document document = new Document();
for (String value : stringValues) {
document.add(new TextField("string_field", value, Field.Store.NO));
logger.info("Test with document: {}" + document);
MemoryIndex memoryIndex = MemoryIndex.fromDocument(document, new WhitespaceAnalyzer());
duelRun(queryStore, memoryIndex, shardSearcher);
}
for (int[] range : ranges) {
List<Field> numberFields =
NumberFieldMapper.NumberType.INTEGER.createFields("int_field", between(range[0], range[1]), true, true, false);
for (Field numberField : numberFields) {
document.add(numberField);
}
logger.info("Test with document: {}" + document);
MemoryIndex memoryIndex = MemoryIndex.fromDocument(document, new WhitespaceAnalyzer());
duelRun(queryStore, memoryIndex, shardSearcher);
}
}
private BooleanQuery randomBQ(int depth, List<String> stringValues, List<int[]> ranges, MappedFieldType intFieldType) {
final int numClauses = randomIntBetween(1, 4);
final boolean onlyShouldClauses = randomBoolean();
final BooleanQuery.Builder builder = new BooleanQuery.Builder();
int numShouldClauses = 0;
for (int i = 0; i < numClauses; i++) {
Query subQuery;
if (randomBoolean() && depth <= 3) {
subQuery = randomBQ(depth + 1, stringValues, ranges, intFieldType);
} else if (randomBoolean()) {
int[] range = randomFrom(ranges);
subQuery = intFieldType.rangeQuery(range[0], range[1], true, true, null, null, null, null);
} else {
subQuery = new TermQuery(new Term("string_field", randomFrom(stringValues)));
}
Occur occur;
if (onlyShouldClauses) {
occur = Occur.SHOULD;
} else {
occur = randomFrom(Arrays.asList(Occur.FILTER, Occur.MUST, Occur.SHOULD));
}
if (occur == Occur.SHOULD) {
numShouldClauses++;
}
builder.add(subQuery, occur);
}
builder.setMinimumNumberShouldMatch(randomIntBetween(0, numShouldClauses));
return builder.build();
}
public void testDuelIdBased() throws Exception {
List<Function<String, Query>> queryFunctions = new ArrayList<>();
queryFunctions.add((id) -> new PrefixQuery(new Term("field", id)));
@ -858,6 +958,90 @@ public class CandidateQueryTests extends ESSingleNodeTestCase {
assertEquals(1, topDocs.scoreDocs[1].doc);
}
public void testDuplicatedClauses2() throws Exception {
List<ParseContext.Document> docs = new ArrayList<>();
BooleanQuery.Builder builder = new BooleanQuery.Builder();
builder.setMinimumNumberShouldMatch(3);
builder.add(new TermQuery(new Term("field", "value1")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value2")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value2")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value3")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value3")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value3")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value4")), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value5")), Occur.SHOULD);
addQuery(builder.build(), docs);
indexWriter.addDocuments(docs);
indexWriter.close();
directoryReader = DirectoryReader.open(directory);
IndexSearcher shardSearcher = newSearcher(directoryReader);
shardSearcher.setQueryCache(null);
Version v = Version.CURRENT;
List<BytesReference> sources = Collections.singletonList(new BytesArray("{}"));
MemoryIndex memoryIndex = new MemoryIndex();
memoryIndex.addField("field", "value1 value4 value5", new WhitespaceAnalyzer());
IndexSearcher percolateSearcher = memoryIndex.createSearcher();
PercolateQuery query = (PercolateQuery) fieldType.percolateQuery("_name", queryStore, sources, percolateSearcher, v);
TopDocs topDocs = shardSearcher.search(query, 10, new Sort(SortField.FIELD_DOC), true, true);
assertEquals(1L, topDocs.totalHits);
assertEquals(0, topDocs.scoreDocs[0].doc);
memoryIndex = new MemoryIndex();
memoryIndex.addField("field", "value1 value2", new WhitespaceAnalyzer());
percolateSearcher = memoryIndex.createSearcher();
query = (PercolateQuery) fieldType.percolateQuery("_name", queryStore, sources, percolateSearcher, v);
topDocs = shardSearcher.search(query, 10, new Sort(SortField.FIELD_DOC), true, true);
assertEquals(1L, topDocs.totalHits);
assertEquals(0, topDocs.scoreDocs[0].doc);
memoryIndex = new MemoryIndex();
memoryIndex.addField("field", "value3", new WhitespaceAnalyzer());
percolateSearcher = memoryIndex.createSearcher();
query = (PercolateQuery) fieldType.percolateQuery("_name", queryStore, sources, percolateSearcher, v);
topDocs = shardSearcher.search(query, 10, new Sort(SortField.FIELD_DOC), true, true);
assertEquals(1L, topDocs.totalHits);
assertEquals(0, topDocs.scoreDocs[0].doc);
}
public void testMsmAndRanges_disjunction() throws Exception {
// Recreates a similar scenario that made testDuel() fail randomly:
// https://github.com/elastic/elasticsearch/issues/29393
List<ParseContext.Document> docs = new ArrayList<>();
BooleanQuery.Builder builder = new BooleanQuery.Builder();
builder.setMinimumNumberShouldMatch(2);
BooleanQuery.Builder builder1 = new BooleanQuery.Builder();
builder1.add(new TermQuery(new Term("field", "value1")), Occur.FILTER);
builder.add(builder1.build(), Occur.SHOULD);
builder.add(new TermQuery(new Term("field", "value2")), Occur.MUST_NOT);
builder.add(IntPoint.newRangeQuery("int_field", 0, 5), Occur.SHOULD);
builder.add(IntPoint.newRangeQuery("int_field", 6, 10), Occur.SHOULD);
addQuery(builder.build(), docs);
indexWriter.addDocuments(docs);
indexWriter.close();
directoryReader = DirectoryReader.open(directory);
IndexSearcher shardSearcher = newSearcher(directoryReader);
shardSearcher.setQueryCache(null);
Version v = Version.CURRENT;
List<BytesReference> sources = Collections.singletonList(new BytesArray("{}"));
Document document = new Document();
document.add(new IntPoint("int_field", 4));
document.add(new IntPoint("int_field", 7));
MemoryIndex memoryIndex = MemoryIndex.fromDocument(document, new WhitespaceAnalyzer());
IndexSearcher percolateSearcher = memoryIndex.createSearcher();
PercolateQuery query = (PercolateQuery) fieldType.percolateQuery("_name", queryStore, sources, percolateSearcher, v);
TopDocs topDocs = shardSearcher.search(query, 10, new Sort(SortField.FIELD_DOC), true, true);
assertEquals(1L, topDocs.totalHits);
assertEquals(0, topDocs.scoreDocs[0].doc);
}
private void duelRun(PercolateQuery.QueryStore queryStore, MemoryIndex memoryIndex, IndexSearcher shardSearcher) throws IOException {
boolean requireScore = randomBoolean();
IndexSearcher percolateSearcher = memoryIndex.createSearcher();
@ -900,7 +1084,17 @@ public class CandidateQueryTests extends ESSingleNodeTestCase {
// Additional stored information that is useful when debugging:
String queryToString = shardSearcher.doc(controlTopDocs.scoreDocs[i].doc).get("query_to_string");
logger.error("topDocs.scoreDocs[{}].query_to_string={}", i, queryToString);
logger.error("controlTopDocs.scoreDocs[{}].query_to_string={}", i, queryToString);
TermsEnum tenum = MultiFields.getFields(shardSearcher.getIndexReader()).terms(fieldType.queryTermsField.name()).iterator();
StringBuilder builder = new StringBuilder();
for (BytesRef term = tenum.next(); term != null; term = tenum.next()) {
PostingsEnum penum = tenum.postings(null);
if (penum.advance(controlTopDocs.scoreDocs[i].doc) == controlTopDocs.scoreDocs[i].doc) {
builder.append(term.utf8ToString()).append(',');
}
}
logger.error("controlTopDocs.scoreDocs[{}].query_terms_field={}", i, builder.toString());
NumericDocValues numericValues =
MultiDocValues.getNumericValues(shardSearcher.getIndexReader(), fieldType.minimumShouldMatchField.name());

View File

@ -877,7 +877,7 @@ public class PercolatorFieldMapperTests extends ESSingleNodeTestCase {
assertThat(values.get(1), equalTo("field\0value2"));
assertThat(values.get(2), equalTo("field\0value3"));
int msm = doc.rootDoc().getFields(fieldType.minimumShouldMatchField.name())[0].numericValue().intValue();
assertThat(msm, equalTo(2));
qb = boolQuery()
.must(boolQuery().must(termQuery("field", "value1")).must(termQuery("field", "value2")))
@ -901,7 +901,7 @@ public class PercolatorFieldMapperTests extends ESSingleNodeTestCase {
assertThat(values.get(3), equalTo("field\0value4"));
assertThat(values.get(4), equalTo("field\0value5"));
msm = doc.rootDoc().getFields(fieldType.minimumShouldMatchField.name())[0].numericValue().intValue();
assertThat(msm, equalTo(2));
qb = boolQuery()
.minimumShouldMatch(3)

View File

@ -18,15 +18,82 @@
*/
package org.elasticsearch.percolator;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.RandomIndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.FixedBitSet;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.test.ESTestCase;
import java.util.Collections;
import java.util.stream.IntStream;
public class PercolatorMatchedSlotSubFetchPhaseTests extends ESTestCase {
public void testHitsExecute() throws Exception {
try (Directory directory = newDirectory()) {
// Need a one doc index:
try (RandomIndexWriter indexWriter = new RandomIndexWriter(random(), directory)) {
Document document = new Document();
indexWriter.addDocument(document);
}
try (DirectoryReader reader = DirectoryReader.open(directory)) {
IndexSearcher indexSearcher = new IndexSearcher(reader);
// A match:
{
SearchHit[] hits = new SearchHit[]{new SearchHit(0)};
PercolateQuery.QueryStore queryStore = ctx -> docId -> new TermQuery(new Term("field", "value"));
MemoryIndex memoryIndex = new MemoryIndex();
memoryIndex.addField("field", "value", new WhitespaceAnalyzer());
PercolateQuery percolateQuery = new PercolateQuery("_name", queryStore, Collections.emptyList(),
new MatchAllDocsQuery(), memoryIndex.createSearcher(), new MatchNoDocsQuery());
PercolatorMatchedSlotSubFetchPhase.innerHitsExecute(percolateQuery, indexSearcher, hits);
assertNotNull(hits[0].field(PercolatorMatchedSlotSubFetchPhase.FIELD_NAME_PREFIX));
assertEquals(0, (int) hits[0].field(PercolatorMatchedSlotSubFetchPhase.FIELD_NAME_PREFIX).getValue());
}
// No match:
{
SearchHit[] hits = new SearchHit[]{new SearchHit(0)};
PercolateQuery.QueryStore queryStore = ctx -> docId -> new TermQuery(new Term("field", "value"));
MemoryIndex memoryIndex = new MemoryIndex();
memoryIndex.addField("field", "value1", new WhitespaceAnalyzer());
PercolateQuery percolateQuery = new PercolateQuery("_name", queryStore, Collections.emptyList(),
new MatchAllDocsQuery(), memoryIndex.createSearcher(), new MatchNoDocsQuery());
PercolatorMatchedSlotSubFetchPhase.innerHitsExecute(percolateQuery, indexSearcher, hits);
assertNull(hits[0].field(PercolatorMatchedSlotSubFetchPhase.FIELD_NAME_PREFIX));
}
// No query:
{
SearchHit[] hits = new SearchHit[]{new SearchHit(0)};
PercolateQuery.QueryStore queryStore = ctx -> docId -> null;
MemoryIndex memoryIndex = new MemoryIndex();
memoryIndex.addField("field", "value", new WhitespaceAnalyzer());
PercolateQuery percolateQuery = new PercolateQuery("_name", queryStore, Collections.emptyList(),
new MatchAllDocsQuery(), memoryIndex.createSearcher(), new MatchNoDocsQuery());
PercolatorMatchedSlotSubFetchPhase.innerHitsExecute(percolateQuery, indexSearcher, hits);
assertNull(hits[0].field(PercolatorMatchedSlotSubFetchPhase.FIELD_NAME_PREFIX));
}
}
}
}
public void testConvertTopDocsToSlots() {
ScoreDoc[] scoreDocs = new ScoreDoc[randomInt(128)];
for (int i = 0; i < scoreDocs.length; i++) {

View File

@ -65,6 +65,7 @@ import org.elasticsearch.test.ESTestCase;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
@ -175,7 +176,7 @@ public class QueryAnalyzerTests extends ESTestCase {
assertTermsEqual(result.extractions, new Term("_field", "_term1"), new Term("_field", "_term2"));
assertEquals(1, result.minimumShouldMatch); // because of the dup term
}
public void testExtractQueryMetadata_booleanQuery() {
BooleanQuery.Builder builder = new BooleanQuery.Builder();
@ -538,10 +539,8 @@ public class QueryAnalyzerTests extends ESTestCase {
builder.setMinimumNumberShouldMatch(1);
result = analyze(builder.build(), Version.CURRENT);
assertThat("Must clause is exact, but m_s_m is 1 so one should clause must match too", result.verified, is(false));
assertThat(result.minimumShouldMatch, equalTo(2));
assertTermsEqual(result.extractions, termQuery1.getTerm(), termQuery2.getTerm(), termQuery3.getTerm());
builder = new BooleanQuery.Builder();
BooleanQuery.Builder innerBuilder = new BooleanQuery.Builder();
@ -1297,7 +1296,7 @@ public class QueryAnalyzerTests extends ESTestCase {
boolQuery.add(LongPoint.newRangeQuery("_field2", 10, 15), BooleanClause.Occur.SHOULD);
result = analyze(boolQuery.build(), Version.CURRENT);
assertFalse(result.verified);
assertThat(result.minimumShouldMatch, equalTo(1));
assertEquals(2, result.extractions.size());
assertEquals("_field2", new ArrayList<>(result.extractions).get(0).range.fieldName);
assertEquals("_field1", new ArrayList<>(result.extractions).get(1).range.fieldName);
@ -1337,9 +1336,9 @@ public class QueryAnalyzerTests extends ESTestCase {
BooleanClause.Occur.MUST
);
Result result = analyze(builder.build(), Version.CURRENT);
assertThat(result.verified, is(false));
assertThat(result.matchAllDocs, is(false));
assertThat(result.minimumShouldMatch, equalTo(2));
assertTermsEqual(result.extractions, new Term("field", "value1"), new Term("field", "value2"),
new Term("field", "value3"), new Term("field", "value4"));
@ -1373,6 +1372,21 @@ public class QueryAnalyzerTests extends ESTestCase {
new Term("field", "value3"), new Term("field", "value4"));
}
public void testEmptyQueries() {
BooleanQuery.Builder builder = new BooleanQuery.Builder();
Result result = analyze(builder.build(), Version.CURRENT);
assertThat(result.verified, is(false));
assertThat(result.matchAllDocs, is(false));
assertThat(result.minimumShouldMatch, equalTo(0));
assertThat(result.extractions.size(), equalTo(0));
result = analyze(new DisjunctionMaxQuery(Collections.emptyList(), 0f), Version.CURRENT);
assertThat(result.verified, is(false));
assertThat(result.matchAllDocs, is(false));
assertThat(result.minimumShouldMatch, equalTo(0));
assertThat(result.extractions.size(), equalTo(0));
}
private static void assertDimension(byte[] expected, Consumer<byte[]> consumer) {
byte[] dest = new byte[expected.length];
consumer.accept(dest);

View File

@ -27,7 +27,7 @@ import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.slice.SliceBuilder;
import org.elasticsearch.tasks.TaskId;
@ -127,7 +127,7 @@ class BulkByScrollParallelizationHelper {
LeaderBulkByScrollTaskState worker = task.getLeaderState();
int totalSlices = worker.getSlices();
TaskId parentTaskId = new TaskId(localNodeId, task.getId());
for (final SearchRequest slice : sliceIntoSubRequests(request.getSearchRequest(), IdFieldMapper.NAME, totalSlices)) {
// TODO move the request to the correct node. maybe here or somehow do it as part of startup for reindex in general....
Request requestForSlice = request.forSlice(parentTaskId, slice, totalSlices);
ActionListener<BulkByScrollResponse> sliceListener = ActionListener.wrap(
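For context, a hedged sketch of what each sliced sub request carries after this change (illustrative; the index name and slice counts are made up):

SearchRequest slice = new SearchRequest("source-index");
slice.source(new SearchSourceBuilder()
    .slice(new SliceBuilder(IdFieldMapper.NAME, 0 /* slice id */, 5 /* total slices */)));
// sliceIntoSubRequests produces one such request per slice, now keyed on _id
// instead of _uid.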

View File

@ -42,8 +42,8 @@ setup:
---
"Sliced scroll":
- skip:
version: " - 5.3.0"
reason: Prior version uses a random seed per node to compute the hash of the keys.
version: " - 6.99.99"
reason: Disabled until pr 29353 is backported
- do:
search:

View File

@ -99,18 +99,13 @@ public class Queries {
.build();
}
static boolean isNegativeQuery(Query q) {
if (!(q instanceof BooleanQuery)) {
return false;
}
List<BooleanClause> clauses = ((BooleanQuery) q).clauses();
return clauses.isEmpty() == false &&
clauses.stream().allMatch(BooleanClause::isProhibited);
}
public static Query fixNegativeQueryIfNeeded(Query q) {
@ -120,7 +115,7 @@ public class Queries {
for (BooleanClause clause : bq) {
builder.add(clause);
}
builder.add(newMatchAllQuery(), BooleanClause.Occur.FILTER);
return builder.build();
}
return q;
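A hedged usage sketch (not from this commit) showing why the switch from MUST to FILTER matters: the rewritten query still matches every non-prohibited document, but FILTER clauses do not score, so purely negative queries now return `0` rather than `1`, matching the breaking-changes note above:

Query negative = new BooleanQuery.Builder()
    .add(new TermQuery(new Term("field", "excluded")), BooleanClause.Occur.MUST_NOT)
    .build();
Query fixed = Queries.fixNegativeQueryIfNeeded(negative);
// fixed keeps the original MUST_NOT clause and adds a match-all FILTER clause,
// which matches without contributing to the score.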

View File

@ -24,13 +24,8 @@ import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.joda.time.Period;
import org.joda.time.PeriodType;
import org.joda.time.format.PeriodFormat;
import org.joda.time.format.PeriodFormatter;
import java.io.IOException;
import java.util.Collections;
@ -240,19 +235,6 @@ public class TimeValue implements Writeable, Comparable<TimeValue>, ToXContentFr
return daysFrac();
}
private final PeriodFormatter defaultFormatter = PeriodFormat.getDefault()
.withParseType(PeriodType.standard());
public String format() {
Period period = new Period(millis());
return defaultFormatter.print(period);
}
public String format(PeriodType type) {
Period period = new Period(millis());
return PeriodFormat.getDefault().withParseType(type).print(period);
}
/**
* Returns a {@link String} representation of the current {@link TimeValue}.
*

View File

@ -75,16 +75,6 @@ public class LocaleUtils {
return locale;
}
/**
* Parse the string describing a locale into a {@link Locale} object
* for 5.x indices.
*/
@Deprecated
public static Locale parse5x(String localeStr) {
final String[] parts = localeStr.split("_", -1);
return parseParts(parts);
}
private static Locale parseParts(String[] parts) {
switch (parts.length) {
case 3:

View File

@ -38,6 +38,9 @@ public enum VersionType implements Writeable {
if (expectedVersion == Versions.MATCH_DELETED) {
return "document already exists (current version [" + currentVersion + "])";
}
if (currentVersion == Versions.NOT_FOUND) {
return "document does not exist (expected version [" + expectedVersion + "])";
}
return "current version [" + currentVersion + "] is different than the one provided [" + expectedVersion + "]";
}
@ -48,6 +51,9 @@ public enum VersionType implements Writeable {
@Override
public String explainConflictForReads(long currentVersion, long expectedVersion) {
if (currentVersion == Versions.NOT_FOUND) {
return "document does not exist (expected version [" + expectedVersion + "])";
}
return "current version [" + currentVersion + "] is different than the one provided [" + expectedVersion + "]";
}
@ -123,6 +129,9 @@ public enum VersionType implements Writeable {
@Override
public String explainConflictForReads(long currentVersion, long expectedVersion) {
if (currentVersion == Versions.NOT_FOUND) {
return "document does not exist (expected version [" + expectedVersion + "])";
}
return "current version [" + currentVersion + "] is different than the one provided [" + expectedVersion + "]";
}
@ -178,6 +187,9 @@ public enum VersionType implements Writeable {
@Override
public String explainConflictForReads(long currentVersion, long expectedVersion) {
if (currentVersion == Versions.NOT_FOUND) {
return "document does not exist (expected version [" + expectedVersion + "])";
}
return "current version [" + currentVersion + "] is different than the one provided [" + expectedVersion + "]";
}
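
A short sketch of the messages the new branch produces (matching the assertions added to VersionTypeTests later in this change):

VersionType.INTERNAL.explainConflictForReads(Versions.NOT_FOUND, 2);
// -> "document does not exist (expected version [2])"
VersionType.INTERNAL.explainConflictForReads(1, 2);
// -> "current version [1] is different than the one provided [2]"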

View File

@ -33,7 +33,6 @@ import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.Version;
import org.elasticsearch.common.Explicit;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.geo.ShapeRelation;
@ -159,13 +158,7 @@ public class DateFieldMapper extends FieldMapper {
builder.ignoreMalformed(TypeParsers.nodeBooleanValue(name, "ignore_malformed", propNode, parserContext));
iterator.remove();
} else if (propName.equals("locale")) {
Locale locale;
if (parserContext.indexVersionCreated().onOrAfter(Version.V_6_0_0_beta2)) {
locale = LocaleUtils.parse(propNode.toString());
} else {
locale = LocaleUtils.parse5x(propNode.toString());
}
builder.locale(locale);
builder.locale(LocaleUtils.parse(propNode.toString()));
iterator.remove();
} else if (propName.equals("format")) {
builder.dateTimeFormatter(parseDateTimeFormatter(propNode));

View File

@ -161,7 +161,7 @@ public class IdFieldMapper extends MetadataFieldMapper {
@Override
public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName) {
if (indexOptions() == IndexOptions.NONE) {
throw new IllegalArgumentException("Fielddata access on the _uid field is disallowed");
throw new IllegalArgumentException("Fielddata access on the _id field is disallowed");
}
final IndexFieldData.Builder fieldDataBuilder = new PagedBytesIndexFieldData.Builder(
TextFieldMapper.Defaults.FIELDDATA_MIN_FREQUENCY,

View File

@ -186,13 +186,7 @@ public class RangeFieldMapper extends FieldMapper {
builder.coerce(TypeParsers.nodeBooleanValue(name, "coerce", propNode, parserContext));
iterator.remove();
} else if (propName.equals("locale")) {
Locale locale;
if (parserContext.indexVersionCreated().onOrAfter(Version.V_6_0_0_beta2)) {
locale = LocaleUtils.parse(propNode.toString());
} else {
locale = LocaleUtils.parse5x(propNode.toString());
}
builder.locale(locale);
builder.locale(LocaleUtils.parse(propNode.toString()));
iterator.remove();
} else if (propName.equals("format")) {
builder.dateTimeFormatter(parseDateTimeFormatter(propNode));

View File

@ -164,16 +164,20 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
return tragedy;
}
private synchronized void closeWithTragicEvent(Exception exception) throws IOException {
assert exception != null;
private synchronized void closeWithTragicEvent(final Exception ex) {
assert ex != null;
if (tragedy == null) {
tragedy = exception;
} else if (tragedy != exception) {
tragedy = ex;
} else if (tragedy != ex) {
// it should be safe to call closeWithTragicEvent on multiple layers without
// worrying about self-suppression.
tragedy.addSuppressed(exception);
tragedy.addSuppressed(ex);
}
try {
close();
} catch (final IOException | RuntimeException e) {
ex.addSuppressed(e);
}
close();
}
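// A minimal sketch of the resulting caller idiom (illustrative; the concrete
// callers are updated in the hunks below). Because a failure of close() is now
// suppressed into the original exception inside closeWithTragicEvent, callers
// no longer need a nested try/catch of their own:
//
//     try {
//         someFailingWrite();          // throws ex
//     } catch (final Exception ex) {
//         closeWithTragicEvent(ex);    // a close() failure becomes ex.addSuppressed(...)
//         throw ex;                    // ex.getSuppressed() still carries it
//     }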
/**
@ -194,11 +198,7 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
try {
data.writeTo(outputStream);
} catch (final Exception ex) {
try {
closeWithTragicEvent(ex);
} catch (final Exception inner) {
ex.addSuppressed(inner);
}
closeWithTragicEvent(ex);
throw ex;
}
totalOffset += data.length();
@ -290,13 +290,9 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
synchronized (this) {
try {
sync(); // sync before we close..
} catch (IOException e) {
try {
closeWithTragicEvent(e);
} catch (Exception inner) {
e.addSuppressed(inner);
}
throw e;
} catch (final Exception ex) {
closeWithTragicEvent(ex);
throw ex;
}
if (closed.compareAndSet(false, true)) {
return new TranslogReader(getLastSyncedCheckpoint(), channel, path, getFirstOperationOffset());
@ -346,12 +342,8 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
try {
outputStream.flush();
checkpointToSync = getCheckpoint();
} catch (Exception ex) {
try {
closeWithTragicEvent(ex);
} catch (Exception inner) {
ex.addSuppressed(inner);
}
} catch (final Exception ex) {
closeWithTragicEvent(ex);
throw ex;
}
}
@ -360,12 +352,8 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
try {
channel.force(false);
writeCheckpoint(channelFactory, path.getParent(), checkpointToSync);
} catch (Exception ex) {
try {
closeWithTragicEvent(ex);
} catch (Exception inner) {
ex.addSuppressed(inner);
}
} catch (final Exception ex) {
closeWithTragicEvent(ex);
throw ex;
}
assert lastSyncedCheckpoint.offset <= checkpointToSync.offset :
@ -392,13 +380,9 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
}
}
}
} catch (final IOException e) {
try {
closeWithTragicEvent(e);
} catch (final IOException inner) {
e.addSuppressed(inner);
}
throw e;
} catch (final Exception ex) {
closeWithTragicEvent(ex);
throw ex;
}
// we don't need a lock here because we only write ahead to the file, so all writes have been completed
// for the requested location.
@ -451,12 +435,8 @@ public class TranslogWriter extends BaseTranslogReader implements Closeable {
try {
ensureOpen();
super.flush();
} catch (Exception ex) {
try {
closeWithTragicEvent(ex);
} catch (Exception inner) {
ex.addSuppressed(inner);
}
} catch (final Exception ex) {
closeWithTragicEvent(ex);
throw ex;
}
}

View File

@ -210,7 +210,7 @@ public class SearchService extends AbstractLifecycleComponent implements IndexEv
if (defaultKeepAlive.millis() > maxKeepAlive.millis()) {
throw new IllegalArgumentException("Default keep alive setting for scroll [" + DEFAULT_KEEPALIVE_SETTING.getKey() + "]" +
" should be smaller than max keep alive [" + MAX_KEEPALIVE_SETTING.getKey() + "], " +
"was (" + defaultKeepAlive.format() + " > " + maxKeepAlive.format() + ")");
"was (" + defaultKeepAlive + " > " + maxKeepAlive + ")");
}
}
@ -673,8 +673,8 @@ public class SearchService extends AbstractLifecycleComponent implements IndexEv
private void contextScrollKeepAlive(SearchContext context, long keepAlive) throws IOException {
if (keepAlive > maxKeepAlive) {
throw new IllegalArgumentException(
"Keep alive for scroll (" + TimeValue.timeValueMillis(keepAlive).format() + ") is too large. " +
"It must be less than (" + TimeValue.timeValueMillis(maxKeepAlive).format() + "). " +
"Keep alive for scroll (" + TimeValue.timeValueMillis(keepAlive) + ") is too large. " +
"It must be less than (" + TimeValue.timeValueMillis(maxKeepAlive) + "). " +
"This limit can be set by changing the [" + MAX_KEEPALIVE_SETTING.getKey() + "] cluster level setting.");
}
context.keepAlive(keepAlive);

View File

@ -26,7 +26,11 @@ import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.CheckedFunction;
import org.elasticsearch.index.fielddata.SortedBinaryDocValues;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.StringFieldType;
import org.elasticsearch.index.mapper.TextFieldMapper;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import java.io.IOException;
@ -40,8 +44,8 @@ class BinaryValuesSource extends SingleDimensionValuesSource<BytesRef> {
private BytesRef currentValue;
BinaryValuesSource(MappedFieldType fieldType, CheckedFunction<LeafReaderContext, SortedBinaryDocValues, IOException> docValuesFunc,
int size, int reverseMul) {
super(fieldType, size, reverseMul);
DocValueFormat format, Object missing, int size, int reverseMul) {
super(format, fieldType, missing, size, reverseMul);
this.docValuesFunc = docValuesFunc;
this.values = new BytesRef[size];
}
@ -72,10 +76,8 @@ class BinaryValuesSource extends SingleDimensionValuesSource<BytesRef> {
@Override
void setAfter(Comparable<?> value) {
if (value.getClass() == BytesRef.class) {
afterValue = (BytesRef) value;
} else if (value.getClass() == String.class) {
afterValue = new BytesRef((String) value);
if (value.getClass() == String.class) {
afterValue = format.parseBytesRef(value.toString());
} else {
throw new IllegalArgumentException("invalid value, expected string, got " + value.getClass().getSimpleName());
}
@ -120,7 +122,8 @@ class BinaryValuesSource extends SingleDimensionValuesSource<BytesRef> {
@Override
SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query) {
if (checkIfSortedDocsIsApplicable(reader, fieldType) == false ||
(query != null && query.getClass() != MatchAllDocsQuery.class)) {
fieldType instanceof StringFieldType == false ||
(query != null && query.getClass() != MatchAllDocsQuery.class)) {
return null;
}
return new TermsSortedDocsProducer(fieldType.name());
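
The additional StringFieldType check keeps the terms-based producer away from fields whose terms are binary-encoded rather than plain strings. A minimal sketch, mirroring the new assertions in SingleDimensionValuesSourceTests later in this change:

MappedFieldType ip = new IpFieldMapper.IpFieldType();
ip.setName("ip");
// an ip field stores encoded terms, so
// createSortedDocsProducerOrNull(reader, null) now returns null for it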

View File

@ -263,30 +263,84 @@ final class CompositeAggregator extends BucketsAggregator {
final int reverseMul = configs[i].reverseMul();
if (configs[i].valuesSource() instanceof ValuesSource.Bytes.WithOrdinals && reader instanceof DirectoryReader) {
ValuesSource.Bytes.WithOrdinals vs = (ValuesSource.Bytes.WithOrdinals) configs[i].valuesSource();
sources[i] = new GlobalOrdinalValuesSource(bigArrays, configs[i].fieldType(), vs::globalOrdinalsValues, size, reverseMul);
sources[i] = new GlobalOrdinalValuesSource(
bigArrays,
configs[i].fieldType(),
vs::globalOrdinalsValues,
configs[i].format(),
configs[i].missing(),
size,
reverseMul
);
if (i == 0 && sources[i].createSortedDocsProducerOrNull(reader, query) != null) {
// this is the leading source and we can optimize it with the sorted docs producer, but
// we don't want to use global ordinals because the number of visited documents
// should be low and global ordinals need one lookup per visited term.
Releasables.close(sources[i]);
sources[i] = new BinaryValuesSource(configs[i].fieldType(), vs::bytesValues, size, reverseMul);
sources[i] = new BinaryValuesSource(
configs[i].fieldType(),
vs::bytesValues,
configs[i].format(),
configs[i].missing(),
size,
reverseMul
);
}
} else if (configs[i].valuesSource() instanceof ValuesSource.Bytes) {
ValuesSource.Bytes vs = (ValuesSource.Bytes) configs[i].valuesSource();
sources[i] = new BinaryValuesSource(configs[i].fieldType(), vs::bytesValues, size, reverseMul);
sources[i] = new BinaryValuesSource(
configs[i].fieldType(),
vs::bytesValues,
configs[i].format(),
configs[i].missing(),
size,
reverseMul
);
} else if (configs[i].valuesSource() instanceof ValuesSource.Numeric) {
final ValuesSource.Numeric vs = (ValuesSource.Numeric) configs[i].valuesSource();
if (vs.isFloatingPoint()) {
sources[i] = new DoubleValuesSource(bigArrays, configs[i].fieldType(), vs::doubleValues, size, reverseMul);
sources[i] = new DoubleValuesSource(
bigArrays,
configs[i].fieldType(),
vs::doubleValues,
configs[i].format(),
configs[i].missing(),
size,
reverseMul
);
} else {
if (vs instanceof RoundingValuesSource) {
sources[i] = new LongValuesSource(bigArrays, configs[i].fieldType(), vs::longValues,
((RoundingValuesSource) vs)::round, configs[i].format(), size, reverseMul);
sources[i] = new LongValuesSource(
bigArrays,
configs[i].fieldType(),
vs::longValues,
((RoundingValuesSource) vs)::round,
configs[i].format(),
configs[i].missing(),
size,
reverseMul
);
} else {
sources[i] = new LongValuesSource(bigArrays, configs[i].fieldType(), vs::longValues,
(value) -> value, configs[i].format(), size, reverseMul);
sources[i] = new LongValuesSource(
bigArrays,
configs[i].fieldType(),
vs::longValues,
(value) -> value,
configs[i].format(),
configs[i].missing(),
size,
reverseMul
);
}
}
} else {
throw new IllegalArgumentException("Unknown value source: " + configs[i].valuesSource().getClass().getName() +
" for field: " + sources[i].fieldType.name());
}
}
return sources;

View File

@ -291,6 +291,7 @@ public abstract class CompositeValuesSourceBuilder<AB extends CompositeValuesSou
public final CompositeValuesSourceConfig build(SearchContext context) throws IOException {
ValuesSourceConfig<?> config = ValuesSourceConfig.resolve(context.getQueryShardContext(),
valueType, field, script, missing, null, format);
if (config.unmapped() && field != null && config.missing() == null) {
// this source cannot produce any values so we refuse to build
// since composite buckets are not created on null values

View File

@ -32,13 +32,25 @@ class CompositeValuesSourceConfig {
private final ValuesSource vs;
private final DocValueFormat format;
private final int reverseMul;
private final Object missing;
CompositeValuesSourceConfig(String name, @Nullable MappedFieldType fieldType, ValuesSource vs, DocValueFormat format, SortOrder order) {
/**
* Creates a new {@link CompositeValuesSourceConfig}.
* @param name The name of the source.
* @param fieldType The field type or null if the source is a script.
* @param vs The underlying {@link ValuesSource}.
* @param format The {@link DocValueFormat} of this source.
* @param order The sort order associated with this source.
* @param missing The missing value or null if documents with a missing value should be ignored.
*/
CompositeValuesSourceConfig(String name, @Nullable MappedFieldType fieldType, ValuesSource vs, DocValueFormat format,
SortOrder order, @Nullable Object missing) {
this.name = name;
this.fieldType = fieldType;
this.vs = vs;
this.format = format;
this.reverseMul = order == SortOrder.ASC ? 1 : -1;
this.missing = missing;
}
/**
@ -70,6 +82,13 @@ class CompositeValuesSourceConfig {
return format;
}
/**
* The missing value for this configuration, or null if documents with a missing value should be ignored.
*/
Object missing() {
return missing;
}
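// A minimal sketch of how a missing value is used (it assumes the missing(Object)
// setter on the composite source builders that backs the missing() reads in this
// change):
//
//     TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder("product")
//         .field("product")
//         .missing("N/A"); // documents without "product" land in the "N/A" bucket
//
// Note that a non-null missing value also disables the sorted-docs early
// termination (see SingleDimensionValuesSource#checkIfSortedDocsIsApplicable
// later in this change).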
/**
* The sort order for the values source (e.g. -1 for descending and 1 for ascending).
*/

View File

@ -33,7 +33,6 @@ import org.elasticsearch.script.Script;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;
import org.elasticsearch.search.aggregations.support.FieldContext;
import org.elasticsearch.search.aggregations.support.ValueType;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;
@ -227,7 +226,7 @@ public class DateHistogramValuesSourceBuilder extends CompositeValuesSourceBuild
// is specified in the builder.
final DocValueFormat docValueFormat = format() == null ? DocValueFormat.RAW : config.format();
final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;
return new CompositeValuesSourceConfig(name, fieldType, vs, docValueFormat, order());
return new CompositeValuesSourceConfig(name, fieldType, vs, docValueFormat, order(), missing());
} else {
throw new IllegalArgumentException("invalid source, expected numeric, got " + orig.getClass().getSimpleName());
}

View File

@ -28,6 +28,7 @@ import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.DoubleArray;
import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import java.io.IOException;
@ -42,8 +43,8 @@ class DoubleValuesSource extends SingleDimensionValuesSource<Double> {
DoubleValuesSource(BigArrays bigArrays, MappedFieldType fieldType,
CheckedFunction<LeafReaderContext, SortedNumericDoubleValues, IOException> docValuesFunc,
int size, int reverseMul) {
super(fieldType, size, reverseMul);
DocValueFormat format, Object missing, int size, int reverseMul) {
super(format, fieldType, missing, size, reverseMul);
this.docValuesFunc = docValuesFunc;
this.values = bigArrays.newDoubleArray(size, false);
}
@ -77,7 +78,9 @@ class DoubleValuesSource extends SingleDimensionValuesSource<Double> {
if (value instanceof Number) {
afterValue = ((Number) value).doubleValue();
} else {
afterValue = Double.parseDouble(value.toString());
afterValue = format.parseDouble(value.toString(), false, () -> {
throw new IllegalArgumentException("now() is not supported in [after] key");
});
}
}

View File

@ -30,6 +30,8 @@ import org.elasticsearch.common.lease.Releasables;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.LongArray;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.StringFieldType;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import java.io.IOException;
@ -52,8 +54,8 @@ class GlobalOrdinalValuesSource extends SingleDimensionValuesSource<BytesRef> {
GlobalOrdinalValuesSource(BigArrays bigArrays,
MappedFieldType type, CheckedFunction<LeafReaderContext, SortedSetDocValues, IOException> docValuesFunc,
int size, int reverseMul) {
super(type, size, reverseMul);
DocValueFormat format, Object missing, int size, int reverseMul) {
super(format, type, missing, size, reverseMul);
this.docValuesFunc = docValuesFunc;
this.values = bigArrays.newLongArray(size, false);
}
@ -87,10 +89,8 @@ class GlobalOrdinalValuesSource extends SingleDimensionValuesSource<BytesRef> {
@Override
void setAfter(Comparable<?> value) {
if (value instanceof BytesRef) {
afterValue = (BytesRef) value;
} else if (value instanceof String) {
afterValue = new BytesRef(value.toString());
if (value.getClass() == String.class) {
afterValue = format.parseBytesRef(value.toString());
} else {
throw new IllegalArgumentException("invalid value, expected string, got " + value.getClass().getSimpleName());
}
@ -164,7 +164,8 @@ class GlobalOrdinalValuesSource extends SingleDimensionValuesSource<BytesRef> {
@Override
SortedDocsProducer createSortedDocsProducerOrNull(IndexReader reader, Query query) {
if (checkIfSortedDocsIsApplicable(reader, fieldType) == false ||
(query != null && query.getClass() != MatchAllDocsQuery.class)) {
fieldType instanceof StringFieldType == false ||
(query != null && query.getClass() != MatchAllDocsQuery.class)) {
return null;
}
return new TermsSortedDocsProducer(fieldType.name());

View File

@ -115,7 +115,7 @@ public class HistogramValuesSourceBuilder extends CompositeValuesSourceBuilder<H
ValuesSource.Numeric numeric = (ValuesSource.Numeric) orig;
final HistogramValuesSource vs = new HistogramValuesSource(numeric, interval);
final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;
return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order());
return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order(), missing());
} else {
throw new IllegalArgumentException("invalid source, expected numeric, got " + orig.getClass().getSimpleName());
}

View File

@ -47,19 +47,16 @@ import java.util.function.ToLongFunction;
class LongValuesSource extends SingleDimensionValuesSource<Long> {
private final CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc;
private final LongUnaryOperator rounding;
// handles "format" for date histogram source
private final DocValueFormat format;
private final LongArray values;
private long currentValue;
LongValuesSource(BigArrays bigArrays, MappedFieldType fieldType,
CheckedFunction<LeafReaderContext, SortedNumericDocValues, IOException> docValuesFunc,
LongUnaryOperator rounding, DocValueFormat format, int size, int reverseMul) {
super(fieldType, size, reverseMul);
LongUnaryOperator rounding, DocValueFormat format, Object missing, int size, int reverseMul) {
super(format, fieldType, missing, size, reverseMul);
this.docValuesFunc = docValuesFunc;
this.rounding = rounding;
this.format = format;
this.values = bigArrays.newLongArray(size, false);
}

View File

@ -26,6 +26,7 @@ import org.apache.lucene.search.Query;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.lease.Releasable;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.LeafBucketCollector;
import org.elasticsearch.search.sort.SortOrder;
@ -35,21 +36,31 @@ import java.io.IOException;
* A source that can record and compare values of similar type.
*/
abstract class SingleDimensionValuesSource<T extends Comparable<T>> implements Releasable {
protected final DocValueFormat format;
@Nullable
protected final MappedFieldType fieldType;
@Nullable
protected final Object missing;
protected final int size;
protected final int reverseMul;
protected T afterValue;
@Nullable
protected MappedFieldType fieldType;
/**
* Ctr
* Creates a new {@link SingleDimensionValuesSource}.
*
* @param fieldType The fieldType associated with the source.
* @param format The format of the source.
* @param fieldType The field type or null if the source is a script.
* @param missing The missing value or null if documents with a missing value should be ignored.
* @param size The number of values to record.
* @param reverseMul -1 if the natural order ({@link SortOrder#ASC}) should be reversed.
*/
SingleDimensionValuesSource(@Nullable MappedFieldType fieldType, int size, int reverseMul) {
SingleDimensionValuesSource(DocValueFormat format, @Nullable MappedFieldType fieldType, @Nullable Object missing,
int size, int reverseMul) {
this.format = format;
this.fieldType = fieldType;
this.missing = missing;
this.size = size;
this.reverseMul = reverseMul;
this.afterValue = null;
@ -127,6 +138,7 @@ abstract class SingleDimensionValuesSource<T extends Comparable<T>> implements R
*/
protected boolean checkIfSortedDocsIsApplicable(IndexReader reader, MappedFieldType fieldType) {
if (fieldType == null ||
missing != null ||
fieldType.indexOptions() == IndexOptions.NONE ||
// inverse of the natural order
reverseMul == -1) {

View File

@ -24,7 +24,9 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.support.ValuesSource;
import org.elasticsearch.search.aggregations.support.ValuesSourceConfig;
import org.elasticsearch.search.internal.SearchContext;
@ -84,6 +86,13 @@ public class TermsValuesSourceBuilder extends CompositeValuesSourceBuilder<Terms
vs = ValuesSource.Numeric.EMPTY;
}
final MappedFieldType fieldType = config.fieldContext() != null ? config.fieldContext().fieldType() : null;
return new CompositeValuesSourceConfig(name, fieldType, vs, config.format(), order());
final DocValueFormat format;
if (format() == null && fieldType instanceof DateFieldMapper.DateFieldType) {
// defaults to the raw format on date fields (preserves timestamps as longs).
format = DocValueFormat.RAW;
} else {
format = config.format();
}
return new CompositeValuesSourceConfig(name, fieldType, vs, format, order(), missing());
}
}
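
With no explicit format, a terms source over a date field now keeps DocValueFormat.RAW, so composite keys come back as epoch-millis longs and can be fed back verbatim as the after key. A minimal sketch, mirroring the new testWithDateTerms case later in this change:

TermsValuesSourceBuilder date = new TermsValuesSourceBuilder("date").field("date");
CompositeAggregationBuilder agg =
    new CompositeAggregationBuilder("name", Collections.singletonList(date));
// bucket keys look like {date=1508479764000} rather than formatted date strings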

View File

@ -22,11 +22,14 @@ package org.elasticsearch.search.slice;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;
import org.elasticsearch.Version;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -53,6 +56,9 @@ import java.util.Objects;
* {@link org.elasticsearch.search.slice.DocValuesSliceQuery} is used to filter the results.
*/
public class SliceBuilder implements Writeable, ToXContentObject {
private static final DeprecationLogger DEPRECATION_LOG = new DeprecationLogger(Loggers.getLogger(SliceBuilder.class));
public static final ParseField FIELD_FIELD = new ParseField("field");
public static final ParseField ID_FIELD = new ParseField("id");
public static final ParseField MAX_FIELD = new ParseField("max");
@ -66,7 +72,7 @@ public class SliceBuilder implements Writeable, ToXContentObject {
}
/** Name of the field to slice against (_id by default) */
private String field = UidFieldMapper.NAME;
private String field = IdFieldMapper.NAME;
/** The id of the slice */
private int id = -1;
/** Max number of slices */
@ -75,7 +81,7 @@ public class SliceBuilder implements Writeable, ToXContentObject {
private SliceBuilder() {}
public SliceBuilder(int id, int max) {
this(UidFieldMapper.NAME, id, max);
this(IdFieldMapper.NAME, id, max);
}
/**
@ -91,14 +97,23 @@ public class SliceBuilder implements Writeable, ToXContentObject {
}
public SliceBuilder(StreamInput in) throws IOException {
this.field = in.readString();
String field = in.readString();
if (UidFieldMapper.NAME.equals(field) && in.getVersion().before(Version.V_6_3_0)) {
// This is safe because _id and _uid are handled the same way in #toFilter
field = IdFieldMapper.NAME;
}
this.field = field;
this.id = in.readVInt();
this.max = in.readVInt();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(field);
if (IdFieldMapper.NAME.equals(field) && out.getVersion().before(Version.V_6_3_0)) {
out.writeString(UidFieldMapper.NAME);
} else {
out.writeString(field);
}
out.writeVInt(id);
out.writeVInt(max);
}
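// A minimal sketch of the resulting wire compatibility (illustrative; it mirrors
// the testSerializationBackcompat case added to SliceBuilderTests below). The
// default field (_id) is written as _uid to pre-6.3 nodes and translated back on
// read, so both sides agree:
//
//     SliceBuilder slice = new SliceBuilder(1, 5);  // field defaults to _id
//     BytesStreamOutput out = new BytesStreamOutput();
//     out.setVersion(Version.V_6_2_0);              // pre-6.3 peer
//     slice.writeTo(out);                           // _uid goes on the wire
//     StreamInput in = out.bytes().streamInput();
//     in.setVersion(Version.V_6_2_0);
//     assert new SliceBuilder(in).equals(slice);    // field is _id again on read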
@ -201,6 +216,14 @@ public class SliceBuilder implements Writeable, ToXContentObject {
if (context.getIndexSettings().isSingleType()) {
// on new indices, the _id acts as a _uid
field = IdFieldMapper.NAME;
DEPRECATION_LOG.deprecated("Computing slices on the [_uid] field is deprecated for 6.x indices, use [_id] instead");
}
useTermQuery = true;
} else if (IdFieldMapper.NAME.equals(field)) {
if (context.getIndexSettings().isSingleType() == false) {
// on old indices, we need _uid. We maintain this so that users
// can use _id to slice even if they still have 5.x indices.
field = UidFieldMapper.NAME;
}
useTermQuery = true;
} else if (type.hasDocValues() == false) {

View File

@ -41,12 +41,6 @@ public abstract class BaseRandomBinaryDocValuesRangeQueryTestCase extends BaseRa
// Can't test this the way BaseRangeFieldQueryTestCase works now, because we're using BinaryDocValuesField here.
}
@Override
public void testRandomBig() throws Exception {
// Test regardless whether -Dtests.nightly=true has been specified:
super.testRandomBig();
}
@Override
protected final Field newRangeField(Range box) {
AbstractRange<?> testRange = (AbstractRange<?>) box;

View File

@ -120,9 +120,9 @@ public class UUIDTests extends ESTestCase {
Logger logger = Loggers.getLogger(UUIDTests.class);
// Low number so that the test runs quickly, but the results are more interesting with larger numbers
// of indexed documents
assertThat(testCompression(500000, 10000, 3, logger), Matchers.lessThan(12d)); // ~10.8 in practice
assertThat(testCompression(500000, 1000, 3, logger), Matchers.lessThan(14d)); // ~11.5 in practice
assertThat(testCompression(500000, 100, 3, logger), Matchers.lessThan(21d)); // ~19.5 in practice
assertThat(testCompression(100000, 10000, 3, logger), Matchers.lessThan(14d)); // ~12 in practice
assertThat(testCompression(100000, 1000, 3, logger), Matchers.lessThan(15d)); // ~13 in practice
assertThat(testCompression(100000, 100, 3, logger), Matchers.lessThan(21d)); // ~20 in practice
}
private static double testCompression(int numDocs, int numDocsPerSecond, int numNodes, Logger logger) throws Exception {

View File

@ -17,10 +17,16 @@
* under the License.
*/
package org.apache.lucene.search;
package org.elasticsearch.common.lucene.search;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.DocValuesFieldExistsQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TermQuery;
import org.elasticsearch.Version;
import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.index.mapper.SeqNoFieldMapper;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.VersionUtils;
@ -43,4 +49,25 @@ public class QueriesTests extends ESTestCase {
}
}
public void testIsNegativeQuery() {
assertFalse(Queries.isNegativeQuery(new MatchAllDocsQuery()));
assertFalse(Queries.isNegativeQuery(new BooleanQuery.Builder().build()));
assertFalse(Queries.isNegativeQuery(new BooleanQuery.Builder()
.add(new TermQuery(new Term("foo", "bar")), Occur.MUST).build()));
assertTrue(Queries.isNegativeQuery(new BooleanQuery.Builder()
.add(new TermQuery(new Term("foo", "bar")), Occur.MUST_NOT).build()));
assertFalse(Queries.isNegativeQuery(new BooleanQuery.Builder()
.add(new MatchAllDocsQuery(), Occur.MUST)
.add(new MatchAllDocsQuery(), Occur.MUST_NOT).build()));
}
public void testFixNegativeQuery() {
assertEquals(new BooleanQuery.Builder()
.add(new MatchAllDocsQuery(), Occur.FILTER)
.add(new TermQuery(new Term("foo", "bar")), Occur.MUST_NOT).build(),
Queries.fixNegativeQueryIfNeeded(
new BooleanQuery.Builder()
.add(new TermQuery(new Term("foo", "bar")), Occur.MUST_NOT)
.build()));
}
}

View File

@ -57,13 +57,6 @@ public class TimeValueTests extends ESTestCase {
assertThat("1000d", equalTo(new TimeValue(1000, TimeUnit.DAYS).toString()));
}
public void testFormat() {
assertThat(new TimeValue(1025, TimeUnit.MILLISECONDS).format(PeriodType.dayTime()), equalTo("1 second and 25 milliseconds"));
assertThat(new TimeValue(1, TimeUnit.MINUTES).format(PeriodType.dayTime()), equalTo("1 minute"));
assertThat(new TimeValue(65, TimeUnit.MINUTES).format(PeriodType.dayTime()), equalTo("1 hour and 5 minutes"));
assertThat(new TimeValue(24 * 600 + 85, TimeUnit.MINUTES).format(PeriodType.dayTime()), equalTo("241 hours and 25 minutes"));
}
public void testMinusOne() {
assertThat(new TimeValue(-1).nanos(), lessThan(0L));
}

View File

@ -35,6 +35,11 @@ public class VersionTypeTests extends ESTestCase {
assertFalse(VersionType.INTERNAL.isVersionConflictForWrites(Versions.NOT_FOUND, Versions.MATCH_ANY, randomBoolean()));
assertFalse(VersionType.INTERNAL.isVersionConflictForReads(Versions.NOT_FOUND, Versions.MATCH_ANY));
assertEquals("current version [1] is different than the one provided [2]",
VersionType.INTERNAL.explainConflictForReads(1, 2));
assertEquals("document does not exist (expected version [2])",
VersionType.INTERNAL.explainConflictForReads(Versions.NOT_FOUND, 2));
// deletes
assertFalse(VersionType.INTERNAL.isVersionConflictForWrites(Versions.NOT_FOUND, Versions.MATCH_DELETED, true));
assertFalse(VersionType.INTERNAL.isVersionConflictForWrites(10, Versions.MATCH_DELETED, true));
@ -70,6 +75,11 @@ public class VersionTypeTests extends ESTestCase {
assertTrue(VersionType.EXTERNAL.validateVersionForReads(randomIntBetween(1, Integer.MAX_VALUE)));
assertFalse(VersionType.EXTERNAL.validateVersionForReads(randomIntBetween(Integer.MIN_VALUE, -1)));
assertEquals("current version [1] is different than the one provided [2]",
VersionType.EXTERNAL.explainConflictForReads(1, 2));
assertEquals("document does not exist (expected version [2])",
VersionType.EXTERNAL.explainConflictForReads(Versions.NOT_FOUND, 2));
assertTrue(VersionType.EXTERNAL_GTE.validateVersionForWrites(randomIntBetween(1, Integer.MAX_VALUE)));
assertFalse(VersionType.EXTERNAL_GTE.validateVersionForWrites(Versions.MATCH_ANY));
assertFalse(VersionType.EXTERNAL_GTE.validateVersionForWrites(randomIntBetween(Integer.MIN_VALUE, 0)));
@ -77,6 +87,11 @@ public class VersionTypeTests extends ESTestCase {
assertTrue(VersionType.EXTERNAL_GTE.validateVersionForReads(randomIntBetween(1, Integer.MAX_VALUE)));
assertFalse(VersionType.EXTERNAL_GTE.validateVersionForReads(randomIntBetween(Integer.MIN_VALUE, -1)));
assertEquals("current version [1] is different than the one provided [2]",
VersionType.EXTERNAL_GTE.explainConflictForReads(1, 2));
assertEquals("document does not exist (expected version [2])",
VersionType.EXTERNAL_GTE.explainConflictForReads(Versions.NOT_FOUND, 2));
assertTrue(VersionType.INTERNAL.validateVersionForWrites(randomIntBetween(1, Integer.MAX_VALUE)));
assertTrue(VersionType.INTERNAL.validateVersionForWrites(Versions.MATCH_ANY));
assertFalse(VersionType.INTERNAL.validateVersionForWrites(randomIntBetween(Integer.MIN_VALUE, 0)));

View File

@ -22,6 +22,7 @@ package org.elasticsearch.search.aggregations.bucket.composite;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.DoublePoint;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.InetAddressPoint;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
@ -41,6 +42,7 @@ import org.apache.lucene.util.NumericUtils;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.index.mapper.ContentPath;
import org.elasticsearch.index.mapper.DateFieldMapper;
import org.elasticsearch.index.mapper.IpFieldMapper;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.Mapper;
@ -62,6 +64,7 @@ import org.junit.After;
import org.junit.Before;
import java.io.IOException;
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
@ -87,7 +90,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
@Before
public void setUp() throws Exception {
super.setUp();
FIELD_TYPES = new MappedFieldType[6];
FIELD_TYPES = new MappedFieldType[7];
FIELD_TYPES[0] = new KeywordFieldMapper.KeywordFieldType();
FIELD_TYPES[0].setName("keyword");
FIELD_TYPES[0].setHasDocValues(true);
@ -113,6 +116,10 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
FIELD_TYPES[5] = new KeywordFieldMapper.KeywordFieldType();
FIELD_TYPES[5].setName("terms");
FIELD_TYPES[5].setHasDocValues(true);
FIELD_TYPES[6] = new IpFieldMapper.IpFieldType();
FIELD_TYPES[6].setName("ip");
FIELD_TYPES[6].setHasDocValues(true);
}
@Override
@ -812,6 +819,47 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
);
}
public void testWithDateTerms() throws IOException {
final List<Map<String, List<Object>>> dataset = new ArrayList<>();
dataset.addAll(
Arrays.asList(
createDocument("date", asLong("2017-10-20T03:08:45")),
createDocument("date", asLong("2016-09-20T09:00:34")),
createDocument("date", asLong("2016-09-20T11:34:00")),
createDocument("date", asLong("2017-10-20T06:09:24")),
createDocument("date", asLong("2017-10-19T06:09:24")),
createDocument("long", 4L)
)
);
testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery("date"),
LongPoint.newRangeQuery(
"date",
asLong("2016-09-20T09:00:34"),
asLong("2017-10-20T06:09:24")
)), dataset,
() -> {
TermsValuesSourceBuilder histo = new TermsValuesSourceBuilder("date")
.field("date");
return new CompositeAggregationBuilder("name", Collections.singletonList(histo));
},
(result) -> {
assertEquals(5, result.getBuckets().size());
assertEquals("{date=1508479764000}", result.afterKey().toString());
assertThat(result.getBuckets().get(0).getKey().get("date"), instanceOf(Long.class));
assertEquals("{date=1474362034000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1474371240000}", result.getBuckets().get(1).getKeyAsString());
assertEquals(1L, result.getBuckets().get(1).getDocCount());
assertEquals("{date=1508393364000}", result.getBuckets().get(2).getKeyAsString());
assertEquals(1L, result.getBuckets().get(2).getDocCount());
assertEquals("{date=1508468925000}", result.getBuckets().get(3).getKeyAsString());
assertEquals(1L, result.getBuckets().get(3).getDocCount());
assertEquals("{date=1508479764000}", result.getBuckets().get(4).getKeyAsString());
assertEquals(1L, result.getBuckets().get(4).getDocCount());
}
);
}
public void testWithDateHistogramAndFormat() throws IOException {
final List<Map<String, List<Object>>> dataset = new ArrayList<>();
dataset.addAll(
@ -1437,6 +1485,51 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
assertEquals(expected, seen);
}
public void testWithIP() throws Exception {
final List<Map<String, List<Object>>> dataset = new ArrayList<>();
dataset.addAll(
Arrays.asList(
createDocument("ip", InetAddress.getByName("127.0.0.1")),
createDocument("ip", InetAddress.getByName("192.168.0.1")),
createDocument("ip", InetAddress.getByName("::1")),
createDocument("ip", InetAddress.getByName("::1")),
createDocument("ip", InetAddress.getByName("192.168.0.1"))
)
);
testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery("ip")), dataset,
() -> {
TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder("ip")
.field("ip");
return new CompositeAggregationBuilder("name", Collections.singletonList(terms));
}, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{ip=192.168.0.1}", result.afterKey().toString());
assertEquals("{ip=::1}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{ip=127.0.0.1}", result.getBuckets().get(1).getKeyAsString());
assertEquals(1L, result.getBuckets().get(1).getDocCount());
assertEquals("{ip=192.168.0.1}", result.getBuckets().get(2).getKeyAsString());
assertEquals(2L, result.getBuckets().get(2).getDocCount());
}
);
testSearchCase(Arrays.asList(new MatchAllDocsQuery(), new DocValuesFieldExistsQuery("ip")), dataset,
() -> {
TermsValuesSourceBuilder terms = new TermsValuesSourceBuilder("ip")
.field("ip");
return new CompositeAggregationBuilder("name", Collections.singletonList(terms))
.aggregateAfter(Collections.singletonMap("ip", "::1"));
}, (result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{ip=192.168.0.1}", result.afterKey().toString());
assertEquals("{ip=127.0.0.1}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{ip=192.168.0.1}", result.getBuckets().get(1).getKeyAsString());
assertEquals(2L, result.getBuckets().get(1).getDocCount());
}
);
}
private void testSearchCase(List<Query> queries,
List<Map<String, List<Object>>> dataset,
Supplier<CompositeAggregationBuilder> create,
@ -1491,6 +1584,9 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
} else if (value instanceof String) {
doc.add(new SortedSetDocValuesField(name, new BytesRef((String) value)));
doc.add(new StringField(name, new BytesRef((String) value), Field.Store.NO));
} else if (value instanceof InetAddress) {
doc.add(new SortedSetDocValuesField(name, new BytesRef(InetAddressPoint.encode((InetAddress) value))));
doc.add(new InetAddressPoint(name, (InetAddress) value));
} else {
throw new AssertionError("invalid object: " + value.getClass().getSimpleName());
}

View File

@ -212,20 +212,28 @@ public class CompositeValuesCollectorQueueTests extends AggregatorTestCase {
if (types[i].clazz == Long.class) {
sources[i] = new LongValuesSource(bigArrays, fieldType,
context -> DocValues.getSortedNumeric(context.reader(), fieldType.name()), value -> value,
DocValueFormat.RAW, size, 1);
DocValueFormat.RAW, null, size, 1);
} else if (types[i].clazz == Double.class) {
sources[i] = new DoubleValuesSource(bigArrays, fieldType,
sources[i] = new DoubleValuesSource(
bigArrays, fieldType,
context -> FieldData.sortableLongBitsToDoubles(DocValues.getSortedNumeric(context.reader(), fieldType.name())),
size, 1);
DocValueFormat.RAW, null, size, 1
);
} else if (types[i].clazz == BytesRef.class) {
if (forceMerge) {
// we don't create global ordinals but we test this mode when the reader has a single segment
// since ordinals are global in this case.
sources[i] = new GlobalOrdinalValuesSource(bigArrays, fieldType,
context -> DocValues.getSortedSet(context.reader(), fieldType.name()), size, 1);
sources[i] = new GlobalOrdinalValuesSource(
bigArrays, fieldType,
context -> DocValues.getSortedSet(context.reader(), fieldType.name()),
DocValueFormat.RAW, null, size, 1
);
} else {
sources[i] = new BinaryValuesSource(fieldType,
context -> FieldData.toString(DocValues.getSortedSet(context.reader(), fieldType.name())), size, 1);
sources[i] = new BinaryValuesSource(
fieldType,
context -> FieldData.toString(DocValues.getSortedSet(context.reader(), fieldType.name())),
DocValueFormat.RAW, null, size, 1
);
}
} else {
assert(false);

View File

@ -25,6 +25,7 @@ import org.apache.lucene.index.Term;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TermQuery;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.index.mapper.IpFieldMapper;
import org.elasticsearch.index.mapper.KeywordFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.NumberFieldMapper;
@ -38,28 +39,100 @@ public class SingleDimensionValuesSourceTests extends ESTestCase {
public void testBinarySorted() {
MappedFieldType keyword = new KeywordFieldMapper.KeywordFieldType();
keyword.setName("keyword");
BinaryValuesSource source = new BinaryValuesSource(keyword, context -> null, 1, 1);
BinaryValuesSource source = new BinaryValuesSource(
keyword,
context -> null,
DocValueFormat.RAW,
null,
1,
1
);
assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));
IndexReader reader = mockIndexReader(1, 1);
assertNotNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));
assertNotNull(source.createSortedDocsProducerOrNull(reader, null));
assertNull(source.createSortedDocsProducerOrNull(reader,
new TermQuery(new Term("keyword", "toto)"))));
source = new BinaryValuesSource(keyword, context -> null, 0, -1);
source = new BinaryValuesSource(
keyword,
context -> null,
DocValueFormat.RAW,
"missing_value",
1,
1
);
assertNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));
assertNull(source.createSortedDocsProducerOrNull(reader, null));
source = new BinaryValuesSource(
keyword,
context -> null,
DocValueFormat.RAW,
null,
0,
-1
);
assertNull(source.createSortedDocsProducerOrNull(reader, null));
MappedFieldType ip = new IpFieldMapper.IpFieldType();
ip.setName("ip");
source = new BinaryValuesSource(ip, context -> null, DocValueFormat.RAW, null, 1, 1);
assertNull(source.createSortedDocsProducerOrNull(reader, null));
}
public void testGlobalOrdinalsSorted() {
MappedFieldType keyword = new KeywordFieldMapper.KeywordFieldType();
final MappedFieldType keyword = new KeywordFieldMapper.KeywordFieldType();
keyword.setName("keyword");
BinaryValuesSource source = new BinaryValuesSource(keyword, context -> null, 1, 1);
GlobalOrdinalValuesSource source = new GlobalOrdinalValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
keyword, context -> null,
DocValueFormat.RAW,
null,
1,
1
);
assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));
IndexReader reader = mockIndexReader(1, 1);
assertNotNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));
assertNotNull(source.createSortedDocsProducerOrNull(reader, null));
assertNull(source.createSortedDocsProducerOrNull(reader,
new TermQuery(new Term("keyword", "toto)"))));
source = new BinaryValuesSource(keyword, context -> null, 1, -1);
source = new GlobalOrdinalValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
keyword,
context -> null,
DocValueFormat.RAW,
"missing_value",
1,
1
);
assertNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));
assertNull(source.createSortedDocsProducerOrNull(reader, null));
source = new GlobalOrdinalValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
keyword,
context -> null,
DocValueFormat.RAW,
null,
1,
-1
);
assertNull(source.createSortedDocsProducerOrNull(reader, null));
final MappedFieldType ip = new IpFieldMapper.IpFieldType();
ip.setName("ip");
source = new GlobalOrdinalValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
ip,
context -> null,
DocValueFormat.RAW,
null,
1,
1
);
assertNull(source.createSortedDocsProducerOrNull(reader, null));
}
@ -72,23 +145,62 @@ public class SingleDimensionValuesSourceTests extends ESTestCase {
numberType == NumberFieldMapper.NumberType.SHORT ||
numberType == NumberFieldMapper.NumberType.INTEGER ||
numberType == NumberFieldMapper.NumberType.LONG) {
source = new LongValuesSource(BigArrays.NON_RECYCLING_INSTANCE,
number, context -> null, value -> value, DocValueFormat.RAW, 1, 1);
source = new LongValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
number,
context -> null,
value -> value,
DocValueFormat.RAW,
null,
1,
1
);
assertNull(source.createSortedDocsProducerOrNull(mockIndexReader(100, 49), null));
IndexReader reader = mockIndexReader(1, 1);
assertNotNull(source.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));
assertNotNull(source.createSortedDocsProducerOrNull(reader, null));
assertNotNull(source.createSortedDocsProducerOrNull(reader, LongPoint.newRangeQuery("number", 0, 1)));
assertNull(source.createSortedDocsProducerOrNull(reader, new TermQuery(new Term("keyword", "toto)"))));
LongValuesSource sourceRev =
new LongValuesSource(BigArrays.NON_RECYCLING_INSTANCE,
number, context -> null, value -> value, DocValueFormat.RAW, 1, -1);
LongValuesSource sourceWithMissing = new LongValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
number,
context -> null,
value -> value,
DocValueFormat.RAW,
0d,
1,
1);
assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, new MatchAllDocsQuery()));
assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, null));
assertNull(sourceWithMissing.createSortedDocsProducerOrNull(reader, new TermQuery(new Term("keyword", "toto)"))));
LongValuesSource sourceRev = new LongValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
number,
context -> null,
value -> value,
DocValueFormat.RAW,
null,
1,
-1
);
assertNull(sourceRev.createSortedDocsProducerOrNull(reader, null));
} else if (numberType == NumberFieldMapper.NumberType.HALF_FLOAT ||
numberType == NumberFieldMapper.NumberType.FLOAT ||
numberType == NumberFieldMapper.NumberType.DOUBLE) {
source = new DoubleValuesSource(BigArrays.NON_RECYCLING_INSTANCE,
number, context -> null, 1, 1);
numberType == NumberFieldMapper.NumberType.FLOAT ||
numberType == NumberFieldMapper.NumberType.DOUBLE) {
source = new DoubleValuesSource(
BigArrays.NON_RECYCLING_INSTANCE,
number,
context -> null,
DocValueFormat.RAW,
null,
1,
1
);
IndexReader reader = mockIndexReader(1, 1);
assertNull(source.createSortedDocsProducerOrNull(reader, null));
} else {
throw new AssertionError("missing type: " + numberType.typeName());
}

View File

@ -19,6 +19,7 @@
package org.elasticsearch.search.aggregations.pipeline.moving.avg;
import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequestBuilder;
@ -67,6 +68,7 @@ import static org.hamcrest.Matchers.is;
import static org.hamcrest.core.IsNull.notNullValue;
import static org.hamcrest.core.IsNull.nullValue;
@LuceneTestCase.AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/29456")
@ESIntegTestCase.SuiteScopeTestCase
public class MovAvgIT extends ESIntegTestCase {
private static final String INTERVAL_FIELD = "l_value";
@ -600,6 +602,7 @@ public class MovAvgIT extends ESIntegTestCase {
}
}
public void testHoltWintersValuedField() {
SearchResponse response = client()
.prepareSearch("idx").setTypes("type")
@ -1292,8 +1295,8 @@ public class MovAvgIT extends ESIntegTestCase {
assertThat("[_count] movavg should be NaN, but is ["+countMovAvg.value()+"] instead", countMovAvg.value(), equalTo(Double.NaN));
} else {
assertThat("[_count] movavg is null", countMovAvg, notNullValue());
assertTrue("[_count] movavg does not match expected [" + countMovAvg.value() + " vs " + expectedCount + "]",
nearlyEqual(countMovAvg.value(), expectedCount, 0.1));
assertEquals("[_count] movavg does not match expected [" + countMovAvg.value() + " vs " + expectedCount + "]",
countMovAvg.value(), expectedCount, 0.1);
}
// This is a gap bucket
@ -1304,29 +1307,8 @@ public class MovAvgIT extends ESIntegTestCase {
assertThat("[value] movavg should be NaN, but is ["+valuesMovAvg.value()+"] instead", valuesMovAvg.value(), equalTo(Double.NaN));
} else {
assertThat("[value] movavg is null", valuesMovAvg, notNullValue());
assertTrue("[value] movavg does not match expected [" + valuesMovAvg.value() + " vs " + expectedValue + "]",
nearlyEqual(valuesMovAvg.value(), expectedValue, 0.1));
}
}
/**
* Better floating point comparisons courtesy of https://github.com/brazzy/floating-point-gui.de
*
* Snippet adapted to use doubles instead of floats
*/
private static boolean nearlyEqual(double a, double b, double epsilon) {
final double absA = Math.abs(a);
final double absB = Math.abs(b);
final double diff = Math.abs(a - b);
if (a == b) { // shortcut, handles infinities
return true;
} else if (a == 0 || b == 0 || diff < Double.MIN_NORMAL) {
// a or b is zero or both are extremely close to it
// relative error is less meaningful here
return diff < (epsilon * Double.MIN_NORMAL);
} else { // use relative error
return diff / Math.min((absA + absB), Double.MAX_VALUE) < epsilon;
assertEquals("[value] movavg does not match expected [" + valuesMovAvg.value() + " vs " + expectedValue + "]",
valuesMovAvg.value(), expectedValue, 0.1);
}
}
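
Note that the two comparisons are not equivalent: the removed helper used a relative tolerance, |a - b| / min(|a| + |b|, Double.MAX_VALUE) < epsilon, while assertEquals(expected, actual, delta) checks the absolute difference |expected - actual| <= delta. For example, with a = 1000.0 and b = 1090.0, nearlyEqual(a, b, 0.1) passes (90 / 2090 ≈ 0.043) while assertEquals(a, b, 0.1) fails (90 > 0.1), so the replacement tightens these assertions rather than relaxing them.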

View File

@ -39,7 +39,6 @@ import java.util.List;
import java.util.Map;
import static org.elasticsearch.search.profile.query.RandomQueryGenerator.randomQueryBuilder;
import static org.elasticsearch.test.hamcrest.DoubleMatcher.nearlyEqual;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.isEmptyOrNullString;
@ -156,8 +155,8 @@ public class QueryProfilerIT extends ESIntegTestCase {
assertTrue("Vanilla maxScore is NaN but Profile is not [" + profileMaxScore + "]",
Float.isNaN(profileMaxScore));
} else {
assertTrue("Profile maxScore of [" + profileMaxScore + "] is not close to Vanilla maxScore [" + vanillaMaxScore + "]",
nearlyEqual(vanillaMaxScore, profileMaxScore, 0.001));
assertEquals("Profile maxScore of [" + profileMaxScore + "] is not close to Vanilla maxScore [" + vanillaMaxScore + "]",
vanillaMaxScore, profileMaxScore, 0.001);
}
assertThat(

View File

@ -534,7 +534,7 @@ public class SearchScrollIT extends ESIntegTestCase {
client().admin().cluster().prepareUpdateSettings()
.setPersistentSettings(Settings.builder().put("search.max_keep_alive", "1m").put("search.default_keep_alive", "2m")).get
());
assertThat(exc.getMessage(), containsString("was (2 minutes > 1 minute)"));
assertThat(exc.getMessage(), containsString("was (2m > 1m)"));
assertAcked(client().admin().cluster().prepareUpdateSettings()
.setPersistentSettings(Settings.builder().put("search.default_keep_alive", "5m").put("search.max_keep_alive", "5m")).get());
@ -548,14 +548,14 @@ public class SearchScrollIT extends ESIntegTestCase {
exc = expectThrows(IllegalArgumentException.class, () -> client().admin().cluster().prepareUpdateSettings()
.setPersistentSettings(Settings.builder().put("search.default_keep_alive", "3m")).get());
assertThat(exc.getMessage(), containsString("was (3 minutes > 2 minutes)"));
assertThat(exc.getMessage(), containsString("was (3m > 2m)"));
assertAcked(client().admin().cluster().prepareUpdateSettings()
.setPersistentSettings(Settings.builder().put("search.default_keep_alive", "1m")).get());
exc = expectThrows(IllegalArgumentException.class, () -> client().admin().cluster().prepareUpdateSettings()
.setPersistentSettings(Settings.builder().put("search.max_keep_alive", "30s")).get());
assertThat(exc.getMessage(), containsString("was (1 minute > 30 seconds)"));
assertThat(exc.getMessage(), containsString("was (1m > 30s)"));
}
public void testInvalidScrollKeepAlive() throws IOException {
@ -577,7 +577,7 @@ public class SearchScrollIT extends ESIntegTestCase {
IllegalArgumentException illegalArgumentException =
(IllegalArgumentException) ExceptionsHelper.unwrap(exc, IllegalArgumentException.class);
assertNotNull(illegalArgumentException);
assertThat(illegalArgumentException.getMessage(), containsString("Keep alive for scroll (2 hours) is too large"));
assertThat(illegalArgumentException.getMessage(), containsString("Keep alive for scroll (2h) is too large"));
SearchResponse searchResponse = client().prepareSearch()
.setQuery(matchAllQuery())
@ -594,7 +594,7 @@ public class SearchScrollIT extends ESIntegTestCase {
illegalArgumentException =
(IllegalArgumentException) ExceptionsHelper.unwrap(exc, IllegalArgumentException.class);
assertNotNull(illegalArgumentException);
assertThat(illegalArgumentException.getMessage(), containsString("Keep alive for scroll (3 hours) is too large"));
assertThat(illegalArgumentException.getMessage(), containsString("Keep alive for scroll (3h) is too large"));
}
private void assertToXContentResponse(ClearScrollResponse response, boolean succeed, int numFreed) throws IOException {

View File

@ -40,6 +40,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.fielddata.IndexNumericFieldData;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.UidFieldMapper;
import org.elasticsearch.index.query.QueryShardContext;
@ -158,9 +159,9 @@ public class SliceBuilderTests extends ESTestCase {
return null;
}
};
fieldType.setName(UidFieldMapper.NAME);
fieldType.setName(IdFieldMapper.NAME);
fieldType.setHasDocValues(false);
when(context.fieldMapper(UidFieldMapper.NAME)).thenReturn(fieldType);
when(context.fieldMapper(IdFieldMapper.NAME)).thenReturn(fieldType);
when(context.getIndexReader()).thenReturn(reader);
Settings settings = Settings.builder()
.put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)
@ -225,7 +226,7 @@ public class SliceBuilderTests extends ESTestCase {
Map<Integer, AtomicInteger> numSliceMap = new HashMap<>();
for (int i = 0; i < numSlices; i++) {
for (int j = 0; j < numShards; j++) {
SliceBuilder slice = new SliceBuilder("_uid", i, numSlices);
SliceBuilder slice = new SliceBuilder("_id", i, numSlices);
Query q = slice.toFilter(context, j, numShards);
if (q instanceof TermsSliceQuery || q instanceof MatchAllDocsQuery) {
AtomicInteger count = numSliceMap.get(j);
@ -254,7 +255,7 @@ public class SliceBuilderTests extends ESTestCase {
List<Integer> targetShards = new ArrayList<>();
for (int i = 0; i < numSlices; i++) {
for (int j = 0; j < numShards; j++) {
SliceBuilder slice = new SliceBuilder("_uid", i, numSlices);
SliceBuilder slice = new SliceBuilder("_id", i, numSlices);
Query q = slice.toFilter(context, j, numShards);
if (q instanceof MatchNoDocsQuery == false) {
assertThat(q, instanceOf(MatchAllDocsQuery.class));
@ -270,7 +271,7 @@ public class SliceBuilderTests extends ESTestCase {
numSlices = numShards;
for (int i = 0; i < numSlices; i++) {
for (int j = 0; j < numShards; j++) {
SliceBuilder slice = new SliceBuilder("_uid", i, numSlices);
SliceBuilder slice = new SliceBuilder("_id", i, numSlices);
Query q = slice.toFilter(context, j, numShards);
if (i == j) {
assertThat(q, instanceOf(MatchAllDocsQuery.class));
@ -311,4 +312,68 @@ public class SliceBuilderTests extends ESTestCase {
assertThat(exc.getMessage(), containsString("cannot load numeric doc values"));
}
}
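
The replacements above switch the slice field from `_uid` to `_id`. A sliced scroll partitions a result set so each slice can be consumed independently; a sketch in the same transport-client style as these tests (index name illustrative):

[source,java]
--------------------------------------------------
// Slice 0 of 2: together with slice 1 this covers every document exactly once
SearchResponse response = client().prepareSearch("test")
    .setQuery(matchAllQuery())
    .slice(new SliceBuilder("_id", 0, 2))
    .setScroll(TimeValue.timeValueMinutes(1))
    .get();
--------------------------------------------------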
public void testToFilterDeprecationMessage() throws IOException {
Directory dir = new RAMDirectory();
try (IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(new MockAnalyzer(random())))) {
writer.commit();
}
QueryShardContext context = mock(QueryShardContext.class);
try (IndexReader reader = DirectoryReader.open(dir)) {
MappedFieldType fieldType = new MappedFieldType() {
@Override
public MappedFieldType clone() {
return null;
}
@Override
public String typeName() {
return null;
}
@Override
public Query termQuery(Object value, @Nullable QueryShardContext context) {
return null;
}
@Override
public Query existsQuery(QueryShardContext context) {
return null;
}
};
fieldType.setName(UidFieldMapper.NAME);
fieldType.setHasDocValues(false);
when(context.fieldMapper(UidFieldMapper.NAME)).thenReturn(fieldType);
when(context.getIndexReader()).thenReturn(reader);
Settings settings = Settings.builder()
.put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT)
.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 2)
.put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)
.build();
IndexMetaData indexState = IndexMetaData.builder("index").settings(settings).build();
IndexSettings indexSettings = new IndexSettings(indexState, Settings.EMPTY);
when(context.getIndexSettings()).thenReturn(indexSettings);
SliceBuilder builder = new SliceBuilder("_uid", 5, 10);
Query query = builder.toFilter(context, 0, 1);
assertThat(query, instanceOf(TermsSliceQuery.class));
assertThat(builder.toFilter(context, 0, 1), equalTo(query));
assertWarnings("Computing slices on the [_uid] field is deprecated for 6.x indices, use [_id] instead");
}
}
public void testSerializationBackcompat() throws IOException {
SliceBuilder sliceBuilder = new SliceBuilder(1, 5);
assertEquals(IdFieldMapper.NAME, sliceBuilder.getField());
SliceBuilder copy62 = copyWriteable(sliceBuilder,
new NamedWriteableRegistry(Collections.emptyList()),
SliceBuilder::new, Version.V_6_2_0);
assertEquals(sliceBuilder, copy62);
SliceBuilder copy63 = copyWriteable(copy62,
new NamedWriteableRegistry(Collections.emptyList()),
SliceBuilder::new, Version.V_6_3_0);
assertEquals(sliceBuilder, copy63);
}
}
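
`testSerializationBackcompat` round-trips the builder over the 6.2 and 6.3 wire formats. Version-conditional serialization in Elasticsearch generally follows the pattern below; this is a generic illustration with made-up field names, not SliceBuilder's actual `writeTo`:

[source,java]
--------------------------------------------------
import java.io.IOException;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

// Generic sketch of version-conditional wire code
final class VersionedPayload implements Writeable {
    private final String field; // hypothetical member, absent on pre-6.3 wires
    private final int id;
    private final int max;

    VersionedPayload(String field, int id, int max) {
        this.field = field;
        this.id = id;
        this.max = max;
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        if (out.getVersion().onOrAfter(Version.V_6_3_0)) {
            out.writeOptionalString(field); // newer nodes read the field name
        }
        out.writeVInt(id);
        out.writeVInt(max);
    }
}
--------------------------------------------------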
View File

@ -1,45 +0,0 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.test.hamcrest;
public class DoubleMatcher {
/**
* Better floating point comparisons courtesy of https://github.com/brazzy/floating-point-gui.de
*
* Snippet adapted to use doubles instead of floats
*/
public static boolean nearlyEqual(double a, double b, double epsilon) {
final double absA = Math.abs(a);
final double absB = Math.abs(b);
final double diff = Math.abs(a - b);
if (a == b) { // shortcut, handles infinities
return true;
} else if (a == 0 || b == 0 || diff < Double.MIN_NORMAL) {
// a or b is zero or both are extremely close to it
// relative error is less meaningful here
return diff < (epsilon * Double.MIN_NORMAL);
} else { // use relative error
return diff / Math.min((absA + absB), Double.MAX_VALUE) < epsilon;
}
}
}
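
For reference, the deleted helper compares doubles by relative error, falling back to an absolute check near zero where relative error is meaningless. A few hand-picked illustrative calls (values chosen for this note, not taken from the test suite):

[source,java]
--------------------------------------------------
import org.elasticsearch.test.hamcrest.DoubleMatcher;

public class NearlyEqualDemo {
    public static void main(String[] args) {
        System.out.println(DoubleMatcher.nearlyEqual(1.0, 1.0 + 1e-9, 1e-5)); // true: relative error ~5e-10
        System.out.println(DoubleMatcher.nearlyEqual(1.0, 1.1, 1e-5));        // false: relative error ~4.8e-2
        System.out.println(DoubleMatcher.nearlyEqual(0.0, 1e-310, 1e-5));     // false: near-zero branch uses absolute error
    }
}
--------------------------------------------------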
View File

@ -140,6 +140,7 @@ public class BootstrapForTesting {
addClassCodebase(codebases,"plugin-classloader", "org.elasticsearch.plugins.ExtendedPluginsClassLoader");
addClassCodebase(codebases,"elasticsearch-nio", "org.elasticsearch.nio.ChannelFactory");
addClassCodebase(codebases, "elasticsearch-secure-sm", "org.elasticsearch.secure_sm.SecureSM");
addClassCodebase(codebases, "elasticsearch-rest-client", "org.elasticsearch.client.RestClient");
}
final Policy testFramework = Security.readPolicy(Bootstrap.class.getResource("test-framework.policy"), codebases);
final Policy esPolicy = new ESPolicy(codebases, perms, getPluginPermissions(), true);
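
Each registered codebase resolves a representative class to the jar it was loaded from, so the test policy can grant that jar permissions by symbolic name (for example `${codebase.elasticsearch-rest-client}`). A rough sketch of the resolution step; the class and method names here are illustrative, not the actual helper:

[source,java]
--------------------------------------------------
import java.net.URL;
import java.util.Map;

final class CodebaseSketch {
    // Locate the jar a representative class came from and register it
    // under a symbolic name that policy files can reference.
    static void addCodebase(Map<String, URL> codebases, String name, String className) throws ClassNotFoundException {
        Class<?> clazz = Class.forName(className);
        URL location = clazz.getProtectionDomain().getCodeSource().getLocation();
        codebases.put(name, location);
    }
}
--------------------------------------------------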