Merge branch 'master' into rankeval

Christoph Büscher 2017-11-21 14:11:02 +01:00
commit d979ccace9
1436 changed files with 39530 additions and 15700 deletions

View File

@ -38,6 +38,11 @@ If you have a bugfix or new feature that you would like to contribute to Elastic
We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code.
Note that it is unlikely the project will merge refactors for the sake of refactoring. These
types of pull requests have a high cost to maintainers in reviewing and testing with little
to no tangible benefit. This especially includes changes generated by tools. For example,
converting all generic interface instances to use the diamond operator.
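
For anyone unfamiliar with the term, here is a minimal illustration of that kind of purely mechanical, tool-generated change (the class and field names are invented for the example):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DiamondExample {
    // Before: the generic type arguments are repeated on the right-hand side.
    Map<String, List<Integer>> verbose = new HashMap<String, List<Integer>>();

    // After: the diamond operator lets the compiler infer them.
    Map<String, List<Integer>> concise = new HashMap<>();
}
```
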
The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below.
### Fork and clone the repository
@ -106,9 +111,16 @@ then `File->New Project From Existing Sources`. Point to the root of
the source directory, select
`Import project from external model->Gradle`, enable
`Use auto-import`. Additionally, in order to run tests directly from
IDEA 2017.2 and above it is required to disable IDEA run launcher,
which can be achieved by adding `-Didea.no.launcher=true`
[JVM option](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties)
IDEA 2017.2 and above, you need to disable the IDEA run launcher to avoid
finding yourself in "jar hell". This can be achieved by adding the
`-Didea.no.launcher=true`
[JVM option](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties)
or by adding `idea.no.launcher=true` to the
[`idea.properties`](https://www.jetbrains.com/help/idea/file-idea-properties.html)
file, which can be accessed under Help > Edit Custom Properties within IDEA. You
may also need to [remove `ant-javafx.jar` from your
classpath](https://github.com/elastic/elasticsearch/issues/14348) if it is
reported as a source of jar hell.
The Elasticsearch codebase makes heavy use of Java `assert`s and the
test runner requires that assertions be enabled within the JVM. This

View File

@ -208,8 +208,7 @@ In order to create a distribution, simply run the @gradle assemble@ command in t
The distribution for each project will be created under the @build/distributions@ directory in that project.
See the "TESTING":TESTING.asciidoc file for more information about
running the Elasticsearch test suite.
See the "TESTING":TESTING.asciidoc file for more information about running the Elasticsearch test suite.
h3. Upgrading from Elasticsearch 1.x?

View File

@ -351,7 +351,8 @@ These are the linux flavors the Vagrantfile currently supports:
* debian-9 aka stretch, the current debian stable distribution
* centos-6
* centos-7
* fedora-25
* fedora-26
* fedora-27
* oel-6 aka Oracle Enterprise Linux 6
* oel-7 aka Oracle Enterprise Linux 7
* sles-12
@ -427,23 +428,23 @@ sudo -E bats $BATS_TESTS/*.bats
You can also use Gradle to prepare the test environment and then start a single VM:
-------------------------------------------------
gradle vagrantFedora25#up
gradle vagrantFedora27#up
-------------------------------------------------
Or any of vagrantCentos6#up, vagrantCentos7#up, vagrantDebian8#up,
vagrantFedora25#up, vagrantOel6#up, vagrantOel7#up, vagrantOpensuse13#up,
vagrantSles12#up, vagrantUbuntu1404#up, vagrantUbuntu1604#up.
vagrantDebian9#up, vagrantFedora26#up, vagrantFedora27#up, vagrantOel6#up, vagrantOel7#up,
vagrantOpensuse42#up, vagrantSles12#up, vagrantUbuntu1404#up, vagrantUbuntu1604#up.
Once up, you can then connect to the VM using SSH from the elasticsearch directory:
-------------------------------------------------
vagrant ssh fedora-25
vagrant ssh fedora-27
-------------------------------------------------
Or from another directory:
-------------------------------------------------
VAGRANT_CWD=/path/to/elasticsearch vagrant ssh fedora-25
VAGRANT_CWD=/path/to/elasticsearch vagrant ssh fedora-27
-------------------------------------------------
Note: Starting a vagrant VM outside of the elasticsearch folder requires you to
@ -471,28 +472,30 @@ is tested depends on the branch. On master, this will test against the current
stable branch. On the stable branch, it will test against the latest release
branch. Finally, on a release branch, it will test against the most recent release.
=== BWC Testing against a specific branch
=== BWC Testing against a specific remote/branch
Sometimes a backward compatibility change spans two versions. A common case is new functionality
that needs a BWC bridge in an unreleased version of a release branch (for example, 5.x).
To test the changes, you can instruct gradle to build the BWC version from a local branch instead of
pulling the release branch from GitHub. You do so using the `tests.bwc.refspec` system property:
To test the changes, you can instruct gradle to build the BWC version from another remote/branch combination instead of
pulling the release branch from GitHub. You do so using the `tests.bwc.remote` and `tests.bwc.refspec` system properties:
-------------------------------------------------
gradle check -Dtests.bwc.refspec=origin/index_req_bwc_5.x
gradle check -Dtests.bwc.remote=${remote} -Dtests.bwc.refspec=index_req_bwc_5.x
-------------------------------------------------
The branch needs to be available on the local clone that the BWC makes of the repository you run the
tests from. Using the `origin` remote is a handy trick to make sure that a branch is available
and is up to date in the case of multiple runs.
The branch needs to be available on the remote that the BWC build uses when it
clones the repository you run the tests from. Using a remote is a handy trick to
make sure that a branch is available and is up to date in the case of multiple runs.
Example:
Say you need to make a change to `master` and have a BWC layer in `5.x`. You will need to:
. Create a branch called `index_req_change` off `master`. This will contain your change.
Say you need to make a change to `master` and have a BWC layer in `5.x`. You
will need to:
. Create a branch called `index_req_change` off your remote `${remote}`. This
will contain your change.
. Create a branch called `index_req_bwc_5.x` off `5.x`. This will contain your bwc layer.
. If not running the tests locally, push both branches to your remote repository.
. Run the tests with `gradle check -Dtests.bwc.refspec=origin/index_req_bwc_5.x`
. Push both branches to your remote repository.
. Run the tests with `gradle check -Dtests.bwc.remote=${remote} -Dtests.bwc.refspec=index_req_bwc_5.x`.
== Coverage analysis

Vagrantfile
View File

@ -60,8 +60,12 @@ Vagrant.configure(2) do |config|
config.vm.box = "elastic/oraclelinux-7-x86_64"
rpm_common config
end
config.vm.define "fedora-25" do |config|
config.vm.box = "elastic/fedora-25-x86_64"
config.vm.define "fedora-26" do |config|
config.vm.box = "elastic/fedora-26-x86_64"
dnf_common config
end
config.vm.define "fedora-27" do |config|
config.vm.box = "elastic/fedora-27-x86_64"
dnf_common config
end
config.vm.define "opensuse-42" do |config|

View File

@ -81,6 +81,7 @@ List<Version> versions = []
// keep track of the previous major version's last minor, so we know where wire compat begins
int prevMinorIndex = -1 // index in the versions list of the last minor from the prev major
int lastPrevMinor = -1 // the minor version number from the prev major we have most recently seen
int prevBugfixIndex = -1 // index in the versions list of the last bugfix release from the prev major
for (String line : versionLines) {
/* Note that this skips alphas and betas which is fine because they aren't
* compatible with anything. */
@ -108,12 +109,19 @@ for (String line : versionLines) {
lastPrevMinor = minor
}
}
if (major == prevMajor) {
prevBugfixIndex = versions.size() - 1
}
}
}
if (versions.toSorted { it.id } != versions) {
println "Versions: ${versions}"
throw new GradleException("Versions.java contains out of order version constants")
}
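// Re-create the previous major's last bugfix entry with the snapshot flag set.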
if (prevBugfixIndex != -1) {
versions[prevBugfixIndex] = new Version(versions[prevBugfixIndex].major, versions[prevBugfixIndex].minor,
versions[prevBugfixIndex].bugfix, versions[prevBugfixIndex].suffix, true)
}
if (currentVersion.bugfix == 0) {
// If on a release branch, after the initial release of that branch, the bugfix version will
// be bumped, and will be != 0. On master and N.x branches, we want to test against the
@ -223,6 +231,7 @@ subprojects {
"org.elasticsearch.gradle:build-tools:${version}": ':build-tools',
"org.elasticsearch:rest-api-spec:${version}": ':rest-api-spec',
"org.elasticsearch:elasticsearch:${version}": ':core',
"org.elasticsearch:elasticsearch-cli:${version}": ':core:cli',
"org.elasticsearch.client:elasticsearch-rest-client:${version}": ':client:rest',
"org.elasticsearch.client:elasticsearch-rest-client-sniffer:${version}": ':client:sniffer',
"org.elasticsearch.client:elasticsearch-rest-high-level-client:${version}": ':client:rest-high-level',
@ -261,6 +270,11 @@ subprojects {
ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-release-snapshot'
ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-release-snapshot'
}
} else if (indexCompatVersions[-2].snapshot) {
/* This is a terrible hack for the bump to 6.0.1 which will be fixed by #27397 */
ext.projectSubstitutions["org.elasticsearch.distribution.deb:elasticsearch:${indexCompatVersions[-2]}"] = ':distribution:bwc-release-snapshot'
ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-2]}"] = ':distribution:bwc-release-snapshot'
ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-2]}"] = ':distribution:bwc-release-snapshot'
}
project.afterEvaluate {
configurations.all {

View File

@ -19,6 +19,8 @@
import java.nio.file.Files
import org.gradle.util.GradleVersion
apply plugin: 'groovy'
group = 'org.elasticsearch.gradle'
@ -92,16 +94,18 @@ dependencies {
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.3'
compile 'de.thetaphi:forbiddenapis:2.4.1'
compile 'org.apache.rat:apache-rat:0.11'
compile "org.elasticsearch:jna:4.4.0-1"
}
// Gradle 2.14+ removed ProgressLogger(-Factory) classes from the public APIs
// Use logging dependency instead
// Gradle 4.3.1 stopped releasing the logging jars to jcenter, just use the last available one
GradleVersion logVersion = GradleVersion.current() > GradleVersion.version('4.3') ? GradleVersion.version('4.3') : GradleVersion.current()
dependencies {
compileOnly "org.gradle:gradle-logging:${GradleVersion.current().getVersion()}"
compileOnly "org.gradle:gradle-logging:${logVersion.getVersion()}"
compile 'ru.vyarus:gradle-animalsniffer-plugin:1.2.0' // Gradle 2.14 requires a version > 1.0.1
}

View File

@ -123,12 +123,20 @@ class BuildPlugin implements Plugin<Project> {
}
println " Random Testing Seed : ${project.testSeed}"
// enforce gradle version
GradleVersion minGradle = GradleVersion.version('3.3')
if (GradleVersion.current() < minGradle) {
// enforce Gradle version
final GradleVersion currentGradleVersion = GradleVersion.current();
final GradleVersion minGradle = GradleVersion.version('3.3')
if (currentGradleVersion < minGradle) {
throw new GradleException("${minGradle} or above is required to build elasticsearch")
}
final GradleVersion gradle42 = GradleVersion.version('4.2')
final GradleVersion gradle43 = GradleVersion.version('4.3')
if (currentGradleVersion >= gradle42 && currentGradleVersion < gradle43) {
throw new GradleException("${currentGradleVersion} is not compatible with the elasticsearch build")
}
// enforce Java version
if (javaVersionEnum < minimumJava) {
throw new GradleException("Java ${minimumJava} or above is required to build Elasticsearch")
@ -231,7 +239,7 @@ class BuildPlugin implements Plugin<Project> {
/** Return the configuration name used for finding transitive deps of the given dependency. */
private static String transitiveDepConfigName(String groupId, String artifactId, String version) {
return "_transitive_${groupId}:${artifactId}:${version}"
return "_transitive_${groupId}_${artifactId}_${version}"
}
/**

View File

@ -63,13 +63,11 @@ class ClusterConfiguration {
boolean debug = false
/**
* if <code>true</code> each node will be configured with <tt>discovery.zen.minimum_master_nodes</tt> set
* to the total number of nodes in the cluster. This will also cause that each node has `0s` state recovery
* timeout which can lead to issues if for instance an existing clusterstate is expected to be recovered
* before any tests start
* Configuration of the setting <tt>discovery.zen.minimum_master_nodes</tt> on the nodes.
* In case of more than one node, this defaults to the number of nodes
*/
@Input
boolean useMinimumMasterNodes = true
Closure<Integer> minimumMasterNodes = { getNumNodes() > 1 ? getNumNodes() : -1 }
@Input
String jvmArgs = "-Xms" + System.getProperty('tests.heap.size', '512m') +

View File

@ -311,13 +311,14 @@ class ClusterFormationTasks {
// Define a node attribute so we can test that it exists
'node.attr.testattr' : 'test'
]
// we set min master nodes to the total number of nodes in the cluster and
// basically skip initial state recovery to allow the cluster to form using a realistic master election
// this means all nodes must be up, join the seed node and do a master election. This will also allow new and
// old nodes in the BWC case to become the master
if (node.config.useMinimumMasterNodes && node.config.numNodes > 1) {
esConfig['discovery.zen.minimum_master_nodes'] = node.config.numNodes
esConfig['discovery.initial_state_timeout'] = '0s' // don't wait for state.. just start up quickly
int minimumMasterNodes = node.config.minimumMasterNodes.call()
if (minimumMasterNodes > 0) {
esConfig['discovery.zen.minimum_master_nodes'] = minimumMasterNodes
}
if (node.config.numNodes > 1) {
// don't wait for state.. just start up quickly
// this will also allow new and old nodes in the BWC case to become the master
esConfig['discovery.initial_state_timeout'] = '0s'
}
esConfig['node.max_local_storage_nodes'] = node.config.numNodes
esConfig['http.port'] = node.config.httpPort
@ -424,7 +425,7 @@ class ClusterFormationTasks {
Project pluginProject = plugin.getValue()
verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject)
String configurationName = "_plugin_${prefix}_${pluginProject.path}"
String configurationName = pluginConfigurationName(prefix, pluginProject)
Configuration configuration = project.configurations.findByName(configurationName)
if (configuration == null) {
configuration = project.configurations.create(configurationName)
@ -453,13 +454,21 @@ class ClusterFormationTasks {
return copyPlugins
}
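// Derive the plugin's configuration name from the task prefix and the plugin's project path,
// replacing ':' with '_' so the path collapses into a single flat name.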
private static String pluginConfigurationName(final String prefix, final Project project) {
return "_plugin_${prefix}_${project.path}".replace(':', '_')
}
private static String pluginBwcConfigurationName(final String prefix, final Project project) {
return "_plugin_bwc_${prefix}_${project.path}".replace(':', '_')
}
/** Configures task to copy a plugin based on a zip file resolved using dependencies for an older version */
static Task configureCopyBwcPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) {
Configuration bwcPlugins = project.configurations.getByName("${prefix}_elasticsearchBwcPlugins")
for (Map.Entry<String, Project> plugin : node.config.plugins.entrySet()) {
Project pluginProject = plugin.getValue()
verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject)
String configurationName = "_plugin_bwc_${prefix}_${pluginProject.path}"
String configurationName = pluginBwcConfigurationName(prefix, pluginProject)
Configuration configuration = project.configurations.findByName(configurationName)
if (configuration == null) {
configuration = project.configurations.create(configurationName)
@ -498,9 +507,9 @@ class ClusterFormationTasks {
static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin, String prefix) {
final FileCollection pluginZip;
if (node.nodeVersion != VersionProperties.elasticsearch) {
pluginZip = project.configurations.getByName("_plugin_bwc_${prefix}_${plugin.path}")
pluginZip = project.configurations.getByName(pluginBwcConfigurationName(prefix, plugin))
} else {
pluginZip = project.configurations.getByName("_plugin_${prefix}_${plugin.path}")
pluginZip = project.configurations.getByName(pluginConfigurationName(prefix, plugin))
}
// delay reading the file location until execution time by wrapping in a closure within a GString
final Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}"

View File

@ -19,7 +19,8 @@ class VagrantTestPlugin implements Plugin<Project> {
'centos-7',
'debian-8',
'debian-9',
'fedora-25',
'fedora-26',
'fedora-27',
'oel-6',
'oel-7',
'opensuse-42',

View File

@ -1,16 +1,14 @@
# When updating elasticsearch, please update 'rest' version in core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy
elasticsearch = 7.0.0-alpha1
lucene = 7.0.0-snapshot-d94a5f0
lucene = 7.1.0
# optional dependencies
spatial4j = 0.6
jts = 1.13
jackson = 2.8.6
snakeyaml = 1.15
jackson = 2.8.10
snakeyaml = 1.17
# when updating log4j, please update also docs/java-api/index.asciidoc
# when updating this version, please check if https://github.com/apache/logging-log4j2/pull/109 is released into the version that you are
# bumping to; if it is, remove the assumeTrues in EvilLoggerTests
log4j = 2.9.0
log4j = 2.9.1
slf4j = 1.6.2
# when updating the JNA version, also update the version in buildSrc/build.gradle

View File

@ -142,8 +142,8 @@ public class NoopSearchRequestBuilder extends ActionRequestBuilder<SearchRequest
/**
* Sets the preference to execute the search. Defaults to randomize across shards. Can be set to
* <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or
* a custom value, which guarantees that the same order will be used across different requests.
* <tt>_local</tt> to prefer local shards or a custom value, which guarantees that the same order
* will be used across different requests.
*/
public NoopSearchRequestBuilder setPreference(String preference) {
request.preference(preference);

View File

@ -53,6 +53,7 @@ public class TransportNoopSearchAction extends HandledTransportAction<SearchRequ
new SearchHit[0], 0L, 0.0f),
new InternalAggregations(Collections.emptyList()),
new Suggest(Collections.emptyList()),
new SearchProfileShardResults(Collections.emptyMap()), false, false, 1), "", 1, 1, 0, 0, new ShardSearchFailure[0]));
new SearchProfileShardResults(Collections.emptyMap()), false, false, 1), "", 1, 1, 0, 0, ShardSearchFailure.EMPTY_ARRAY,
SearchResponse.Clusters.EMPTY));
}
}

View File

@ -0,0 +1,63 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.http.Header;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import java.io.IOException;
import java.util.Collections;
/**
* A wrapper for the {@link RestHighLevelClient} that provides methods for accessing the Indices API.
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html">Indices API on elastic.co</a>
*/
public final class IndicesClient {
private final RestHighLevelClient restHighLevelClient;
public IndicesClient(RestHighLevelClient restHighLevelClient) {
this.restHighLevelClient = restHighLevelClient;
}
/**
* Deletes an index using the Delete Index API
* <p>
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html">
* Delete Index API on elastic.co</a>
*/
public DeleteIndexResponse deleteIndex(DeleteIndexRequest deleteIndexRequest, Header... headers) throws IOException {
return restHighLevelClient.performRequestAndParseEntity(deleteIndexRequest, Request::deleteIndex, DeleteIndexResponse::fromXContent,
Collections.emptySet(), headers);
}
/**
* Asynchronously deletes an index using the Delete Index API
* <p>
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html">
* Delete Index API on elastic.co</a>
*/
public void deleteIndexAsync(DeleteIndexRequest deleteIndexRequest, ActionListener<DeleteIndexResponse> listener, Header... headers) {
restHighLevelClient.performRequestAsyncAndParseEntity(deleteIndexRequest, Request::deleteIndex, DeleteIndexResponse::fromXContent,
listener, Collections.emptySet(), headers);
}
}
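
A minimal usage sketch for the new Indices API exposed through the high level client. The host, port, and index name below are placeholders, and a running cluster is assumed:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class DeleteIndexExample {
    public static void main(String[] args) throws Exception {
        // Build the high level client on top of a low level RestClientBuilder.
        try (RestHighLevelClient client =
                 new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            // Synchronously delete a single index; a missing index surfaces as an
            // ElasticsearchException with a 404 status (see IndicesClientIT below).
            DeleteIndexResponse response =
                client.indices().deleteIndex(new DeleteIndexRequest("my_index"));
            System.out.println("acknowledged: " + response.isAcknowledged());
        }
    }
}
```

The asynchronous variant, `deleteIndexAsync`, takes an `ActionListener<DeleteIndexResponse>` and returns immediately instead of blocking for the response.
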

View File

@ -29,6 +29,7 @@ import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.entity.ContentType;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.get.GetRequest;
@ -63,30 +64,47 @@ import java.util.Collections;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.StringJoiner;
final class Request {
public final class Request {
static final XContentType REQUEST_BODY_CONTENT_TYPE = XContentType.JSON;
final String method;
final String endpoint;
final Map<String, String> params;
final HttpEntity entity;
private final String method;
private final String endpoint;
private final Map<String, String> parameters;
private final HttpEntity entity;
Request(String method, String endpoint, Map<String, String> params, HttpEntity entity) {
this.method = method;
this.endpoint = endpoint;
this.params = params;
public Request(String method, String endpoint, Map<String, String> parameters, HttpEntity entity) {
this.method = Objects.requireNonNull(method, "method cannot be null");
this.endpoint = Objects.requireNonNull(endpoint, "endpoint cannot be null");
this.parameters = Objects.requireNonNull(parameters, "parameters cannot be null");
this.entity = entity;
}
public String getMethod() {
return method;
}
public String getEndpoint() {
return endpoint;
}
public Map<String, String> getParameters() {
return parameters;
}
public HttpEntity getEntity() {
return entity;
}
@Override
public String toString() {
return "Request{" +
"method='" + method + '\'' +
", endpoint='" + endpoint + '\'' +
", params=" + params +
", params=" + parameters +
", hasBody=" + (entity != null) +
'}';
}
@ -106,6 +124,17 @@ final class Request {
return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null);
}
static Request deleteIndex(DeleteIndexRequest deleteIndexRequest) {
String endpoint = endpoint(deleteIndexRequest.indices(), Strings.EMPTY_ARRAY, "");
Params parameters = Params.builder();
parameters.withTimeout(deleteIndexRequest.timeout());
parameters.withMasterTimeout(deleteIndexRequest.masterNodeTimeout());
parameters.withIndicesOptions(deleteIndexRequest.indicesOptions());
return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null);
}
static Request info() {
return new Request(HttpGet.METHOD_NAME, "/", Collections.emptyMap(), null);
}
@ -162,23 +191,23 @@ final class Request {
metadata.field("_id", request.id());
}
if (Strings.hasLength(request.routing())) {
metadata.field("_routing", request.routing());
metadata.field("routing", request.routing());
}
if (Strings.hasLength(request.parent())) {
metadata.field("_parent", request.parent());
metadata.field("parent", request.parent());
}
if (request.version() != Versions.MATCH_ANY) {
metadata.field("_version", request.version());
metadata.field("version", request.version());
}
VersionType versionType = request.versionType();
if (versionType != VersionType.INTERNAL) {
if (versionType == VersionType.EXTERNAL) {
metadata.field("_version_type", "external");
metadata.field("version_type", "external");
} else if (versionType == VersionType.EXTERNAL_GTE) {
metadata.field("_version_type", "external_gte");
metadata.field("version_type", "external_gte");
} else if (versionType == VersionType.FORCE) {
metadata.field("_version_type", "force");
metadata.field("version_type", "force");
}
}
@ -190,7 +219,7 @@ final class Request {
} else if (opType == DocWriteRequest.OpType.UPDATE) {
UpdateRequest updateRequest = (UpdateRequest) request;
if (updateRequest.retryOnConflict() > 0) {
metadata.field("_retry_on_conflict", updateRequest.retryOnConflict());
metadata.field("retry_on_conflict", updateRequest.retryOnConflict());
}
if (updateRequest.fetchSource() != null) {
metadata.field("_source", updateRequest.fetchSource());
@ -233,7 +262,7 @@ final class Request {
static Request exists(GetRequest getRequest) {
Request request = get(getRequest);
return new Request(HttpHead.METHOD_NAME, request.endpoint, request.params, null);
return new Request(HttpHead.METHOD_NAME, request.endpoint, request.parameters, null);
}
static Request get(GetRequest getRequest) {
@ -381,7 +410,7 @@ final class Request {
* @return the {@link ContentType}
*/
@SuppressForbidden(reason = "Only allowed place to convert a XContentType to a ContentType")
static ContentType createContentType(final XContentType xContentType) {
public static ContentType createContentType(final XContentType xContentType) {
return ContentType.create(xContentType.mediaTypeWithoutParameters(), (Charset) null);
}
@ -432,6 +461,10 @@ final class Request {
return this;
}
Params withMasterTimeout(TimeValue masterTimeout) {
return putParam("master_timeout", masterTimeout);
}
Params withParent(String parent) {
return putParam("parent", parent);
}

View File

@ -56,6 +56,8 @@ import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.adjacency.ParsedAdjacencyMatrix;
import org.elasticsearch.search.aggregations.bucket.composite.CompositeAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.composite.ParsedComposite;
import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.filter.FiltersAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.filter.ParsedFilter;
@ -176,6 +178,8 @@ public class RestHighLevelClient implements Closeable {
private final NamedXContentRegistry registry;
private final CheckedConsumer<RestClient, IOException> doClose;
private final IndicesClient indicesClient = new IndicesClient(this);
/**
* Creates a {@link RestHighLevelClient} given the low level {@link RestClientBuilder} that allows to build the
* {@link RestClient} to be used to perform requests.
@ -220,12 +224,21 @@ public class RestHighLevelClient implements Closeable {
doClose.accept(client);
}
/**
* Provides an {@link IndicesClient} which can be used to access the Indices API.
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html">Indices API on elastic.co</a>
*/
public final IndicesClient indices() {
return indicesClient;
}
/**
* Executes a bulk request using the Bulk API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html">Bulk API on elastic.co</a>
*/
public BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException {
public final BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, emptySet(), headers);
}
@ -234,14 +247,14 @@ public class RestHighLevelClient implements Closeable {
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html">Bulk API on elastic.co</a>
*/
public void bulkAsync(BulkRequest bulkRequest, ActionListener<BulkResponse> listener, Header... headers) {
public final void bulkAsync(BulkRequest bulkRequest, ActionListener<BulkResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, listener, emptySet(), headers);
}
/**
* Pings the remote Elasticsearch cluster and returns true if the ping succeeded, false otherwise
*/
public boolean ping(Header... headers) throws IOException {
public final boolean ping(Header... headers) throws IOException {
return performRequest(new MainRequest(), (request) -> Request.ping(), RestHighLevelClient::convertExistsResponse,
emptySet(), headers);
}
@ -249,7 +262,7 @@ public class RestHighLevelClient implements Closeable {
/**
* Get the cluster info otherwise provided when sending an HTTP request to port 9200
*/
public MainResponse info(Header... headers) throws IOException {
public final MainResponse info(Header... headers) throws IOException {
return performRequestAndParseEntity(new MainRequest(), (request) -> Request.info(), MainResponse::fromXContent, emptySet(),
headers);
}
@ -259,7 +272,7 @@ public class RestHighLevelClient implements Closeable {
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html">Get API on elastic.co</a>
*/
public GetResponse get(GetRequest getRequest, Header... headers) throws IOException {
public final GetResponse get(GetRequest getRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, singleton(404), headers);
}
@ -277,7 +290,7 @@ public class RestHighLevelClient implements Closeable {
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html">Get API on elastic.co</a>
*/
public boolean exists(GetRequest getRequest, Header... headers) throws IOException {
public final boolean exists(GetRequest getRequest, Header... headers) throws IOException {
return performRequest(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, emptySet(), headers);
}
@ -286,7 +299,7 @@ public class RestHighLevelClient implements Closeable {
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html">Get API on elastic.co</a>
*/
public void existsAsync(GetRequest getRequest, ActionListener<Boolean> listener, Header... headers) {
public final void existsAsync(GetRequest getRequest, ActionListener<Boolean> listener, Header... headers) {
performRequestAsync(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, listener, emptySet(), headers);
}
@ -295,7 +308,7 @@ public class RestHighLevelClient implements Closeable {
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html">Index API on elastic.co</a>
*/
public IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException {
public final IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, emptySet(), headers);
}
@ -304,7 +317,7 @@ public class RestHighLevelClient implements Closeable {
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html">Index API on elastic.co</a>
*/
public void indexAsync(IndexRequest indexRequest, ActionListener<IndexResponse> listener, Header... headers) {
public final void indexAsync(IndexRequest indexRequest, ActionListener<IndexResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, listener, emptySet(), headers);
}
@ -313,7 +326,7 @@ public class RestHighLevelClient implements Closeable {
* <p>
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update.html">Update API on elastic.co</a>
*/
public UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException {
public final UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, emptySet(), headers);
}
@ -322,99 +335,101 @@ public class RestHighLevelClient implements Closeable {
* <p>
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update.html">Update API on elastic.co</a>
*/
public void updateAsync(UpdateRequest updateRequest, ActionListener<UpdateResponse> listener, Header... headers) {
public final void updateAsync(UpdateRequest updateRequest, ActionListener<UpdateResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, listener, emptySet(), headers);
}
/**
* Deletes a document by id using the Delete api
* Deletes a document by id using the Delete API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete.html">Delete API on elastic.co</a>
*/
public DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException {
public final DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, Collections.singleton(404),
headers);
}
/**
* Asynchronously deletes a document by id using the Delete api
* Asynchronously deletes a document by id using the Delete API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete.html">Delete API on elastic.co</a>
*/
public void deleteAsync(DeleteRequest deleteRequest, ActionListener<DeleteResponse> listener, Header... headers) {
public final void deleteAsync(DeleteRequest deleteRequest, ActionListener<DeleteResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, listener,
Collections.singleton(404), headers);
}
/**
* Executes a search using the Search api
* Executes a search using the Search API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html">Search API on elastic.co</a>
*/
public SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException {
public final SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, emptySet(), headers);
}
/**
* Asynchronously executes a search using the Search api
* Asynchronously executes a search using the Search API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html">Search API on elastic.co</a>
*/
public void searchAsync(SearchRequest searchRequest, ActionListener<SearchResponse> listener, Header... headers) {
public final void searchAsync(SearchRequest searchRequest, ActionListener<SearchResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, listener, emptySet(), headers);
}
/**
* Executes a search using the Search Scroll api
* Executes a search using the Search Scroll API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html">Search Scroll
* API on elastic.co</a>
*/
public SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException {
public final SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, emptySet(), headers);
}
/**
* Asynchronously executes a search using the Search Scroll api
* Asynchronously executes a search using the Search Scroll API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html">Search Scroll
* API on elastic.co</a>
*/
public void searchScrollAsync(SearchScrollRequest searchScrollRequest, ActionListener<SearchResponse> listener, Header... headers) {
public final void searchScrollAsync(SearchScrollRequest searchScrollRequest,
ActionListener<SearchResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent,
listener, emptySet(), headers);
}
/**
* Clears one or more scroll ids using the Clear Scroll api
* Clears one or more scroll ids using the Clear Scroll API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api">
* Clear Scroll API on elastic.co</a>
*/
public ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException {
public final ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException {
return performRequestAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent,
emptySet(), headers);
}
/**
* Asynchronously clears one or more scroll ids using the Clear Scroll api
* Asynchronously clears one or more scroll ids using the Clear Scroll API
*
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api">
* Clear Scroll API on elastic.co</a>
*/
public void clearScrollAsync(ClearScrollRequest clearScrollRequest, ActionListener<ClearScrollResponse> listener, Header... headers) {
public final void clearScrollAsync(ClearScrollRequest clearScrollRequest,
ActionListener<ClearScrollResponse> listener, Header... headers) {
performRequestAsyncAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent,
listener, emptySet(), headers);
}
protected <Req extends ActionRequest, Resp> Resp performRequestAndParseEntity(Req request,
protected final <Req extends ActionRequest, Resp> Resp performRequestAndParseEntity(Req request,
CheckedFunction<Req, Request, IOException> requestConverter,
CheckedFunction<XContentParser, Resp, IOException> entityParser,
Set<Integer> ignores, Header... headers) throws IOException {
return performRequest(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), ignores, headers);
}
protected <Req extends ActionRequest, Resp> Resp performRequest(Req request,
protected final <Req extends ActionRequest, Resp> Resp performRequest(Req request,
CheckedFunction<Req, Request, IOException> requestConverter,
CheckedFunction<Response, Resp, IOException> responseConverter,
Set<Integer> ignores, Header... headers) throws IOException {
@ -425,7 +440,7 @@ public class RestHighLevelClient implements Closeable {
Request req = requestConverter.apply(request);
Response response;
try {
response = client.performRequest(req.method, req.endpoint, req.params, req.entity, headers);
response = client.performRequest(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), headers);
} catch (ResponseException e) {
if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) {
try {
@ -448,7 +463,7 @@ public class RestHighLevelClient implements Closeable {
}
}
protected <Req extends ActionRequest, Resp> void performRequestAsyncAndParseEntity(Req request,
protected final <Req extends ActionRequest, Resp> void performRequestAsyncAndParseEntity(Req request,
CheckedFunction<Req, Request, IOException> requestConverter,
CheckedFunction<XContentParser, Resp, IOException> entityParser,
ActionListener<Resp> listener, Set<Integer> ignores, Header... headers) {
@ -456,7 +471,7 @@ public class RestHighLevelClient implements Closeable {
listener, ignores, headers);
}
protected <Req extends ActionRequest, Resp> void performRequestAsync(Req request,
protected final <Req extends ActionRequest, Resp> void performRequestAsync(Req request,
CheckedFunction<Req, Request, IOException> requestConverter,
CheckedFunction<Response, Resp, IOException> responseConverter,
ActionListener<Resp> listener, Set<Integer> ignores, Header... headers) {
@ -474,10 +489,10 @@ public class RestHighLevelClient implements Closeable {
}
ResponseListener responseListener = wrapResponseListener(responseConverter, listener, ignores);
client.performRequestAsync(req.method, req.endpoint, req.params, req.entity, responseListener, headers);
client.performRequestAsync(req.getMethod(), req.getEndpoint(), req.getParameters(), req.getEntity(), responseListener, headers);
}
<Resp> ResponseListener wrapResponseListener(CheckedFunction<Response, Resp, IOException> responseConverter,
final <Resp> ResponseListener wrapResponseListener(CheckedFunction<Response, Resp, IOException> responseConverter,
ActionListener<Resp> actionListener, Set<Integer> ignores) {
return new ResponseListener() {
@Override
@ -522,7 +537,7 @@ public class RestHighLevelClient implements Closeable {
* that wraps the original {@link ResponseException}. The potential exception obtained while parsing is added to the returned
* exception as a suppressed exception. This method is guaranteed to not throw any exception eventually thrown while parsing.
*/
ElasticsearchStatusException parseResponseException(ResponseException responseException) {
protected final ElasticsearchStatusException parseResponseException(ResponseException responseException) {
Response response = responseException.getResponse();
HttpEntity entity = response.getEntity();
ElasticsearchStatusException elasticsearchException;
@ -542,8 +557,8 @@ public class RestHighLevelClient implements Closeable {
return elasticsearchException;
}
<Resp> Resp parseEntity(
HttpEntity entity, CheckedFunction<XContentParser, Resp, IOException> entityParser) throws IOException {
protected final <Resp> Resp parseEntity(final HttpEntity entity,
final CheckedFunction<XContentParser, Resp, IOException> entityParser) throws IOException {
if (entity == null) {
throw new IllegalStateException("Response body expected but not returned");
}
@ -608,6 +623,7 @@ public class RestHighLevelClient implements Closeable {
map.put(ScriptedMetricAggregationBuilder.NAME, (p, c) -> ParsedScriptedMetric.fromXContent(p, (String) c));
map.put(IpRangeAggregationBuilder.NAME, (p, c) -> ParsedBinaryRange.fromXContent(p, (String) c));
map.put(TopHitsAggregationBuilder.NAME, (p, c) -> ParsedTopHits.fromXContent(p, (String) c));
map.put(CompositeAggregationBuilder.NAME, (p, c) -> ParsedComposite.fromXContent(p, (String) c));
List<NamedXContentRegistry.Entry> entries = map.entrySet().stream()
.map(entry -> new NamedXContentRegistry.Entry(Aggregation.class, new ParseField(entry.getKey()), entry.getValue()))
.collect(Collectors.toList());

View File

@ -39,7 +39,6 @@ import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -50,7 +49,6 @@ import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
import org.elasticsearch.threadpool.ThreadPool;
import java.io.IOException;
import java.util.Collections;
@ -614,14 +612,14 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
}
};
ThreadPool threadPool = new ThreadPool(Settings.builder().put("node.name", getClass().getName()).build());
// Pull the client to a variable to work around https://bugs.eclipse.org/bugs/show_bug.cgi?id=514884
RestHighLevelClient hlClient = highLevelClient();
try(BulkProcessor processor = new BulkProcessor.Builder(hlClient::bulkAsync, listener, threadPool)
.setConcurrentRequests(0)
.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB))
.setBulkActions(nbItems + 1)
.build()) {
try (BulkProcessor processor = BulkProcessor.builder(hlClient::bulkAsync, listener)
.setConcurrentRequests(0)
.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB))
.setBulkActions(nbItems + 1)
.build()) {
for (int i = 0; i < nbItems; i++) {
String id = String.valueOf(i);
boolean erroneous = randomBoolean();
@ -631,7 +629,7 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
if (opType == DocWriteRequest.OpType.DELETE) {
if (erroneous == false) {
assertEquals(RestStatus.CREATED,
highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
}
DeleteRequest deleteRequest = new DeleteRequest("index", "test", id);
processor.add(deleteRequest);
@ -653,10 +651,10 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
} else if (opType == DocWriteRequest.OpType.UPDATE) {
UpdateRequest updateRequest = new UpdateRequest("index", "test", id)
.doc(new IndexRequest().source(xContentType, "id", i));
.doc(new IndexRequest().source(xContentType, "id", i));
if (erroneous == false) {
assertEquals(RestStatus.CREATED,
highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status());
}
processor.add(updateRequest);
}
@ -676,8 +674,6 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
assertNull(error.get());
validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest);
terminate(threadPool);
}
private void validateBulkResponses(int nbItems, boolean[] errors, BulkResponse bulkResponse, BulkRequest bulkRequest) {

View File

@ -22,14 +22,12 @@ package org.elasticsearch.client;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.ProtocolVersion;
import org.apache.http.RequestLine;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.message.BasicHeader;
import org.apache.http.message.BasicHttpResponse;
import org.apache.http.message.BasicRequestLine;
import org.apache.http.message.BasicStatusLine;
import org.apache.lucene.util.BytesRef;
@ -38,6 +36,12 @@ import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.main.MainRequest;
import org.elasticsearch.action.main.MainResponse;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseListener;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.xcontent.XContentHelper;
@ -48,11 +52,14 @@ import org.junit.Before;
import java.io.IOException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import static java.util.Collections.emptyMap;
import static java.util.Collections.emptySet;
import static org.elasticsearch.client.ESRestHighLevelClientTestCase.execute;
import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyMapOf;
import static org.mockito.Matchers.anyObject;
@ -60,6 +67,7 @@ import static org.mockito.Matchers.anyVararg;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
/**
* Tests and demonstrates how {@link RestHighLevelClient} can be extended to support custom endpoints.
@ -92,31 +100,45 @@ public class CustomRestHighLevelClientTests extends ESTestCase {
final MainRequest request = new MainRequest();
final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10));
MainResponse response = execute(request, restHighLevelClient::custom, restHighLevelClient::customAsync, header);
MainResponse response = restHighLevelClient.custom(request, header);
assertEquals(header.getValue(), response.getNodeName());
response = execute(request, restHighLevelClient::customAndParse, restHighLevelClient::customAndParseAsync, header);
response = restHighLevelClient.customAndParse(request, header);
assertEquals(header.getValue(), response.getNodeName());
}
public void testCustomEndpointAsync() throws Exception {
final MainRequest request = new MainRequest();
final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10));
PlainActionFuture<MainResponse> future = PlainActionFuture.newFuture();
restHighLevelClient.customAsync(request, future, header);
assertEquals(header.getValue(), future.get().getNodeName());
future = PlainActionFuture.newFuture();
restHighLevelClient.customAndParseAsync(request, future, header);
assertEquals(header.getValue(), future.get().getNodeName());
}
/**
* The {@link RestHighLevelClient} must declare the following execution methods using the <code>protected</code> modifier
* so that they can be used by subclasses to implement custom logic.
*/
@SuppressForbidden(reason = "We're forced to uses Class#getDeclaredMethods() here because this test checks protected methods")
public void testMethodsVisibility() throws ClassNotFoundException {
String[] methodNames = new String[]{"performRequest", "performRequestAndParseEntity", "performRequestAsync",
"performRequestAsyncAndParseEntity"};
for (String methodName : methodNames) {
boolean found = false;
for (Method method : RestHighLevelClient.class.getDeclaredMethods()) {
if (method.getName().equals(methodName)) {
assertTrue("Method " + methodName + " must be protected", Modifier.isProtected(method.getModifiers()));
found = true;
}
}
assertTrue("Failed to find method " + methodName, found);
}
final String[] methodNames = new String[]{"performRequest",
"performRequestAsync",
"performRequestAndParseEntity",
"performRequestAsyncAndParseEntity",
"parseEntity",
"parseResponseException"};
final List<String> protectedMethods = Arrays.stream(RestHighLevelClient.class.getDeclaredMethods())
.filter(method -> Modifier.isProtected(method.getModifiers()))
.map(Method::getName)
.collect(Collectors.toList());
assertThat(protectedMethods, containsInAnyOrder(methodNames));
}
/**
@ -135,15 +157,20 @@ public class CustomRestHighLevelClientTests extends ESTestCase {
* Mocks the synchronous request execution as if it were executed by Elasticsearch.
*/
private Response mockPerformRequest(Header httpHeader) throws IOException {
final Response mockResponse = mock(Response.class);
when(mockResponse.getHost()).thenReturn(new HttpHost("localhost", 9200));
ProtocolVersion protocol = new ProtocolVersion("HTTP", 1, 1);
HttpResponse httpResponse = new BasicHttpResponse(new BasicStatusLine(protocol, 200, "OK"));
when(mockResponse.getStatusLine()).thenReturn(new BasicStatusLine(protocol, 200, "OK"));
MainResponse response = new MainResponse(httpHeader.getValue(), Version.CURRENT, ClusterName.DEFAULT, "_na", Build.CURRENT, true);
BytesRef bytesRef = XContentHelper.toXContent(response, XContentType.JSON, false).toBytesRef();
httpResponse.setEntity(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON));
when(mockResponse.getEntity()).thenReturn(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON));
RequestLine requestLine = new BasicRequestLine(HttpGet.METHOD_NAME, ENDPOINT, protocol);
return new Response(requestLine, new HttpHost("localhost", 9200), httpResponse);
when(mockResponse.getRequestLine()).thenReturn(requestLine);
return mockResponse;
}
/**

View File

@ -0,0 +1,68 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import org.elasticsearch.rest.RestStatus;
import java.io.IOException;
public class IndicesClientIT extends ESRestHighLevelClientTestCase {
public void testDeleteIndex() throws IOException {
{
// Delete an index that exists
String indexName = "test_index";
createIndex(indexName);
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(indexName);
DeleteIndexResponse deleteIndexResponse =
execute(deleteIndexRequest, highLevelClient().indices()::deleteIndex, highLevelClient().indices()::deleteIndexAsync);
assertTrue(deleteIndexResponse.isAcknowledged());
assertFalse(indexExists(indexName));
}
{
// Return 404 if index doesn't exist
String nonExistentIndex = "non_existent_index";
assertFalse(indexExists(nonExistentIndex));
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(nonExistentIndex);
ElasticsearchException exception = expectThrows(ElasticsearchException.class,
() -> execute(deleteIndexRequest, highLevelClient().indices()::deleteIndex, highLevelClient().indices()::deleteIndexAsync));
assertEquals(RestStatus.NOT_FOUND, exception.status());
}
}
private static void createIndex(String index) throws IOException {
Response response = client().performRequest("PUT", index);
assertEquals(200, response.getStatusLine().getStatusCode());
}
private static boolean indexExists(String index) throws IOException {
Response response = client().performRequest("HEAD", index);
return response.getStatusLine().getStatusCode() == 200;
}
}

View File

@ -21,8 +21,11 @@ package org.elasticsearch.client;
import org.apache.http.HttpEntity;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkShardRequest;
import org.elasticsearch.action.delete.DeleteRequest;
@ -34,6 +37,8 @@ import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.action.support.master.MasterNodeRequest;
import org.elasticsearch.action.support.replication.ReplicatedWriteRequest;
import org.elasticsearch.action.support.replication.ReplicationRequest;
import org.elasticsearch.action.update.UpdateRequest;
@ -42,6 +47,7 @@ import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.Streams;
import org.elasticsearch.common.lucene.uid.Versions;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentHelper;
@ -64,12 +70,15 @@ import org.elasticsearch.test.RandomObjects;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Constructor;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.StringJoiner;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;
import static java.util.Collections.singletonMap;
import static org.elasticsearch.client.Request.enforceSameContentType;
@ -77,20 +86,50 @@ import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertToXC
public class RequestTests extends ESTestCase {
public void testConstructor() throws Exception {
final String method = randomFrom("GET", "PUT", "POST", "HEAD", "DELETE");
final String endpoint = randomAlphaOfLengthBetween(1, 10);
final Map<String, String> parameters = singletonMap(randomAlphaOfLength(5), randomAlphaOfLength(5));
final HttpEntity entity = randomBoolean() ? new StringEntity(randomAlphaOfLengthBetween(1, 100), ContentType.TEXT_PLAIN) : null;
NullPointerException e = expectThrows(NullPointerException.class, () -> new Request(null, endpoint, parameters, entity));
assertEquals("method cannot be null", e.getMessage());
e = expectThrows(NullPointerException.class, () -> new Request(method, null, parameters, entity));
assertEquals("endpoint cannot be null", e.getMessage());
e = expectThrows(NullPointerException.class, () -> new Request(method, endpoint, null, entity));
assertEquals("parameters cannot be null", e.getMessage());
final Request request = new Request(method, endpoint, parameters, entity);
assertEquals(method, request.getMethod());
assertEquals(endpoint, request.getEndpoint());
assertEquals(parameters, request.getParameters());
assertEquals(entity, request.getEntity());
final Constructor<?>[] constructors = Request.class.getConstructors();
assertEquals("Expected only 1 constructor", 1, constructors.length);
assertTrue("Request constructor is not public", Modifier.isPublic(constructors[0].getModifiers()));
}
public void testClassVisibility() throws Exception {
assertTrue("Request class is not public", Modifier.isPublic(Request.class.getModifiers()));
}
public void testPing() {
Request request = Request.ping();
assertEquals("/", request.endpoint);
assertEquals(0, request.params.size());
assertNull(request.entity);
assertEquals("HEAD", request.method);
assertEquals("/", request.getEndpoint());
assertEquals(0, request.getParameters().size());
assertNull(request.getEntity());
assertEquals("HEAD", request.getMethod());
}
public void testInfo() {
Request request = Request.info();
assertEquals("/", request.endpoint);
assertEquals(0, request.params.size());
assertNull(request.entity);
assertEquals("GET", request.method);
assertEquals("/", request.getEndpoint());
assertEquals(0, request.getParameters().size());
assertNull(request.getEntity());
assertEquals("GET", request.getMethod());
}
public void testGet() {
@ -105,7 +144,7 @@ public class RequestTests extends ESTestCase {
Map<String, String> expectedParams = new HashMap<>();
setRandomTimeout(deleteRequest, expectedParams);
setRandomTimeout(deleteRequest::timeout, ReplicationRequest.DEFAULT_TIMEOUT, expectedParams);
setRandomRefreshPolicy(deleteRequest, expectedParams);
setRandomVersion(deleteRequest, expectedParams);
setRandomVersionType(deleteRequest, expectedParams);
@ -124,10 +163,10 @@ public class RequestTests extends ESTestCase {
}
Request request = Request.delete(deleteRequest);
assertEquals("/" + index + "/" + type + "/" + id, request.endpoint);
assertEquals(expectedParams, request.params);
assertEquals("DELETE", request.method);
assertNull(request.entity);
assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertEquals("DELETE", request.getMethod());
assertNull(request.getEntity());
}
public void testExists() {
@ -200,10 +239,34 @@ public class RequestTests extends ESTestCase {
}
}
Request request = requestConverter.apply(getRequest);
assertEquals("/" + index + "/" + type + "/" + id, request.endpoint);
assertEquals(expectedParams, request.params);
assertNull(request.entity);
assertEquals(method, request.method);
assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertNull(request.getEntity());
assertEquals(method, request.getMethod());
}
public void testDeleteIndex() throws IOException {
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest();
int numIndices = randomIntBetween(0, 5);
String[] indices = new String[numIndices];
for (int i = 0; i < numIndices; i++) {
indices[i] = "index-" + randomAlphaOfLengthBetween(2, 5);
}
deleteIndexRequest.indices(indices);
Map<String, String> expectedParams = new HashMap<>();
setRandomTimeout(deleteIndexRequest::timeout, AcknowledgedRequest.DEFAULT_ACK_TIMEOUT, expectedParams);
setRandomMasterTimeout(deleteIndexRequest, expectedParams);
setRandomIndicesOptions(deleteIndexRequest::indicesOptions, deleteIndexRequest::indicesOptions, expectedParams);
Request request = Request.deleteIndex(deleteIndexRequest);
assertEquals("/" + String.join(",", indices), request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertEquals("DELETE", request.getMethod());
assertNull(request.getEntity());
}
public void testIndex() throws IOException {
@ -224,7 +287,7 @@ public class RequestTests extends ESTestCase {
}
}
setRandomTimeout(indexRequest, expectedParams);
setRandomTimeout(indexRequest::timeout, ReplicationRequest.DEFAULT_TIMEOUT, expectedParams);
setRandomRefreshPolicy(indexRequest, expectedParams);
// There is some logic around _create endpoint and version/version type
@ -267,16 +330,16 @@ public class RequestTests extends ESTestCase {
Request request = Request.index(indexRequest);
if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) {
assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.endpoint);
assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.getEndpoint());
} else if (id != null) {
assertEquals("/" + index + "/" + type + "/" + id, request.endpoint);
assertEquals("/" + index + "/" + type + "/" + id, request.getEndpoint());
} else {
assertEquals("/" + index + "/" + type, request.endpoint);
assertEquals("/" + index + "/" + type, request.getEndpoint());
}
assertEquals(expectedParams, request.params);
assertEquals(method, request.method);
assertEquals(expectedParams, request.getParameters());
assertEquals(method, request.getMethod());
HttpEntity entity = request.entity;
HttpEntity entity = request.getEntity();
assertTrue(entity instanceof ByteArrayEntity);
assertEquals(indexRequest.getContentType().mediaTypeWithoutParameters(), entity.getContentType().getValue());
try (XContentParser parser = createParser(xContentType.xContent(), entity.getContent())) {
@ -367,11 +430,11 @@ public class RequestTests extends ESTestCase {
}
Request request = Request.update(updateRequest);
assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.endpoint);
assertEquals(expectedParams, request.params);
assertEquals("POST", request.method);
assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertEquals("POST", request.getMethod());
HttpEntity entity = request.entity;
HttpEntity entity = request.getEntity();
assertTrue(entity instanceof ByteArrayEntity);
UpdateRequest parsedUpdateRequest = new UpdateRequest();
@ -485,12 +548,12 @@ public class RequestTests extends ESTestCase {
}
Request request = Request.bulk(bulkRequest);
assertEquals("/_bulk", request.endpoint);
assertEquals(expectedParams, request.params);
assertEquals("POST", request.method);
assertEquals(xContentType.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());
byte[] content = new byte[(int) request.entity.getContentLength()];
try (InputStream inputStream = request.entity.getContent()) {
assertEquals("/_bulk", request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertEquals("POST", request.getMethod());
assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
byte[] content = new byte[(int) request.getEntity().getContentLength()];
try (InputStream inputStream = request.getEntity().getContent()) {
Streams.readFully(inputStream, content);
}
@ -541,7 +604,7 @@ public class RequestTests extends ESTestCase {
bulkRequest.add(new DeleteRequest("index", "type", "2"));
Request request = Request.bulk(bulkRequest);
assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());
assertEquals(XContentType.JSON.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
}
{
XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
@ -551,7 +614,7 @@ public class RequestTests extends ESTestCase {
bulkRequest.add(new DeleteRequest("index", "type", "2"));
Request request = Request.bulk(bulkRequest);
assertEquals(xContentType.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());
assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
}
{
XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE);
@ -563,7 +626,7 @@ public class RequestTests extends ESTestCase {
}
Request request = Request.bulk(new BulkRequest().add(updateRequest));
assertEquals(xContentType.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());
assertEquals(xContentType.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
}
{
BulkRequest bulkRequest = new BulkRequest();
@ -644,20 +707,7 @@ public class RequestTests extends ESTestCase {
expectedParams.put("scroll", searchRequest.scroll().keepAlive().getStringRep());
}
if (randomBoolean()) {
searchRequest.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean()));
}
expectedParams.put("ignore_unavailable", Boolean.toString(searchRequest.indicesOptions().ignoreUnavailable()));
expectedParams.put("allow_no_indices", Boolean.toString(searchRequest.indicesOptions().allowNoIndices()));
if (searchRequest.indicesOptions().expandWildcardsOpen() && searchRequest.indicesOptions().expandWildcardsClosed()) {
expectedParams.put("expand_wildcards", "open,closed");
} else if (searchRequest.indicesOptions().expandWildcardsOpen()) {
expectedParams.put("expand_wildcards", "open");
} else if (searchRequest.indicesOptions().expandWildcardsClosed()) {
expectedParams.put("expand_wildcards", "closed");
} else {
expectedParams.put("expand_wildcards", "none");
}
setRandomIndicesOptions(searchRequest::indicesOptions, searchRequest::indicesOptions, expectedParams);
SearchSourceBuilder searchSourceBuilder = null;
if (frequently()) {
@ -712,12 +762,12 @@ public class RequestTests extends ESTestCase {
endpoint.add(type);
}
endpoint.add("_search");
assertEquals(endpoint.toString(), request.endpoint);
assertEquals(expectedParams, request.params);
assertEquals(endpoint.toString(), request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
if (searchSourceBuilder == null) {
assertNull(request.entity);
assertNull(request.getEntity());
} else {
assertToXContentBody(searchSourceBuilder, request.entity);
assertToXContentBody(searchSourceBuilder, request.getEntity());
}
}
@ -728,11 +778,11 @@ public class RequestTests extends ESTestCase {
searchScrollRequest.scroll(randomPositiveTimeValue());
}
Request request = Request.searchScroll(searchScrollRequest);
assertEquals("GET", request.method);
assertEquals("/_search/scroll", request.endpoint);
assertEquals(0, request.params.size());
assertToXContentBody(searchScrollRequest, request.entity);
assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());
assertEquals("GET", request.getMethod());
assertEquals("/_search/scroll", request.getEndpoint());
assertEquals(0, request.getParameters().size());
assertToXContentBody(searchScrollRequest, request.getEntity());
assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
}
public void testClearScroll() throws IOException {
@ -742,11 +792,11 @@ public class RequestTests extends ESTestCase {
clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10));
}
Request request = Request.clearScroll(clearScrollRequest);
assertEquals("DELETE", request.method);
assertEquals("/_search/scroll", request.endpoint);
assertEquals(0, request.params.size());
assertToXContentBody(clearScrollRequest, request.entity);
assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.entity.getContentType().getValue());
assertEquals("DELETE", request.getMethod());
assertEquals("/_search/scroll", request.getEndpoint());
assertEquals(0, request.getParameters().size());
assertToXContentBody(clearScrollRequest, request.getEntity());
assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaTypeWithoutParameters(), request.getEntity().getContentType().getValue());
}
private static void assertToXContentBody(ToXContent expectedBody, HttpEntity actualEntity) throws IOException {
@ -869,13 +919,43 @@ public class RequestTests extends ESTestCase {
}
}
private static void setRandomTimeout(ReplicationRequest<?> request, Map<String, String> expectedParams) {
private static void setRandomIndicesOptions(Consumer<IndicesOptions> setter, Supplier<IndicesOptions> getter,
Map<String, String> expectedParams) {
if (randomBoolean()) {
setter.accept(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(),
randomBoolean()));
}
expectedParams.put("ignore_unavailable", Boolean.toString(getter.get().ignoreUnavailable()));
expectedParams.put("allow_no_indices", Boolean.toString(getter.get().allowNoIndices()));
if (getter.get().expandWildcardsOpen() && getter.get().expandWildcardsClosed()) {
expectedParams.put("expand_wildcards", "open,closed");
} else if (getter.get().expandWildcardsOpen()) {
expectedParams.put("expand_wildcards", "open");
} else if (getter.get().expandWildcardsClosed()) {
expectedParams.put("expand_wildcards", "closed");
} else {
expectedParams.put("expand_wildcards", "none");
}
}
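// Editor's note (illustrative, not part of the original change): with this helper,
// IndicesOptions.fromOptions(true, false, true, false) would produce ignore_unavailable=true,
// allow_no_indices=false and expand_wildcards=open in expectedParams.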
private static void setRandomTimeout(Consumer<String> setter, TimeValue defaultTimeout, Map<String, String> expectedParams) {
if (randomBoolean()) {
String timeout = randomTimeValue();
request.timeout(timeout);
setter.accept(timeout);
expectedParams.put("timeout", timeout);
} else {
expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep());
expectedParams.put("timeout", defaultTimeout.getStringRep());
}
}
private static void setRandomMasterTimeout(MasterNodeRequest<?> request, Map<String, String> expectedParams) {
if (randomBoolean()) {
String masterTimeout = randomTimeValue();
request.masterNodeTimeout(masterTimeout);
expectedParams.put("master_timeout", masterTimeout);
} else {
expectedParams.put("master_timeout", MasterNodeRequest.DEFAULT_MASTER_NODE_TIMEOUT.getStringRep());
}
}

View File

@ -36,7 +36,7 @@ import static org.hamcrest.CoreMatchers.instanceOf;
import static org.mockito.Mockito.mock;
/**
* This test works against a {@link RestHighLevelClient} subclass that simulats how custom response sections returned by
* This test works against a {@link RestHighLevelClient} subclass that simulates how custom response sections returned by
* Elasticsearch plugins can be parsed using the high level client.
*/
public class RestHighLevelClientExtTests extends ESTestCase {

View File

@ -165,7 +165,8 @@ public class RestHighLevelClientTests extends ESTestCase {
public void testSearchScroll() throws IOException {
Header[] headers = randomHeaders(random(), "Header");
SearchResponse mockSearchResponse = new SearchResponse(new SearchResponseSections(SearchHits.empty(), InternalAggregations.EMPTY,
null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 0, 100, new ShardSearchFailure[0]);
null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 0, 100, ShardSearchFailure.EMPTY_ARRAY,
SearchResponse.Clusters.EMPTY);
mockResponse(mockSearchResponse);
SearchResponse searchResponse = restHighLevelClient.searchScroll(new SearchScrollRequest(randomAlphaOfLengthBetween(5, 10)),
headers);

View File

@ -470,5 +470,6 @@ public class SearchIT extends ESRestHighLevelClientTestCase {
assertThat(searchResponse.getTotalShards(), greaterThan(0));
assertEquals(searchResponse.getTotalShards(), searchResponse.getSuccessfulShards());
assertEquals(0, searchResponse.getShardFailures().length);
assertEquals(SearchResponse.Clusters.EMPTY, searchResponse.getClusters());
}
}

View File

@ -19,13 +19,11 @@
package org.elasticsearch.client.documentation;
import org.elasticsearch.Build;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
@ -40,7 +38,6 @@ import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.main.MainResponse;
import org.elasticsearch.action.support.ActiveShardCount;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.support.replication.ReplicationResponse;
@ -49,9 +46,7 @@ import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
@ -64,7 +59,7 @@ import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.threadpool.Scheduler;
import java.io.IOException;
import java.util.Collections;
@ -868,31 +863,27 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
public void testBulkProcessor() throws InterruptedException, IOException {
Settings settings = Settings.builder().put("node.name", "my-application").build();
RestHighLevelClient client = highLevelClient();
{
// tag::bulk-processor-init
ThreadPool threadPool = new ThreadPool(settings); // <1>
BulkProcessor.Listener listener = new BulkProcessor.Listener() { // <2>
BulkProcessor.Listener listener = new BulkProcessor.Listener() { // <1>
@Override
public void beforeBulk(long executionId, BulkRequest request) {
// <3>
// <2>
}
@Override
public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
// <4>
// <3>
}
@Override
public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
// <5>
// <4>
}
};
BulkProcessor bulkProcessor = new BulkProcessor.Builder(client::bulkAsync, listener, threadPool)
.build(); // <6>
BulkProcessor bulkProcessor = BulkProcessor.builder(client::bulkAsync, listener).build(); // <5>
// end::bulk-processor-init
assertNotNull(bulkProcessor);
@ -917,7 +908,6 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
// tag::bulk-processor-close
bulkProcessor.close();
// end::bulk-processor-close
terminate(threadPool);
}
{
// tag::bulk-processor-listener
@ -944,19 +934,14 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
};
// end::bulk-processor-listener
ThreadPool threadPool = new ThreadPool(settings);
try {
// tag::bulk-processor-options
BulkProcessor.Builder builder = new BulkProcessor.Builder(client::bulkAsync, listener, threadPool);
builder.setBulkActions(500); // <1>
builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB)); // <2>
builder.setConcurrentRequests(0); // <3>
builder.setFlushInterval(TimeValue.timeValueSeconds(10L)); // <4>
builder.setBackoffPolicy(BackoffPolicy.constantBackoff(TimeValue.timeValueSeconds(1L), 3)); // <5>
// end::bulk-processor-options
} finally {
terminate(threadPool);
}
// tag::bulk-processor-options
BulkProcessor.Builder builder = BulkProcessor.builder(client::bulkAsync, listener);
builder.setBulkActions(500); // <1>
builder.setBulkSize(new ByteSizeValue(1L, ByteSizeUnit.MB)); // <2>
builder.setConcurrentRequests(0); // <3>
builder.setFlushInterval(TimeValue.timeValueSeconds(10L)); // <4>
builder.setBackoffPolicy(BackoffPolicy.constantBackoff(TimeValue.timeValueSeconds(1L), 3)); // <5>
// end::bulk-processor-options
}
}
}

View File

@ -0,0 +1,116 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.documentation;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.rest.RestStatus;
import java.io.IOException;
/**
* This class is used to generate the Java Indices API documentation.
* You need to wrap your code between two tags like:
* // tag::example[]
* // end::example[]
*
* Where example is your tag name.
*
* Then in the documentation, you can extract what is between tag and end tags with
* ["source","java",subs="attributes,callouts,macros"]
* --------------------------------------------------
* include-tagged::{doc-tests}/IndicesClientDocumentationIT.java[example]
* --------------------------------------------------
*/
public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase {
public void testDeleteIndex() throws IOException {
RestHighLevelClient client = highLevelClient();
{
Response createIndexResponse = client().performRequest("PUT", "/posts");
assertEquals(200, createIndexResponse.getStatusLine().getStatusCode());
}
{
// tag::delete-index-request
DeleteIndexRequest request = new DeleteIndexRequest("posts"); // <1>
// end::delete-index-request
// tag::delete-index-execute
DeleteIndexResponse deleteIndexResponse = client.indices().deleteIndex(request);
// end::delete-index-execute
assertTrue(deleteIndexResponse.isAcknowledged());
// tag::delete-index-response
boolean acknowledged = deleteIndexResponse.isAcknowledged(); // <1>
// end::delete-index-response
// tag::delete-index-execute-async
client.indices().deleteIndexAsync(request, new ActionListener<DeleteIndexResponse>() {
@Override
public void onResponse(DeleteIndexResponse deleteIndexResponse) {
// <1>
}
@Override
public void onFailure(Exception e) {
// <2>
}
});
// end::delete-index-execute-async
}
{
DeleteIndexRequest request = new DeleteIndexRequest("posts");
// tag::delete-index-request-timeout
request.timeout(TimeValue.timeValueMinutes(2)); // <1>
request.timeout("2m"); // <2>
// end::delete-index-request-timeout
// tag::delete-index-request-masterTimeout
request.masterNodeTimeout(TimeValue.timeValueMinutes(1)); // <1>
request.timeout("1m"); // <2>
// end::delete-index-request-masterTimeout
// tag::delete-index-request-indicesOptions
request.indicesOptions(IndicesOptions.lenientExpandOpen()); // <1>
// end::delete-index-request-indicesOptions
}
{
// tag::delete-index-notfound
try {
DeleteIndexRequest request = new DeleteIndexRequest("does_not_exist");
DeleteIndexResponse deleteIndexResponse = client.indices().deleteIndex(request);
} catch (ElasticsearchException exception) {
if (exception.status() == RestStatus.NOT_FOUND) {
// <1>
}
}
// end::delete-index-notfound
}
}
}

View File

@ -23,7 +23,7 @@ import org.apache.lucene.search.join.ScoreMode;
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.geo.ShapeRelation;
import org.elasticsearch.common.geo.builders.CoordinatesBuilder;
import org.elasticsearch.common.geo.builders.ShapeBuilders;
import org.elasticsearch.common.geo.builders.MultiPointBuilder;
import org.elasticsearch.common.unit.DistanceUnit;
import org.elasticsearch.index.query.GeoShapeQueryBuilder;
import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;
@ -189,7 +189,7 @@ public class QueryDSLDocumentationTests extends ESTestCase {
// tag::geo_shape
GeoShapeQueryBuilder qb = geoShapeQuery(
"pin.location", // <1>
ShapeBuilders.newMultiPoint( // <2>
new MultiPointBuilder( // <2>
new CoordinatesBuilder()
.coordinate(0, 0)
.coordinate(0, 10)

View File

@ -50,6 +50,7 @@ import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
@ -91,8 +92,9 @@ public class RestClient implements Closeable {
private static final Log logger = LogFactory.getLog(RestClient.class);
private final CloseableHttpAsyncClient client;
//we don't rely on default headers supported by HttpAsyncClient as those cannot be replaced
private final Header[] defaultHeaders;
// We don't rely on default headers supported by HttpAsyncClient as those cannot be replaced.
// These are package private for tests.
final List<Header> defaultHeaders;
private final long maxRetryTimeoutMillis;
private final String pathPrefix;
private final AtomicInteger lastHostIndex = new AtomicInteger(0);
@ -104,7 +106,7 @@ public class RestClient implements Closeable {
HttpHost[] hosts, String pathPrefix, FailureListener failureListener) {
this.client = client;
this.maxRetryTimeoutMillis = maxRetryTimeoutMillis;
this.defaultHeaders = defaultHeaders;
this.defaultHeaders = Collections.unmodifiableList(Arrays.asList(defaultHeaders));
this.failureListener = failureListener;
this.pathPrefix = pathPrefix;
setHosts(hosts);

View File

@ -33,6 +33,8 @@ import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.apache.http.message.BasicHeader;
import org.apache.http.nio.entity.NStringEntity;
import org.apache.http.ssl.SSLContextBuilder;
import org.apache.http.ssl.SSLContexts;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory;
import org.elasticsearch.client.Response;
@ -47,9 +49,6 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.security.KeyStoreException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
@ -258,7 +257,7 @@ public class RestClientDocumentation {
}
@SuppressWarnings("unused")
public void testCommonConfiguration() throws IOException, KeyStoreException, CertificateException, NoSuchAlgorithmException {
public void testCommonConfiguration() throws Exception {
{
//tag::rest-client-config-timeouts
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200))
@ -318,13 +317,14 @@ public class RestClientDocumentation {
{
Path keyStorePath = Paths.get("");
String keyStorePass = "";
final SSLContext sslContext = null;
//tag::rest-client-config-encrypted-communication
KeyStore keystore = KeyStore.getInstance("jks");
KeyStore truststore = KeyStore.getInstance("jks");
try (InputStream is = Files.newInputStream(keyStorePath)) {
keystore.load(is, keyStorePass.toCharArray());
truststore.load(is, keyStorePass.toCharArray());
}
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200))
SSLContextBuilder sslBuilder = SSLContexts.custom().loadTrustMaterial(truststore, null);
final SSLContext sslContext = sslBuilder.build();
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "https"))
.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {

View File

@ -0,0 +1 @@
eb21a035c66ad307e66ec8fce37f5d50fd62d039

View File

@ -1 +0,0 @@
2ef7b1cc34de149600f5e75bc2d5bf40de894e60

View File

@ -27,12 +27,16 @@ import org.elasticsearch.client.RestClientBuilder;
import java.io.Closeable;
import java.io.IOException;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
/**
* Class responsible for sniffing nodes from some source (default is elasticsearch itself) and setting them to a provided instance of
@ -45,6 +49,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
public class Sniffer implements Closeable {
private static final Log logger = LogFactory.getLog(Sniffer.class);
private static final String SNIFFER_THREAD_NAME = "es_rest_client_sniffer";
private final Task task;
@ -79,7 +84,8 @@ public class Sniffer implements Closeable {
this.restClient = restClient;
this.sniffIntervalMillis = sniffIntervalMillis;
this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis;
this.scheduledExecutorService = Executors.newScheduledThreadPool(1);
SnifferThreadFactory threadFactory = new SnifferThreadFactory(SNIFFER_THREAD_NAME);
this.scheduledExecutorService = Executors.newScheduledThreadPool(1, threadFactory);
scheduleNextRun(0);
}
@ -151,4 +157,34 @@ public class Sniffer implements Closeable {
public static SnifferBuilder builder(RestClient restClient) {
return new SnifferBuilder(restClient);
}
private static class SnifferThreadFactory implements ThreadFactory {
private final AtomicInteger threadNumber = new AtomicInteger(1);
private final String namePrefix;
private final ThreadFactory originalThreadFactory;
private SnifferThreadFactory(String namePrefix) {
this.namePrefix = namePrefix;
this.originalThreadFactory = AccessController.doPrivileged(new PrivilegedAction<ThreadFactory>() {
@Override
public ThreadFactory run() {
return Executors.defaultThreadFactory();
}
});
}
@Override
public Thread newThread(final Runnable r) {
return AccessController.doPrivileged(new PrivilegedAction<Thread>() {
@Override
public Thread run() {
Thread t = originalThreadFactory.newThread(r);
t.setName(namePrefix + "[T#" + threadNumber.getAndIncrement() + "]");
t.setDaemon(true);
return t;
}
});
}
}
}

View File

@ -55,6 +55,7 @@ import java.util.Set;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.startsWith;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.fail;
@ -128,7 +129,9 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
} catch(ResponseException e) {
Response response = e.getResponse();
if (sniffResponse.isFailure) {
assertThat(e.getMessage(), containsString("GET " + httpHost + "/_nodes/http?timeout=" + sniffRequestTimeout + "ms"));
final String errorPrefix = "method [GET], host [" + httpHost + "], URI [/_nodes/http?timeout=" + sniffRequestTimeout
+ "ms], status line [HTTP/1.1";
assertThat(e.getMessage(), startsWith(errorPrefix));
assertThat(e.getMessage(), containsString(Integer.toString(sniffResponse.nodesInfoResponseCode)));
assertThat(response.getHost(), equalTo(httpHost));
assertThat(response.getStatusLine().getStatusCode(), equalTo(sniffResponse.nodesInfoResponseCode));

View File

@ -20,7 +20,6 @@
package org.elasticsearch.transport.client;
import com.carrotsearch.randomizedtesting.RandomizedTest;
import org.apache.lucene.util.Constants;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
@ -52,7 +51,8 @@ public class PreBuiltTransportClientTests extends RandomizedTest {
@Test
public void testInstallPluginTwice() {
for (Class<? extends Plugin> plugin :
Arrays.asList(ParentJoinPlugin.class, ReindexPlugin.class, PercolatorPlugin.class, MustachePlugin.class)) {
Arrays.asList(ParentJoinPlugin.class, ReindexPlugin.class, PercolatorPlugin.class,
MustachePlugin.class)) {
try {
new PreBuiltTransportClient(Settings.EMPTY, plugin);
fail("exception expected");

View File

@ -58,7 +58,7 @@ dependencies {
compile 'org.elasticsearch:securesm:1.1'
// utilities
compile 'net.sf.jopt-simple:jopt-simple:5.0.2'
compile "org.elasticsearch:elasticsearch-cli:${version}"
compile 'com.carrotsearch:hppc:0.7.1'
// time handling, remove with java 8 time
@ -265,6 +265,12 @@ if (JavaVersion.current() > JavaVersion.VERSION_1_8) {
dependencyLicenses {
mapping from: /lucene-.*/, to: 'lucene'
mapping from: /jackson-.*/, to: 'jackson'
dependencies = project.configurations.runtime.fileCollection {
it.group.startsWith('org.elasticsearch') == false ||
// keep the following org.elasticsearch jars in
(it.name == 'jna' ||
it.name == 'securesm')
}
}
if (isEclipse == false || project.path == ":core-tests") {

core/cli/build.gradle (new file, 36 lines)
View File

@ -0,0 +1,36 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import org.elasticsearch.gradle.precommit.PrecommitTasks
apply plugin: 'elasticsearch.build'
archivesBaseName = 'elasticsearch-cli'
dependencies {
compile 'net.sf.jopt-simple:jopt-simple:5.0.2'
}
test.enabled = false
// Since CLI does not depend on :core, it cannot run the jarHell task
jarHell.enabled = false
forbiddenApisMain {
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
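// Editor's note (illustrative, not part of the original change): core consumes this new artifact through the
// dependency shown in the core build file above, i.e. compile "org.elasticsearch:elasticsearch-cli:${version}".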

View File

@ -23,11 +23,6 @@ import joptsimple.OptionException;
import joptsimple.OptionParser;
import joptsimple.OptionSet;
import joptsimple.OptionSpec;
import org.apache.logging.log4j.Level;
import org.apache.lucene.util.SetOnce;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.logging.LogConfigurator;
import org.elasticsearch.common.settings.Settings;
import java.io.Closeable;
import java.io.IOException;
@ -55,12 +50,13 @@ public abstract class Command implements Closeable {
this.description = description;
}
final SetOnce<Thread> shutdownHookThread = new SetOnce<>();
private Thread shutdownHookThread;
/** Parses options for this command from args and executes it. */
public final int main(String[] args, Terminal terminal) throws Exception {
if (addShutdownHook()) {
shutdownHookThread.set(new Thread(() -> {
shutdownHookThread = new Thread(() -> {
try {
this.close();
} catch (final IOException e) {
@ -75,16 +71,11 @@ public abstract class Command implements Closeable {
throw new AssertionError(impossible);
}
}
}));
Runtime.getRuntime().addShutdownHook(shutdownHookThread.get());
});
Runtime.getRuntime().addShutdownHook(shutdownHookThread);
}
if (shouldConfigureLoggingWithoutConfig()) {
// initialize default for es.logger.level because we will not read the log4j2.properties
final String loggerLevel = System.getProperty("es.logger.level", Level.INFO.name());
final Settings settings = Settings.builder().put("logger.level", loggerLevel).build();
LogConfigurator.configureWithoutConfig(settings);
}
beforeExecute();
try {
mainWithoutErrorHandling(args, terminal);
@ -103,14 +94,10 @@ public abstract class Command implements Closeable {
}
/**
* Indicate whether or not logging should be configured without reading a log4j2.properties. Most commands should do this because we do
* not configure logging for CLI tools. Only commands that configure logging on their own should not do this.
*
* @return true if logging should be configured without reading a log4j2.properties file
* Setup method to be executed before parsing or execution of the command being run. Any exceptions thrown by the
* method will not be cleanly caught by the parser.
*/
protected boolean shouldConfigureLoggingWithoutConfig() {
return true;
}
protected void beforeExecute() {}
/**
* Executes the command, but all errors are thrown.
@ -166,6 +153,11 @@ public abstract class Command implements Closeable {
return true;
}
/** Gets the shutdown hook thread if it exists **/
Thread getShutdownHookThread() {
return shutdownHookThread;
}
@Override
public void close() throws IOException {

View File

@ -16,23 +16,19 @@
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cli;
package org.elasticsearch.common.settings.loader;
import org.elasticsearch.common.xcontent.XContentType;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
/**
* Settings loader that loads (parses) the settings in a json format by flattening them
* into a map.
* Annotation to suppress forbidden-apis errors inside a whole class, a method, or a field.
*/
public class JsonSettingsLoader extends XContentSettingsLoader {
public JsonSettingsLoader(boolean allowNullValues) {
super(allowNullValues);
}
@Override
public XContentType contentType() {
return XContentType.JSON;
}
@Retention(RetentionPolicy.CLASS)
@Target({ ElementType.CONSTRUCTOR, ElementType.FIELD, ElementType.METHOD, ElementType.TYPE })
public @interface SuppressForbidden {
String reason();
}

View File

@ -19,8 +19,6 @@
package org.elasticsearch.cli;
import org.elasticsearch.common.SuppressForbidden;
import java.io.BufferedReader;
import java.io.Console;
import java.io.IOException;

View File

@ -0,0 +1 @@
eb21a035c66ad307e66ec8fce37f5d50fd62d039

View File

@ -1 +0,0 @@
2ef7b1cc34de149600f5e75bc2d5bf40de894e60

View File

@ -0,0 +1 @@
1c58cc9313ddf19f0900cd61ed044874278ce320

View File

@ -1 +0,0 @@
b88721371cfa2d7242bb5e52fe70861aa061c050

View File

@ -0,0 +1 @@
e853081fadaad3e98ed801937acc3d8f77580686

View File

@ -1 +0,0 @@
71590ad45cee21249774e2f93e5eca66e446cef3

View File

@ -0,0 +1 @@
1e08caf1d787c825307d8cc6362452086020d853

View File

@ -1 +0,0 @@
8bd44d50f9a6cdff9c7578ea39d524eb519e35ab

View File

@ -1 +0,0 @@
7e2f1637394eecdc3c8cd067b3f2cf4801b1bcf6

View File

@ -0,0 +1 @@
894f96d677880d4ab834a1356f62b875e579caaa

View File

@ -1 +0,0 @@
e0dcd508dfc4864a2f5a1963d6ffad170d970375

View File

@ -0,0 +1 @@
7a2999229464e7a324aa503c0a52ec0f05efe7bd

View File

@ -1 +0,0 @@
052f6548ae1688e126c29b5dc400929dc0128615

View File

@ -0,0 +1 @@
c041978c686866ee8534f538c6220238db3bb6be

View File

@ -54,13 +54,14 @@ The KStem stemmer in
was developed by Bob Krovetz and Sergio Guzman-Lara (CIIR-UMass Amherst)
under the BSD-license.
The Arabic,Persian,Romanian,Bulgarian, and Hindi analyzers (common) come with a default
The Arabic,Persian,Romanian,Bulgarian, Hindi and Bengali analyzers (common) come with a default
stopword list that is BSD-licensed created by Jacques Savoy. These files reside in:
analysis/common/src/resources/org/apache/lucene/analysis/ar/stopwords.txt,
analysis/common/src/resources/org/apache/lucene/analysis/fa/stopwords.txt,
analysis/common/src/resources/org/apache/lucene/analysis/ro/stopwords.txt,
analysis/common/src/resources/org/apache/lucene/analysis/bg/stopwords.txt,
analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt
analysis/common/src/resources/org/apache/lucene/analysis/hi/stopwords.txt,
analysis/common/src/resources/org/apache/lucene/analysis/bn/stopwords.txt
See http://members.unine.ch/jacques.savoy/clef/index.html.
The German,Spanish,Finnish,French,Hungarian,Italian,Portuguese,Russian and Swedish light stemmers

View File

@ -1 +0,0 @@
90771b57dd0a375feb60a5c8c23332342fb0ed2e

View File

@ -0,0 +1 @@
a508bf6b580471ee568dab7d2acfedfa5aadce70

View File

@ -1 +0,0 @@
87c3a38c236913b05ccfdb54c42087fd05941e2a

View File

@ -0,0 +1 @@
804a7ce82bba3d085733486bfde4846ecb77ce01

View File

@ -1 +0,0 @@
cb83f7ec902f19449867f29335e796631b096c9b

View File

@ -0,0 +1 @@
dd291b7ebf4845483895724d2562214dc7f40049

View File

@ -1 +0,0 @@
a34efb2ea4ad317a57a3f6d85132b0a7153ee97c

View File

@ -0,0 +1 @@
0732d16c16421fca058a2a07ca4081ec7696365b

View File

@ -1 +0,0 @@
b83bd072e3fb724089427c14fbc56a40f602488f

View File

@ -0,0 +1 @@
596550daabae765ad685112e0fe7c4f0fdfccb3f

View File

@ -1 +0,0 @@
ad174c3acb38b93e9791734d36edda0662d0f469

View File

@ -0,0 +1 @@
5f26dd64c195258a81175772ef7fe105e7d60a26

View File

@ -1 +0,0 @@
05542225c4b6f8b84e78a68d040a9cd71e0dc374

View File

@ -0,0 +1 @@
3ef64c58d0c09ca40d848efa96b585b7476271f2

View File

@ -1 +0,0 @@
069588dcc08cd04a08e39c814357d5bcf53b2ce4

View File

@ -0,0 +1 @@
1496ee5fa62206ee5ddf51042a340d6a9ee3b5de

View File

@ -1 +0,0 @@
2da58b69d82b0d2ad1d62c9cd989c5b55d72ae2a

View File

@ -0,0 +1 @@
1554920ab207a3245fa408d022a5c90ad3a1fea3

View File

@ -1 +0,0 @@
93268b47c1ecb056f84ada2e407055bf1a241489

View File

@ -0,0 +1 @@
5767c15c5ee97926829fd8a4337e434fa95f3c08

View File

@ -1 +0,0 @@
1b811e757a33bcd7b723b9502f2e52c73f563947

View File

@ -0,0 +1 @@
691f7b9ac05f3ad2ac7e80733ef70247904bd3ae

View File

@ -1 +0,0 @@
8a43d85b9a588b57de45884ddfbbb02c1c91d84f

View File

@ -0,0 +1 @@
6c64c04d802badb800516a8a574cb993929c3805

View File

@ -1 +0,0 @@
0e169a0172db942444cca3763f9867869db36550

View File

@ -0,0 +1 @@
3f1bc1aada8f06b176b782da24b9d7ad9641c41a

View File

@ -1 +0,0 @@
8afba460307d94daa1d79c6b981a2a4f70ee167d

View File

@ -0,0 +1 @@
8ded650aed23efb775f17be496e3e3870214e23b

View File

@ -1 +0,0 @@
445605ca56bfd9a44d59cc4d6b43c6723c785db8

View File

@ -0,0 +1 @@
8d0ed1589ebdccf34e888c6efc0134a13a238c85

View File

@ -1 +0,0 @@
3b132bea69e8ee099f416044970997bde80f4ea6

View File

@ -0,0 +1 @@
7a27ea250c5130b2922b86dea63cbb1cc10a660c

View File

@ -26,6 +26,7 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import java.net.URL;
import java.security.CodeSource;
import java.util.jar.JarInputStream;
import java.util.jar.Manifest;
@ -45,8 +46,8 @@ public class Build {
final boolean isSnapshot;
final String esPrefix = "elasticsearch-" + Version.CURRENT;
final URL url = getElasticsearchCodebase();
final String urlStr = url.toString();
final URL url = getElasticsearchCodeSourceLocation();
final String urlStr = url == null ? "" : url.toString();
if (urlStr.startsWith("file:/") && (urlStr.endsWith(esPrefix + ".jar") || urlStr.endsWith(esPrefix + "-SNAPSHOT.jar"))) {
try (JarInputStream jar = new JarInputStream(FileSystemUtils.openFileURLStream(url))) {
Manifest manifest = jar.getManifest();
@ -88,10 +89,13 @@ public class Build {
private final boolean isSnapshot;
/**
* Returns path to elasticsearch codebase path
* The location of the code source for Elasticsearch
*
* @return the location of the code source for Elasticsearch, which may be null
*/
static URL getElasticsearchCodebase() {
return Build.class.getProtectionDomain().getCodeSource().getLocation();
static URL getElasticsearchCodeSourceLocation() {
final CodeSource codeSource = Build.class.getProtectionDomain().getCodeSource();
return codeSource == null ? null : codeSource.getLocation();
}
private final String shortHash;

View File

@ -28,6 +28,11 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.monitor.jvm.JvmInfo;
import java.io.IOException;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Version implements Comparable<Version> {
/*
@ -93,8 +98,15 @@ public class Version implements Comparable<Version> {
public static final int V_5_6_0_ID = 5060099;
public static final Version V_5_6_0 = new Version(V_5_6_0_ID, org.apache.lucene.util.Version.LUCENE_6_6_0);
public static final int V_5_6_1_ID = 5060199;
// use proper Lucene constant once we are on a Lucene snapshot that knows about 6.6.1
public static final Version V_5_6_1 = new Version(V_5_6_1_ID, org.apache.lucene.util.Version.fromBits(6, 6, 1));
public static final Version V_5_6_1 = new Version(V_5_6_1_ID, org.apache.lucene.util.Version.LUCENE_6_6_1);
public static final int V_5_6_2_ID = 5060299;
public static final Version V_5_6_2 = new Version(V_5_6_2_ID, org.apache.lucene.util.Version.LUCENE_6_6_1);
public static final int V_5_6_3_ID = 5060399;
public static final Version V_5_6_3 = new Version(V_5_6_3_ID, org.apache.lucene.util.Version.LUCENE_6_6_1);
public static final int V_5_6_4_ID = 5060499;
public static final Version V_5_6_4 = new Version(V_5_6_4_ID, org.apache.lucene.util.Version.LUCENE_6_6_1);
public static final int V_5_6_5_ID = 5060599;
public static final Version V_5_6_5 = new Version(V_5_6_5_ID, org.apache.lucene.util.Version.LUCENE_6_6_1);
public static final int V_6_0_0_alpha1_ID = 6000001;
public static final Version V_6_0_0_alpha1 =
new Version(V_6_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0);
@ -110,16 +122,23 @@ public class Version implements Comparable<Version> {
public static final int V_6_0_0_rc1_ID = 6000051;
public static final Version V_6_0_0_rc1 =
new Version(V_6_0_0_rc1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0);
public static final int V_6_0_0_rc2_ID = 6000052;
public static final Version V_6_0_0_rc2 =
new Version(V_6_0_0_rc2_ID, org.apache.lucene.util.Version.LUCENE_7_0_1);
public static final int V_6_0_0_ID = 6000099;
public static final Version V_6_0_0 =
new Version(V_6_0_0_ID, org.apache.lucene.util.Version.LUCENE_7_0_1);
public static final int V_6_0_1_ID = 6000199;
public static final Version V_6_0_1 =
new Version(V_6_0_1_ID, org.apache.lucene.util.Version.LUCENE_7_0_1);
public static final int V_6_1_0_ID = 6010099;
public static final Version V_6_1_0 =
new Version(V_6_1_0_ID, org.apache.lucene.util.Version.LUCENE_7_0_0);
new Version(V_6_1_0_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
public static final int V_7_0_0_alpha1_ID = 7000001;
public static final Version V_7_0_0_alpha1 =
new Version(V_7_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0);
new Version(V_7_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
public static final Version CURRENT = V_7_0_0_alpha1;
// unreleased versions must be added to the above list with the suffix _UNRELEASED (with the exception of CURRENT)
static {
assert CURRENT.luceneVersion.equals(org.apache.lucene.util.Version.LATEST) : "Version must be upgraded to ["
+ org.apache.lucene.util.Version.LATEST + "] but is still set to [" + CURRENT.luceneVersion + "]";
@ -135,6 +154,12 @@ public class Version implements Comparable<Version> {
return V_7_0_0_alpha1;
case V_6_1_0_ID:
return V_6_1_0;
case V_6_0_1_ID:
return V_6_0_1;
case V_6_0_0_ID:
return V_6_0_0;
case V_6_0_0_rc2_ID:
return V_6_0_0_rc2;
case V_6_0_0_beta2_ID:
return V_6_0_0_beta2;
case V_6_0_0_rc1_ID:
@ -145,6 +170,14 @@ public class Version implements Comparable<Version> {
return V_6_0_0_alpha2;
case V_6_0_0_alpha1_ID:
return V_6_0_0_alpha1;
case V_5_6_5_ID:
return V_5_6_5;
case V_5_6_4_ID:
return V_5_6_4;
case V_5_6_3_ID:
return V_5_6_3;
case V_5_6_2_ID:
return V_5_6_2;
case V_5_6_1_ID:
return V_5_6_1;
case V_5_6_0_ID:
@ -338,19 +371,23 @@ public class Version implements Comparable<Version> {
* is a beta or RC release then the version itself is returned.
*/
public Version minimumCompatibilityVersion() {
final int bwcMajor;
final int bwcMinor;
// TODO: remove this entirely, making it static for each version
if (major == 6) { // we only specialize for current major here
bwcMajor = Version.V_5_6_0.major;
bwcMinor = Version.V_5_6_0.minor;
} else if (major == 7) { // we only specialize for current major here
return V_6_1_0;
} else {
bwcMajor = major;
bwcMinor = 0;
if (major >= 6) {
// all major versions from 6 onwards are compatible with last minor series of the previous major
final List<Version> declaredVersions = getDeclaredVersions(getClass());
Version bwcVersion = null;
for (int i = declaredVersions.size() - 1; i >= 0; i--) {
final Version candidateVersion = declaredVersions.get(i);
if (candidateVersion.major == major - 1 && candidateVersion.isRelease() && after(candidateVersion)) {
if (bwcVersion != null && candidateVersion.minor < bwcVersion.minor) {
break;
}
bwcVersion = candidateVersion;
}
}
return bwcVersion == null ? this : bwcVersion;
}
return Version.min(this, fromId(bwcMajor * 1000000 + bwcMinor * 10000 + 99));
return Version.min(this, fromId((int) major * 1000000 + 0 * 10000 + 99));
}
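// Editor's note (illustrative sketch, not part of the original change): given the constants declared above,
// a 7.0.0-alpha1 node walks the declared versions from newest to oldest and settles on the first release of
// the highest 6.x minor series (6.1.0 here), so the compatibility floor tracks the last minor series of the
// previous major without further hand-editing.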
/**
@ -460,4 +497,34 @@ public class Version implements Comparable<Version> {
public boolean isRelease() {
return build == 99;
}
/**
* Extracts a sorted list of declared version constants from a class.
* The argument would normally be Version.class but is exposed for
* testing with other classes-containing-version-constants.
*/
public static List<Version> getDeclaredVersions(final Class<?> versionClass) {
final Field[] fields = versionClass.getFields();
final List<Version> versions = new ArrayList<>(fields.length);
for (final Field field : fields) {
final int mod = field.getModifiers();
if (false == (Modifier.isStatic(mod) && Modifier.isFinal(mod) && Modifier.isPublic(mod))) {
continue;
}
if (field.getType() != Version.class) {
continue;
}
if ("CURRENT".equals(field.getName())) {
continue;
}
assert field.getName().matches("V(_\\d+)+(_(alpha|beta|rc)\\d+)?") : field.getName();
try {
versions.add(((Version) field.get(null)));
} catch (final IllegalAccessException e) {
throw new RuntimeException(e);
}
}
Collections.sort(versions);
return versions;
}
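// Editor's note (illustrative usage, not part of the original change):
//   List<Version> declared = Version.getDeclaredVersions(Version.class);
//   Version newest = declared.get(declared.size() - 1); // constants come back sorted ascending, CURRENT excluded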
}

View File

@ -24,6 +24,7 @@ import org.elasticsearch.common.CheckedConsumer;
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Consumer;
/**
@ -69,6 +70,42 @@ public interface ActionListener<Response> {
};
}
/**
* Creates a listener that listens for a response (or failure) and executes the
* corresponding runnable when the response (or failure) is received.
*
* @param runnable the runnable that will be called in the event of success or failure
* @param <Response> the type of the response
* @return a listener that listens for responses and invokes the runnable when received
*/
static <Response> ActionListener<Response> wrap(Runnable runnable) {
return wrap(r -> runnable.run(), e -> runnable.run());
}
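// Editor's note (illustrative usage, not part of the original change; the latch is hypothetical):
//   ActionListener<SearchResponse> listener = ActionListener.wrap(countDownLatch::countDown); // runs on success or failure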
/**
* Converts a listener to a {@link BiConsumer} for compatibility with the {@link java.util.concurrent.CompletableFuture}
* API.
*
* @param listener the listener that will be wrapped
* @param <Response> the type of the response
* @return a bi consumer that will complete the wrapped listener
*/
static <Response> BiConsumer<Response, Throwable> toBiConsumer(ActionListener<Response> listener) {
return (response, throwable) -> {
if (throwable == null) {
listener.onResponse(response);
} else {
if (throwable instanceof Exception) {
listener.onFailure((Exception) throwable);
} else if (throwable instanceof Error) {
throw (Error) throwable;
} else {
throw new AssertionError("Should have been either Error or Exception", throwable);
}
}
};
}
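// Editor's note (illustrative usage, not part of the original change; the future is hypothetical):
//   CompletableFuture<Response> future = new CompletableFuture<>();
//   future.whenComplete(ActionListener.toBiConsumer(listener)); // invokes onResponse/onFailure when the future completes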
/**
* Notifies every given listener with the response passed to {@link #onResponse(Object)}. If a listener itself throws an exception
* the exception is forwarded to {@link #onFailure(Exception)}. If in turn {@link #onFailure(Exception)} fails all remaining

View File

@ -128,7 +128,9 @@ import org.elasticsearch.action.admin.indices.settings.put.TransportUpdateSettin
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsAction;
import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresAction;
import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStoresAction;
import org.elasticsearch.action.admin.indices.shrink.ResizeAction;
import org.elasticsearch.action.admin.indices.shrink.ShrinkAction;
import org.elasticsearch.action.admin.indices.shrink.TransportResizeAction;
import org.elasticsearch.action.admin.indices.shrink.TransportShrinkAction;
import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction;
import org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction;
@ -181,7 +183,6 @@ import org.elasticsearch.action.search.TransportClearScrollAction;
import org.elasticsearch.action.search.TransportMultiSearchAction;
import org.elasticsearch.action.search.TransportSearchAction;
import org.elasticsearch.action.search.TransportSearchScrollAction;
import org.elasticsearch.action.support.ActionFilter;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.AutoCreateIndex;
import org.elasticsearch.action.support.DestructiveOperations;
@ -199,7 +200,6 @@ import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.NamedRegistry;
import org.elasticsearch.common.inject.AbstractModule;
import org.elasticsearch.common.inject.multibindings.MapBinder;
import org.elasticsearch.common.inject.multibindings.Multibinder;
import org.elasticsearch.common.logging.ESLoggerFactory;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.IndexScopedSettings;
@ -271,6 +271,7 @@ import org.elasticsearch.rest.action.admin.indices.RestRecoveryAction;
import org.elasticsearch.rest.action.admin.indices.RestRefreshAction;
import org.elasticsearch.rest.action.admin.indices.RestRolloverIndexAction;
import org.elasticsearch.rest.action.admin.indices.RestShrinkIndexAction;
import org.elasticsearch.rest.action.admin.indices.RestSplitIndexAction;
import org.elasticsearch.rest.action.admin.indices.RestSyncedFlushAction;
import org.elasticsearch.rest.action.admin.indices.RestUpdateSettingsAction;
import org.elasticsearch.rest.action.admin.indices.RestUpgradeAction;
@ -315,6 +316,7 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.usage.UsageService;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
@ -323,7 +325,6 @@ import java.util.function.Supplier;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;
import static java.util.Collections.unmodifiableList;
import static java.util.Collections.unmodifiableMap;
/**
@ -341,7 +342,7 @@ public class ActionModule extends AbstractModule {
private final SettingsFilter settingsFilter;
private final List<ActionPlugin> actionPlugins;
private final Map<String, ActionHandler<?, ?>> actions;
private final List<Class<? extends ActionFilter>> actionFilters;
private final ActionFilters actionFilters;
private final AutoCreateIndex autoCreateIndex;
private final DestructiveOperations destructiveOperations;
private final RestController restController;
@ -437,6 +438,7 @@ public class ActionModule extends AbstractModule {
actions.register(IndicesShardStoresAction.INSTANCE, TransportIndicesShardStoresAction.class);
actions.register(CreateIndexAction.INSTANCE, TransportCreateIndexAction.class);
actions.register(ShrinkAction.INSTANCE, TransportShrinkAction.class);
actions.register(ResizeAction.INSTANCE, TransportResizeAction.class);
actions.register(RolloverAction.INSTANCE, TransportRolloverAction.class);
actions.register(DeleteIndexAction.INSTANCE, TransportDeleteIndexAction.class);
actions.register(GetIndexAction.INSTANCE, TransportGetIndexAction.class);
@ -503,8 +505,9 @@ public class ActionModule extends AbstractModule {
return unmodifiableMap(actions.getRegistry());
}
private List<Class<? extends ActionFilter>> setupActionFilters(List<ActionPlugin> actionPlugins) {
return unmodifiableList(actionPlugins.stream().flatMap(p -> p.getActionFilters().stream()).collect(Collectors.toList()));
private ActionFilters setupActionFilters(List<ActionPlugin> actionPlugins) {
return new ActionFilters(
Collections.unmodifiableSet(actionPlugins.stream().flatMap(p -> p.getActionFilters().stream()).collect(Collectors.toSet())));
}
public void initRestHandlers(Supplier<DiscoveryNodes> nodesInCluster) {
@ -552,6 +555,7 @@ public class ActionModule extends AbstractModule {
registerHandler.accept(new RestIndicesAliasesAction(settings, restController));
registerHandler.accept(new RestCreateIndexAction(settings, restController));
registerHandler.accept(new RestShrinkIndexAction(settings, restController));
registerHandler.accept(new RestSplitIndexAction(settings, restController));
registerHandler.accept(new RestRolloverIndexAction(settings, restController));
registerHandler.accept(new RestDeleteIndexAction(settings, restController));
registerHandler.accept(new RestCloseIndexAction(settings, restController));
@ -649,11 +653,7 @@ public class ActionModule extends AbstractModule {
@Override
protected void configure() {
Multibinder<ActionFilter> actionFilterMultibinder = Multibinder.newSetBinder(binder(), ActionFilter.class);
for (Class<? extends ActionFilter> actionFilter : actionFilters) {
actionFilterMultibinder.addBinding().to(actionFilter);
}
bind(ActionFilters.class).asEagerSingleton();
bind(ActionFilters.class).toInstance(actionFilters);
bind(DestructiveOperations.class).toInstance(destructiveOperations);
if (false == transportClient) {
@ -676,6 +676,10 @@ public class ActionModule extends AbstractModule {
}
}
public ActionFilters getActionFilters() {
return actionFilters;
}
public RestController getRestController() {
return restController;
}
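The hunk above stops registering action filter classes with a Guice Multibinder and instead builds one immutable ActionFilters instance from the filter objects the plugins provide, binding it with toInstance. A minimal, self-contained sketch of that pattern, using stand-in Filter, FilterPlugin and Filters types rather than the real Elasticsearch classes:

-------------------------------------------------
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Stand-in types for illustration only; not the Elasticsearch classes.
interface Filter { void apply(String action); }
interface FilterPlugin { default List<Filter> getFilters() { return Collections.emptyList(); } }

final class Filters {
    private final Set<Filter> filters;
    Filters(Set<Filter> filters) { this.filters = filters; }
    Set<Filter> filters() { return filters; }
}

public class FilterSetup {
    // Mirrors the new setupActionFilters: plugins hand back ready-made filter
    // instances, collected once into an immutable holder, so no per-class
    // dependency-injection binding is needed anymore.
    static Filters setup(List<FilterPlugin> plugins) {
        return new Filters(Collections.unmodifiableSet(
            plugins.stream().flatMap(p -> p.getFilters().stream()).collect(Collectors.toSet())));
    }

    public static void main(String[] args) {
        Filters filters = setup(Collections.singletonList(new FilterPlugin() {
            @Override
            public List<Filter> getFilters() {
                return Collections.singletonList(action -> System.out.println("filtering " + action));
            }
        }));
        System.out.println(filters.filters().size()); // prints 1
    }
}
-------------------------------------------------

Because the filters are plain constructed objects collected at startup, the injector only has to hand out the pre-built instance, which is what the toInstance binding above does.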

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import org.elasticsearch.Version;
import org.elasticsearch.action.support.nodes.BaseNodeResponse;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.Nullable;
@ -36,6 +37,7 @@ import org.elasticsearch.monitor.fs.FsInfo;
import org.elasticsearch.monitor.jvm.JvmStats;
import org.elasticsearch.monitor.os.OsStats;
import org.elasticsearch.monitor.process.ProcessStats;
import org.elasticsearch.node.AdaptiveSelectionStats;
import org.elasticsearch.script.ScriptStats;
import org.elasticsearch.threadpool.ThreadPoolStats;
import org.elasticsearch.transport.TransportStats;
@ -86,6 +88,9 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
@Nullable
private IngestStats ingestStats;
@Nullable
private AdaptiveSelectionStats adaptiveSelectionStats;
NodeStats() {
}
@ -95,7 +100,8 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
@Nullable AllCircuitBreakerStats breaker,
@Nullable ScriptStats scriptStats,
@Nullable DiscoveryStats discoveryStats,
@Nullable IngestStats ingestStats) {
@Nullable IngestStats ingestStats,
@Nullable AdaptiveSelectionStats adaptiveSelectionStats) {
super(node);
this.timestamp = timestamp;
this.indices = indices;
@ -110,6 +116,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
this.scriptStats = scriptStats;
this.discoveryStats = discoveryStats;
this.ingestStats = ingestStats;
this.adaptiveSelectionStats = adaptiveSelectionStats;
}
public long getTimestamp() {
@ -199,6 +206,11 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
return ingestStats;
}
@Nullable
public AdaptiveSelectionStats getAdaptiveSelectionStats() {
return adaptiveSelectionStats;
}
public static NodeStats readNodeStats(StreamInput in) throws IOException {
NodeStats nodeInfo = new NodeStats();
nodeInfo.readFrom(in);
@ -223,6 +235,11 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
scriptStats = in.readOptionalWriteable(ScriptStats::new);
discoveryStats = in.readOptionalWriteable(DiscoveryStats::new);
ingestStats = in.readOptionalWriteable(IngestStats::new);
if (in.getVersion().onOrAfter(Version.V_6_1_0)) {
adaptiveSelectionStats = in.readOptionalWriteable(AdaptiveSelectionStats::new);
} else {
adaptiveSelectionStats = null;
}
}
@Override
@ -246,6 +263,9 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
out.writeOptionalWriteable(scriptStats);
out.writeOptionalWriteable(discoveryStats);
out.writeOptionalWriteable(ingestStats);
if (out.getVersion().onOrAfter(Version.V_6_1_0)) {
out.writeOptionalWriteable(adaptiveSelectionStats);
}
}
@Override
@ -306,6 +326,9 @@ public class NodeStats extends BaseNodeResponse implements ToXContentFragment {
if (getIngestStats() != null) {
getIngestStats().toXContent(builder, params);
}
if (getAdaptiveSelectionStats() != null) {
getAdaptiveSelectionStats().toXContent(builder, params);
}
return builder;
}
}
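The new adaptiveSelectionStats field is optional and is only put on the wire when the other side is on 6.1.0 or later, which keeps the format compatible with older nodes. A hedged sketch of that version-gated optional-field pattern, with DataInput/DataOutput and a plain int standing in for StreamInput/StreamOutput and Version (the numeric constant is illustrative only):

-------------------------------------------------
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class VersionGatedField {
    // Illustrative stand-in for Version.V_6_1_0; the real class compares Version objects.
    static final int V_6_1_0 = 6_010_099;

    static void write(DataOutput out, int remoteVersion, Long adaptiveStats) throws IOException {
        if (remoteVersion >= V_6_1_0) {               // older nodes do not know the field; skip it entirely
            out.writeBoolean(adaptiveStats != null);  // presence marker, like writeOptionalWriteable
            if (adaptiveStats != null) {
                out.writeLong(adaptiveStats);
            }
        }
    }

    static Long read(DataInput in, int remoteVersion) throws IOException {
        if (remoteVersion >= V_6_1_0 && in.readBoolean()) {
            return in.readLong();
        }
        return null; // the field is simply absent when talking to pre-6.1.0 nodes
    }
}
-------------------------------------------------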

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;
import org.elasticsearch.action.support.nodes.BaseNodesRequest;
import org.elasticsearch.common.io.stream.StreamInput;
@ -43,6 +44,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
private boolean script;
private boolean discovery;
private boolean ingest;
private boolean adaptiveSelection;
public NodesStatsRequest() {
}
@ -71,6 +73,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
this.script = true;
this.discovery = true;
this.ingest = true;
this.adaptiveSelection = true;
return this;
}
@ -90,6 +93,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
this.script = false;
this.discovery = false;
this.ingest = false;
this.adaptiveSelection = false;
return this;
}
@ -265,6 +269,18 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
return this;
}
public boolean adaptiveSelection() {
return adaptiveSelection;
}
/**
* Should adaptiveSelection statistics be returned.
*/
public NodesStatsRequest adaptiveSelection(boolean adaptiveSelection) {
this.adaptiveSelection = adaptiveSelection;
return this;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
@ -280,6 +296,11 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
script = in.readBoolean();
discovery = in.readBoolean();
ingest = in.readBoolean();
if (in.getVersion().onOrAfter(Version.V_6_1_0)) {
adaptiveSelection = in.readBoolean();
} else {
adaptiveSelection = false;
}
}
@Override
@ -297,5 +318,8 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
out.writeBoolean(script);
out.writeBoolean(discovery);
out.writeBoolean(ingest);
if (out.getVersion().onOrAfter(Version.V_6_1_0)) {
out.writeBoolean(adaptiveSelection);
}
}
}
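With the flag wired through, callers opt in the same way as for the existing sections; the no-arg constructor and adaptiveSelection(boolean) both appear in the hunks above. A small usage sketch, assuming the flag-resetting method shown above is the existing clear():

-------------------------------------------------
NodesStatsRequest request = new NodesStatsRequest();
request.clear();                  // deselect every stats section first
request.adaptiveSelection(true);  // then opt in to just the new adaptive selection section
-------------------------------------------------

When such a request reaches a pre-6.1.0 node the flag is never read from the stream and defaults to false, as the readFrom branch above shows.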

View File

@ -73,7 +73,7 @@ public class TransportNodesStatsAction extends TransportNodesAction<NodesStatsRe
NodesStatsRequest request = nodeStatsRequest.request;
return nodeService.stats(request.indices(), request.os(), request.process(), request.jvm(), request.threadPool(),
request.fs(), request.transport(), request.http(), request.breaker(), request.script(), request.discovery(),
request.ingest());
request.ingest(), request.adaptiveSelection());
}
public static class NodeStatsRequest extends BaseNodeRequest {

View File

@ -51,6 +51,17 @@ public class GetRepositoriesRequest extends MasterNodeReadRequest<GetRepositorie
this.repositories = repositories;
}
public GetRepositoriesRequest(StreamInput in) throws IOException {
super(in);
repositories = in.readStringArray();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArray(repositories);
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
@ -85,13 +96,6 @@ public class GetRepositoriesRequest extends MasterNodeReadRequest<GetRepositorie
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
repositories = in.readStringArray();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArray(repositories);
throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable");
}
}
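This hunk moves the request from the Streamable pattern (no-arg construction followed by readFrom) to the Writeable pattern (a constructor that takes the input stream), keeping readFrom only to fail fast if something still calls it. A stand-alone sketch of the same migration, with DataInput/DataOutput standing in for StreamInput/StreamOutput:

-------------------------------------------------
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class RepositoriesRequestSketch {
    private final String[] repositories;

    // Writeable style: all state is read in the stream constructor, so the field
    // can be final and no mutable readFrom step is needed after construction.
    RepositoriesRequestSketch(DataInput in) throws IOException {
        int count = in.readInt();
        repositories = new String[count];
        for (int i = 0; i < count; i++) {
            repositories[i] = in.readUTF();
        }
    }

    void writeTo(DataOutput out) throws IOException {
        out.writeInt(repositories.length);
        for (String repository : repositories) {
            out.writeUTF(repository);
        }
    }

    // Legacy Streamable entry point, kept only to fail fast if still invoked.
    void readFrom(DataInput in) throws IOException {
        throw new UnsupportedOperationException("usage of Streamable is to be replaced by Writeable");
    }
}
-------------------------------------------------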

Some files were not shown because too many files have changed in this diff.