Merge branch 'master' into zen2

David Turner 2018-10-03 08:36:34 +01:00
commit a9eae1d068
1274 changed files with 30328 additions and 12988 deletions


@ -350,26 +350,26 @@ and running elasticsearch distributions works correctly on supported operating s
These tests should really only be run in vagrant vms because they're destructive.
. Install Virtual Box and Vagrant.
+
. (Optional) Install https://github.com/fgrehm/vagrant-cachier[vagrant-cachier] to squeeze
a bit more performance out of the process:
+
--------------------------------------
vagrant plugin install vagrant-cachier
--------------------------------------
+
. Validate your installed dependencies:
+
-------------------------------------
./gradlew :qa:vagrant:vagrantCheckVersion
-------------------------------------
+
. Download and smoke test the VMs with `./gradlew vagrantSmokeTest` or
`./gradlew -Pvagrant.boxes=all vagrantSmokeTest`. The first time you run this it will
download the base images and provision the boxes and immediately quit. Downloading all
the images may take a long time. After the images are already on your machine, they won't
be downloaded again unless they have been updated to a new version.
+
. Run the tests with `./gradlew packagingTest`. This will cause Gradle to build
the tar, zip, and deb packages and all the plugins. It will then run the tests
on ubuntu-1404 and centos-7. We chose those two distributions as the default
@ -402,6 +402,7 @@ These are the linux flavors supported, all of which we provide images for
* ubuntu-1404 aka trusty
* ubuntu-1604 aka xenial
* ubuntu-1804 aka bionic beaver
* debian-8 aka jessie
* debian-9 aka stretch, the current debian stable distribution
* centos-6

Vagrantfile vendored

@ -61,6 +61,15 @@ Vagrant.configure(2) do |config|
SHELL
end
end
'ubuntu-1804'.tap do |box|
config.vm.define box, define_opts do |config|
config.vm.box = 'elastic/ubuntu-18.04-x86_64'
deb_common config, box, extra: <<-SHELL
# Install Jayatana so we can work around it being present.
[ -f /usr/share/java/jayatanaag.jar ] || install jayatana
SHELL
end
end
# Wheezy's backports don't contain Openjdk 8 and the backflips
# required to get the sun jdk on there just aren't worth it. We have
# jessie and stretch for testing debian and it works fine.


@ -31,7 +31,8 @@ class VagrantTestPlugin implements Plugin<Project> {
'opensuse-42',
'sles-12',
'ubuntu-1404',
'ubuntu-1604'
'ubuntu-1604',
'ubuntu-1804'
])
/** All Windows boxes that we test, which may or may not be supplied **/


@ -12,11 +12,25 @@
<!-- Checks Java files and forbids empty Javadoc comments -->
<module name="RegexpMultiline">
<property name="id" value="EmptyJavadoc"/>
<property name="format" value="\/\*[\s\*]*\*\/"/>
<property name="fileExtensions" value="java"/>
<property name="message" value="Empty javadoc comments are forbidden"/>
</module>
<!--
We include snippets that are wrapped in `// tag` and `// end` into the
docs, stripping the leading spaces. If the context is wider than 76
characters then it'll need to scroll. This fails the build if it sees
such snippets.
-->
<module name="RegexpMultiline">
<property name="id" value="SnippetLength"/>
<property name="format" value="^( *)\/\/\s*tag(.+)\s*\n(.*\n)*\1.{77,}\n(.*\n)*\1\/\/\s*end\2\s*$"/>
<property name="fileExtensions" value="java"/>
<property name="message" value="Code snippets longer than 76 characters get cut off when rendered in the docs"/>
</module>
<module name="TreeWalker">
<!-- It's our official line length! See checkstyle_suppressions.xml for the files that don't pass this. For now we
suppress the check there but enforce it everywhere else. This prevents the list from getting longer even if it is
@ -35,6 +49,8 @@
<module name="OuterTypeFilename" />
<!-- No line wraps inside of import and package statements. -->
<module name="NoLineWrap" />
<!-- only one statement per line should be allowed -->
<module name="OneStatementPerLine"/>
<!-- Each java file has only one outer class -->
<module name="OneTopLevelClass" />
<!-- The suffix L is preferred, because the letter l (ell) is often
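The SnippetLength pattern added above can be exercised in isolation with `java.util.regex`. The following standalone sketch (class name and sample snippets are invented for illustration; `String.repeat` assumes Java 11+) shows the rule only firing on `// tag`/`// end` blocks that contain a line longer than 76 characters:

```java
import java.util.regex.Pattern;

public class SnippetLengthDemo {
    // The SnippetLength format from the checkstyle rule, as a Java string literal.
    static final Pattern SNIPPET = Pattern.compile(
        "^( *)//\\s*tag(.+)\\s*\\n(.*\\n)*\\1.{77,}\\n(.*\\n)*\\1//\\s*end\\2\\s*$",
        Pattern.MULTILINE);

    public static void main(String[] args) {
        String shortSnippet = "    // tag::short\n"
            + "    int x = 1;\n"
            + "    // end::short\n";
        // 80 'x' characters push this line well past the 76-character limit
        String wideSnippet = "    // tag::wide\n"
            + "    String s = \"" + "x".repeat(80) + "\";\n"
            + "    // end::wide\n";
        System.out.println(SNIPPET.matcher(shortSnippet).find()); // false
        System.out.println(SNIPPET.matcher(wideSnippet).find());  // true
    }
}
```

Note how the backreference `\1` pins the 77-character check to the snippet's own indentation, which is what lets the docs build strip leading spaces safely.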


@ -21,6 +21,29 @@
configuration of classes that aren't in packages. -->
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]Dummy.java" checks="PackageDeclaration" />
<!--
Truly temporary suppression of snippets included in
documentation that are so wide that they scroll.
-->
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]CRUDDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]ClusterClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]GraphDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]IndicesClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]IngestClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]LicensingDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]MigrationDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]MigrationClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]MiscellaneousDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]MlClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]RollupDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]SearchDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]SecurityDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]SnapshotClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]StoredScriptsDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]TasksClientDocumentationIT.java" id="SnippetLength" />
<suppress files="client[/\\]rest-high-level[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]WatcherDocumentationIT.java" id="SnippetLength" />
<suppress files="modules[/\\]reindex[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]documentation[/\\]ReindexDocumentationIT.java" id="SnippetLength" />
<!-- Hopefully temporary suppression of LineLength on files that don't pass it. We should remove these when the
files start to pass. -->
<suppress files="client[/\\]rest[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]HeapBufferedAsyncResponseConsumerTests.java" checks="LineLength" />
@ -138,7 +161,6 @@
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]AcknowledgedRequestBuilder.java" checks="LineLength" />
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeOperationRequestBuilder.java" checks="LineLength" />
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeReadOperationRequestBuilder.java" checks="LineLength" />
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeAction.java" checks="LineLength" />
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequest.java" checks="LineLength" />
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequestBuilder.java" checks="LineLength" />
<suppress files="server[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]TransportClusterInfoAction.java" checks="LineLength" />


@ -19,13 +19,13 @@
package org.elasticsearch.client.benchmark.ops.bulk;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.client.benchmark.BenchmarkTask;
import org.elasticsearch.client.benchmark.metrics.Sample;
import org.elasticsearch.client.benchmark.metrics.SampleRecorder;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.io.PathUtils;
import org.elasticsearch.common.logging.ESLoggerFactory;
import java.io.BufferedReader;
import java.io.IOException;
@ -135,7 +135,7 @@ public class BulkBenchmarkTask implements BenchmarkTask {
private static final class BulkIndexer implements Runnable {
private static final Logger logger = ESLoggerFactory.getLogger(BulkIndexer.class.getName());
private static final Logger logger = LogManager.getLogger(BulkIndexer.class);
private final BlockingQueue<List<String>> bulkData;
private final int warmupIterations;


@ -48,6 +48,8 @@ import org.elasticsearch.client.ml.PostDataRequest;
import org.elasticsearch.client.ml.PutCalendarRequest;
import org.elasticsearch.client.ml.PutDatafeedRequest;
import org.elasticsearch.client.ml.PutJobRequest;
import org.elasticsearch.client.ml.StartDatafeedRequest;
import org.elasticsearch.client.ml.StopDatafeedRequest;
import org.elasticsearch.client.ml.UpdateJobRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
@ -231,6 +233,32 @@ final class MLRequestConverters {
return request;
}
static Request startDatafeed(StartDatafeedRequest startDatafeedRequest) throws IOException {
String endpoint = new EndpointBuilder()
.addPathPartAsIs("_xpack")
.addPathPartAsIs("ml")
.addPathPartAsIs("datafeeds")
.addPathPart(startDatafeedRequest.getDatafeedId())
.addPathPartAsIs("_start")
.build();
Request request = new Request(HttpPost.METHOD_NAME, endpoint);
request.setEntity(createEntity(startDatafeedRequest, REQUEST_BODY_CONTENT_TYPE));
return request;
}
static Request stopDatafeed(StopDatafeedRequest stopDatafeedRequest) throws IOException {
String endpoint = new EndpointBuilder()
.addPathPartAsIs("_xpack")
.addPathPartAsIs("ml")
.addPathPartAsIs("datafeeds")
.addPathPart(Strings.collectionToCommaDelimitedString(stopDatafeedRequest.getDatafeedIds()))
.addPathPartAsIs("_stop")
.build();
Request request = new Request(HttpPost.METHOD_NAME, endpoint);
request.setEntity(createEntity(stopDatafeedRequest, REQUEST_BODY_CONTENT_TYPE));
return request;
}
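The `EndpointBuilder` calls above amount to joining literal parts and an id into a slash-separated endpoint. A minimal, hypothetical stand-in (the class name and the datafeed id `feed-1` are invented; the real builder also URL-encodes each part) makes the resulting path visible:

```java
import java.util.StringJoiner;

public class EndpointSketch {
    // Joins path parts with '/' into a request endpoint, as EndpointBuilder does
    // above (minus the per-part URL-encoding the real builder performs).
    static String endpoint(String... parts) {
        StringJoiner joiner = new StringJoiner("/", "/", "");
        for (String part : parts) {
            joiner.add(part);
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        // mirrors startDatafeed above, with an invented datafeed id
        System.out.println(endpoint("_xpack", "ml", "datafeeds", "feed-1", "_start"));
        // prints /_xpack/ml/datafeeds/feed-1/_start
    }
}
```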
static Request deleteForecast(DeleteForecastRequest deleteForecastRequest) {
String endpoint = new EndpointBuilder()
.addPathPartAsIs("_xpack")


@ -58,6 +58,10 @@ import org.elasticsearch.client.ml.PutDatafeedRequest;
import org.elasticsearch.client.ml.PutDatafeedResponse;
import org.elasticsearch.client.ml.PutJobRequest;
import org.elasticsearch.client.ml.PutJobResponse;
import org.elasticsearch.client.ml.StartDatafeedRequest;
import org.elasticsearch.client.ml.StartDatafeedResponse;
import org.elasticsearch.client.ml.StopDatafeedRequest;
import org.elasticsearch.client.ml.StopDatafeedResponse;
import org.elasticsearch.client.ml.UpdateJobRequest;
import org.elasticsearch.client.ml.job.stats.JobStats;
@ -565,6 +569,86 @@ public final class MachineLearningClient {
Collections.emptySet());
}
/**
* Starts the given Machine Learning Datafeed
* <p>
* For additional info
* see <a href="http://www.elastic.co/guide/en/elasticsearch/reference/current/ml-start-datafeed.html">
* ML Start Datafeed documentation</a>
*
* @param request The request to start the datafeed
* @param options Additional request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return action acknowledgement
* @throws IOException when there is a serialization issue sending the request or receiving the response
*/
public StartDatafeedResponse startDatafeed(StartDatafeedRequest request, RequestOptions options) throws IOException {
return restHighLevelClient.performRequestAndParseEntity(request,
MLRequestConverters::startDatafeed,
options,
StartDatafeedResponse::fromXContent,
Collections.emptySet());
}
/**
* Starts the given Machine Learning Datafeed asynchronously and notifies the listener on completion
* <p>
* For additional info
* see <a href="http://www.elastic.co/guide/en/elasticsearch/reference/current/ml-start-datafeed.html">
* ML Start Datafeed documentation</a>
*
* @param request The request to start the datafeed
* @param options Additional request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener Listener to be notified upon request completion
*/
public void startDatafeedAsync(StartDatafeedRequest request, RequestOptions options, ActionListener<StartDatafeedResponse> listener) {
restHighLevelClient.performRequestAsyncAndParseEntity(request,
MLRequestConverters::startDatafeed,
options,
StartDatafeedResponse::fromXContent,
listener,
Collections.emptySet());
}
/**
* Stops the given Machine Learning Datafeed
* <p>
* For additional info
* see <a href="http://www.elastic.co/guide/en/elasticsearch/reference/current/ml-stop-datafeed.html">
* ML Stop Datafeed documentation</a>
*
* @param request The request to stop the datafeed
* @param options Additional request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return action acknowledgement
* @throws IOException when there is a serialization issue sending the request or receiving the response
*/
public StopDatafeedResponse stopDatafeed(StopDatafeedRequest request, RequestOptions options) throws IOException {
return restHighLevelClient.performRequestAndParseEntity(request,
MLRequestConverters::stopDatafeed,
options,
StopDatafeedResponse::fromXContent,
Collections.emptySet());
}
/**
* Stops the given Machine Learning Datafeed asynchronously and notifies the listener on completion
* <p>
* For additional info
* see <a href="http://www.elastic.co/guide/en/elasticsearch/reference/current/ml-stop-datafeed.html">
* ML Stop Datafeed documentation</a>
*
* @param request The request to stop the datafeed
* @param options Additional request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener Listener to be notified upon request completion
*/
public void stopDatafeedAsync(StopDatafeedRequest request, RequestOptions options, ActionListener<StopDatafeedResponse> listener) {
restHighLevelClient.performRequestAsyncAndParseEntity(request,
MLRequestConverters::stopDatafeed,
options,
StopDatafeedResponse::fromXContent,
listener,
Collections.emptySet());
}
/**
* Updates a Machine Learning {@link org.elasticsearch.client.ml.job.config.Job}
* <p>


@ -465,7 +465,8 @@ final class RequestConverters {
Params params = new Params(request)
.withRefresh(reindexRequest.isRefresh())
.withTimeout(reindexRequest.getTimeout())
.withWaitForActiveShards(reindexRequest.getWaitForActiveShards());
.withWaitForActiveShards(reindexRequest.getWaitForActiveShards())
.withRequestsPerSecond(reindexRequest.getRequestsPerSecond());
if (reindexRequest.getScrollTime() != null) {
params.putParam("scroll", reindexRequest.getScrollTime());
@ -484,6 +485,7 @@ final class RequestConverters {
.withRefresh(updateByQueryRequest.isRefresh())
.withTimeout(updateByQueryRequest.getTimeout())
.withWaitForActiveShards(updateByQueryRequest.getWaitForActiveShards())
.withRequestsPerSecond(updateByQueryRequest.getRequestsPerSecond())
.withIndicesOptions(updateByQueryRequest.indicesOptions());
if (updateByQueryRequest.isAbortOnVersionConflict() == false) {
params.putParam("conflicts", "proceed");
@ -510,6 +512,7 @@ final class RequestConverters {
.withRefresh(deleteByQueryRequest.isRefresh())
.withTimeout(deleteByQueryRequest.getTimeout())
.withWaitForActiveShards(deleteByQueryRequest.getWaitForActiveShards())
.withRequestsPerSecond(deleteByQueryRequest.getRequestsPerSecond())
.withIndicesOptions(deleteByQueryRequest.indicesOptions());
if (deleteByQueryRequest.isAbortOnVersionConflict() == false) {
params.putParam("conflicts", "proceed");
@ -527,6 +530,29 @@ final class RequestConverters {
return request;
}
static Request rethrottleReindex(RethrottleRequest rethrottleRequest) {
return rethrottle(rethrottleRequest, "_reindex");
}
static Request rethrottleUpdateByQuery(RethrottleRequest rethrottleRequest) {
return rethrottle(rethrottleRequest, "_update_by_query");
}
static Request rethrottleDeleteByQuery(RethrottleRequest rethrottleRequest) {
return rethrottle(rethrottleRequest, "_delete_by_query");
}
private static Request rethrottle(RethrottleRequest rethrottleRequest, String firstPathPart) {
String endpoint = new EndpointBuilder().addPathPart(firstPathPart).addPathPart(rethrottleRequest.getTaskId().toString())
.addPathPart("_rethrottle").build();
Request request = new Request(HttpPost.METHOD_NAME, endpoint);
Params params = new Params(request)
.withRequestsPerSecond(rethrottleRequest.getRequestsPerSecond());
// we set "group_by" to "none" because this is the response format we can parse back
params.putParam("group_by", "none");
return request;
}
static Request putScript(PutStoredScriptRequest putStoredScriptRequest) throws IOException {
String endpoint = new EndpointBuilder().addPathPartAsIs("_scripts").addPathPart(putStoredScriptRequest.id()).build();
Request request = new Request(HttpPost.METHOD_NAME, endpoint);
@ -714,6 +740,16 @@ final class RequestConverters {
return this;
}
Params withRequestsPerSecond(float requestsPerSecond) {
// the default in AbstractBulkByScrollRequest is Float.POSITIVE_INFINITY,
// but we don't want to add that to the URL parameters, instead we use -1
if (Float.isFinite(requestsPerSecond)) {
return putParam(RethrottleRequest.REQUEST_PER_SECOND_PARAMETER, Float.toString(requestsPerSecond));
} else {
return putParam(RethrottleRequest.REQUEST_PER_SECOND_PARAMETER, "-1");
}
}
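The branch above can be illustrated on its own: a finite `requests_per_second` is sent verbatim, while the unthrottled default (`Float.POSITIVE_INFINITY`) is encoded as `-1` on the URL. A hedged sketch, with invented class and method names:

```java
public class RequestsPerSecondParam {
    // A finite requests_per_second goes on the URL verbatim; the "unthrottled"
    // default (Float.POSITIVE_INFINITY) is sent as -1, matching the branch above.
    static String toParamValue(float requestsPerSecond) {
        return Float.isFinite(requestsPerSecond) ? Float.toString(requestsPerSecond) : "-1";
    }

    public static void main(String[] args) {
        System.out.println(toParamValue(100.5f));                  // 100.5
        System.out.println(toParamValue(Float.POSITIVE_INFINITY)); // -1
    }
}
```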
Params withRetryOnConflict(int retryOnConflict) {
if (retryOnConflict > 0) {
return putParam("retry_on_conflict", String.valueOf(retryOnConflict));
@ -958,7 +994,7 @@ final class RequestConverters {
private static String encodePart(String pathPart) {
try {
//encode each part (e.g. index, type and id) separately before merging them into the path
//we prepend "/" to the path part to make this pate absolute, otherwise there can be issues with
//we prepend "/" to the path part to make this path absolute, otherwise there can be issues with
//paths that start with `-` or contain `:`
URI uri = new URI(null, null, null, -1, "/" + pathPart, null, null);
//manually encode any slash that each part may contain
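The per-part encoding trick described in these comments can be reproduced with `java.net.URI` alone. This sketch (the helper name is ours; the surrounding converter code may differ in detail) shows both the prepended `/` and the manual slash encoding:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class EncodePartDemo {
    // Percent-encode a single path part by letting java.net.URI quote illegal
    // characters; the "/" prefix keeps parts starting with '-' or containing ':'
    // from being misread, and literal slashes are then encoded by hand.
    static String encodePart(String pathPart) {
        try {
            URI uri = new URI(null, null, null, -1, "/" + pathPart, null, null);
            return uri.getRawPath().substring(1).replaceAll("/", "%2F");
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(encodePart("my index")); // my%20index
        System.out.println(encodePart("a/b"));      // a%2Fb
    }
}
```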


@ -25,6 +25,7 @@ import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest;
import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest;
import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse;
@ -474,13 +475,14 @@ public class RestHighLevelClient implements Closeable {
* Asynchronously executes an update by query request.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html">
* Update By Query API on elastic.co</a>
* @param updateByQueryRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public final void updateByQueryAsync(UpdateByQueryRequest reindexRequest, RequestOptions options,
ActionListener<BulkByScrollResponse> listener) {
public final void updateByQueryAsync(UpdateByQueryRequest updateByQueryRequest, RequestOptions options,
ActionListener<BulkByScrollResponse> listener) {
performRequestAsyncAndParseEntity(
reindexRequest, RequestConverters::updateByQuery, options, BulkByScrollResponse::fromXContent, listener, emptySet()
updateByQueryRequest, RequestConverters::updateByQuery, options, BulkByScrollResponse::fromXContent, listener, emptySet()
);
}
@ -503,16 +505,103 @@ public class RestHighLevelClient implements Closeable {
* Asynchronously executes a delete by query request.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html">
* Delete By Query API on elastic.co</a>
* @param deleteByQueryRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public final void deleteByQueryAsync(DeleteByQueryRequest reindexRequest, RequestOptions options,
public final void deleteByQueryAsync(DeleteByQueryRequest deleteByQueryRequest, RequestOptions options,
ActionListener<BulkByScrollResponse> listener) {
performRequestAsyncAndParseEntity(
reindexRequest, RequestConverters::deleteByQuery, options, BulkByScrollResponse::fromXContent, listener, emptySet()
deleteByQueryRequest, RequestConverters::deleteByQuery, options, BulkByScrollResponse::fromXContent, listener, emptySet()
);
}
/**
* Executes a delete by query rethrottle request.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html">
* Delete By Query API on elastic.co</a>
* @param rethrottleRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response
* @throws IOException in case there is a problem sending the request or parsing back the response
*/
public final ListTasksResponse deleteByQueryRethrottle(RethrottleRequest rethrottleRequest, RequestOptions options) throws IOException {
return performRequestAndParseEntity(rethrottleRequest, RequestConverters::rethrottleDeleteByQuery, options,
ListTasksResponse::fromXContent, emptySet());
}
/**
* Asynchronously executes a delete by query rethrottle request.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html">
* Delete By Query API on elastic.co</a>
* @param rethrottleRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public final void deleteByQueryRethrottleAsync(RethrottleRequest rethrottleRequest, RequestOptions options,
ActionListener<ListTasksResponse> listener) {
performRequestAsyncAndParseEntity(rethrottleRequest, RequestConverters::rethrottleDeleteByQuery, options,
ListTasksResponse::fromXContent, listener, emptySet());
}
/**
* Executes an update by query rethrottle request.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html">
* Update By Query API on elastic.co</a>
* @param rethrottleRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response
* @throws IOException in case there is a problem sending the request or parsing back the response
*/
public final ListTasksResponse updateByQueryRethrottle(RethrottleRequest rethrottleRequest, RequestOptions options) throws IOException {
return performRequestAndParseEntity(rethrottleRequest, RequestConverters::rethrottleUpdateByQuery, options,
ListTasksResponse::fromXContent, emptySet());
}
/**
* Asynchronously executes an update by query rethrottle request.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html">
* Update By Query API on elastic.co</a>
* @param rethrottleRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public final void updateByQueryRethrottleAsync(RethrottleRequest rethrottleRequest, RequestOptions options,
ActionListener<ListTasksResponse> listener) {
performRequestAsyncAndParseEntity(rethrottleRequest, RequestConverters::rethrottleUpdateByQuery, options,
ListTasksResponse::fromXContent, listener, emptySet());
}
/**
* Executes a reindex rethrottling request.
* See the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#docs-reindex-rethrottle">
* Reindex rethrottling API on elastic.co</a>
*
* @param rethrottleRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response
* @throws IOException in case there is a problem sending the request or parsing back the response
*/
public final ListTasksResponse reindexRethrottle(RethrottleRequest rethrottleRequest, RequestOptions options) throws IOException {
return performRequestAndParseEntity(rethrottleRequest, RequestConverters::rethrottleReindex, options,
ListTasksResponse::fromXContent, emptySet());
}
/**
* Asynchronously executes a reindex rethrottling request.
* See the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#docs-reindex-rethrottle">
* Reindex rethrottling API on elastic.co</a>
*
* @param rethrottleRequest the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public final void reindexRethrottleAsync(RethrottleRequest rethrottleRequest, RequestOptions options,
ActionListener<ListTasksResponse> listener) {
performRequestAsyncAndParseEntity(rethrottleRequest, RequestConverters::rethrottleReindex, options, ListTasksResponse::fromXContent,
listener, emptySet());
}
/**
* Pings the remote Elasticsearch cluster and returns true if the ping succeeded, false otherwise
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized


@ -0,0 +1,77 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.elasticsearch.tasks.TaskId;
import java.util.Objects;
/**
* A request changing throttling of a task.
*/
public class RethrottleRequest implements Validatable {
static final String REQUEST_PER_SECOND_PARAMETER = "requests_per_second";
private final TaskId taskId;
private final float requestsPerSecond;
/**
* Create a new {@link RethrottleRequest} which disables any throttling for the given taskId.
* @param taskId the task for which throttling will be disabled
*/
public RethrottleRequest(TaskId taskId) {
this.taskId = taskId;
this.requestsPerSecond = Float.POSITIVE_INFINITY;
}
/**
* Create a new {@link RethrottleRequest} which changes the throttling for the given taskId.
* @param taskId the task that throttling changes will be applied to
* @param requestsPerSecond the number of requests per second that the task should perform. This needs to be a positive value.
*/
public RethrottleRequest(TaskId taskId, float requestsPerSecond) {
Objects.requireNonNull(taskId, "taskId cannot be null");
if (requestsPerSecond <= 0) {
throw new IllegalArgumentException("requestsPerSecond needs to be a positive value but was [" + requestsPerSecond + "]");
}
this.taskId = taskId;
this.requestsPerSecond = requestsPerSecond;
}
/**
* @return the task Id
*/
public TaskId getTaskId() {
return taskId;
}
/**
* @return the requests per second value
*/
public float getRequestsPerSecond() {
return requestsPerSecond;
}
@Override
public String toString() {
return "RethrottleRequest: taskId = " + taskId + "; requestsPerSecond = " + requestsPerSecond;
}
}
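The two constructors above encode a simple rule: zero or negative rates are rejected, so disabling throttling must go through the single-argument constructor that picks `Float.POSITIVE_INFINITY`. A small stand-alone sketch of the same check (names invented):

```java
public class RethrottleValidation {
    // Mirrors the constructor check above: zero or negative rates are rejected,
    // so "disable throttling" must use the taskId-only constructor instead.
    static float validateRequestsPerSecond(float requestsPerSecond) {
        if (requestsPerSecond <= 0) {
            throw new IllegalArgumentException(
                "requestsPerSecond needs to be a positive value but was [" + requestsPerSecond + "]");
        }
        return requestsPerSecond;
    }

    public static void main(String[] args) {
        System.out.println(validateRequestsPerSecond(500f)); // 500.0
        try {
            validateRequestsPerSecond(0f);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```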


@ -20,6 +20,8 @@
package org.elasticsearch.client;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.client.rollup.GetRollupJobRequest;
import org.elasticsearch.client.rollup.GetRollupJobResponse;
import org.elasticsearch.client.rollup.PutRollupJobRequest;
import org.elasticsearch.client.rollup.PutRollupJobResponse;
@ -73,4 +75,37 @@ public class RollupClient {
PutRollupJobResponse::fromXContent,
listener, Collections.emptySet());
}
/**
* Get a rollup job from the cluster.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-put-job.html">
* the docs</a> for more.
* @param request the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response
* @throws IOException in case there is a problem sending the request or parsing back the response
*/
public GetRollupJobResponse getRollupJob(GetRollupJobRequest request, RequestOptions options) throws IOException {
return restHighLevelClient.performRequestAndParseEntity(request,
RollupRequestConverters::getJob,
options,
GetRollupJobResponse::fromXContent,
Collections.emptySet());
}
/**
* Asynchronously get a rollup job from the cluster.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-put-job.html">
* the docs</a> for more.
* @param request the request
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public void getRollupJobAsync(GetRollupJobRequest request, RequestOptions options, ActionListener<GetRollupJobResponse> listener) {
restHighLevelClient.performRequestAsyncAndParseEntity(request,
RollupRequestConverters::getJob,
options,
GetRollupJobResponse::fromXContent,
listener, Collections.emptySet());
}
}


@ -18,7 +18,9 @@
*/
package org.elasticsearch.client;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPut;
import org.elasticsearch.client.rollup.GetRollupJobRequest;
import org.elasticsearch.client.rollup.PutRollupJobRequest;
import java.io.IOException;
@ -42,4 +44,14 @@ final class RollupRequestConverters {
request.setEntity(createEntity(putRollupJobRequest, REQUEST_BODY_CONTENT_TYPE));
return request;
}
static Request getJob(final GetRollupJobRequest getRollupJobRequest) {
String endpoint = new RequestConverters.EndpointBuilder()
.addPathPartAsIs("_xpack")
.addPathPartAsIs("rollup")
.addPathPartAsIs("job")
.addPathPart(getRollupJobRequest.getJobId())
.build();
return new Request(HttpGet.METHOD_NAME, endpoint);
}
}
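For reference, the converter above joins three fixed path parts and the job id. A minimal standalone sketch of the endpoint it is expected to build, assuming `EndpointBuilder` separates parts with `/` and prefixes a leading slash (the job id `sensor` is hypothetical, and percent-encoding of ids is ignored here):

```java
// Standalone sketch of the path that getJob() builds: /_xpack/rollup/job/{jobId}
public class GetRollupJobEndpointSketch {
    static String endpoint(String jobId) {
        // Fixed parts "_xpack", "rollup", "job", then the job id.
        return "/_xpack/rollup/job/" + jobId;
    }

    public static void main(String[] args) {
        System.out.println(endpoint("sensor"));
    }
}
```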


@ -25,6 +25,7 @@ import org.elasticsearch.client.security.EnableUserRequest;
import org.elasticsearch.client.security.PutUserRequest;
import org.elasticsearch.client.security.PutUserResponse;
import org.elasticsearch.client.security.EmptyResponse;
import org.elasticsearch.client.security.ChangePasswordRequest;
import java.io.IOException;
@ -47,6 +48,7 @@ public final class SecurityClient {
* Create/update a user in the native realm synchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-users.html">
* the docs</a> for more.
*
* @param request the request with the user's information
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response from the put user call
@ -61,8 +63,9 @@ public final class SecurityClient {
* Asynchronously create/update a user in the native realm.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-users.html">
* the docs</a> for more.
*
* @param request the request with the user's information
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public void putUserAsync(PutUserRequest request, RequestOptions options, ActionListener<PutUserResponse> listener) {
@ -74,6 +77,7 @@ public final class SecurityClient {
* Enable a native realm or built-in user synchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-enable-user.html">
* the docs</a> for more.
*
* @param request the request with the user to enable
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response from the enable user call
@ -88,12 +92,13 @@ public final class SecurityClient {
* Enable a native realm or built-in user asynchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-enable-user.html">
* the docs</a> for more.
*
* @param request the request with the user to enable
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public void enableUserAsync(EnableUserRequest request, RequestOptions options,
ActionListener<EmptyResponse> listener) {
restHighLevelClient.performRequestAsyncAndParseEntity(request, SecurityRequestConverters::enableUser, options,
EmptyResponse::fromXContent, listener, emptySet());
}
@ -102,6 +107,7 @@ public final class SecurityClient {
* Disable a native realm or built-in user synchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-disable-user.html">
* the docs</a> for more.
*
* @param request the request with the user to disable
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response from the enable user call
@ -116,13 +122,44 @@ public final class SecurityClient {
* Disable a native realm or built-in user asynchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-disable-user.html">
* the docs</a> for more.
*
* @param request the request with the user to disable
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public void disableUserAsync(DisableUserRequest request, RequestOptions options,
ActionListener<EmptyResponse> listener) {
restHighLevelClient.performRequestAsyncAndParseEntity(request, SecurityRequestConverters::disableUser, options,
EmptyResponse::fromXContent, listener, emptySet());
}
/**
* Change the password of a user of a native realm or built-in user synchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-change-password.html">
* the docs</a> for more.
*
* @param request the request with the user's new password
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @return the response from the change user password call
* @throws IOException in case there is a problem sending the request or parsing back the response
*/
public EmptyResponse changePassword(ChangePasswordRequest request, RequestOptions options) throws IOException {
return restHighLevelClient.performRequestAndParseEntity(request, SecurityRequestConverters::changePassword, options,
EmptyResponse::fromXContent, emptySet());
}
/**
* Change the password of a user of a native realm or built-in user asynchronously.
* See <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-change-password.html">
* the docs</a> for more.
*
* @param request the request with the user's new password
* @param options the request options (e.g. headers), use {@link RequestOptions#DEFAULT} if nothing needs to be customized
* @param listener the listener to be notified upon request completion
*/
public void changePasswordAsync(ChangePasswordRequest request, RequestOptions options,
ActionListener<EmptyResponse> listener) {
restHighLevelClient.performRequestAsyncAndParseEntity(request, SecurityRequestConverters::changePassword, options,
EmptyResponse::fromXContent, listener, emptySet());
}
}


@ -19,9 +19,11 @@
package org.elasticsearch.client;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpPut;
import org.elasticsearch.client.security.DisableUserRequest;
import org.elasticsearch.client.security.EnableUserRequest;
import org.elasticsearch.client.security.ChangePasswordRequest;
import org.elasticsearch.client.security.PutUserRequest;
import org.elasticsearch.client.security.SetUserEnabledRequest;
@ -34,6 +36,19 @@ final class SecurityRequestConverters {
private SecurityRequestConverters() {}
static Request changePassword(ChangePasswordRequest changePasswordRequest) throws IOException {
String endpoint = new RequestConverters.EndpointBuilder()
.addPathPartAsIs("_xpack/security/user")
.addPathPart(changePasswordRequest.getUsername())
.addPathPartAsIs("_password")
.build();
Request request = new Request(HttpPost.METHOD_NAME, endpoint);
request.setEntity(createEntity(changePasswordRequest, REQUEST_BODY_CONTENT_TYPE));
RequestConverters.Params params = new RequestConverters.Params(request);
params.withRefreshPolicy(changePasswordRequest.getRefreshPolicy());
return request;
}
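A minimal standalone sketch of the request the converter above produces: an HTTP POST to `/_xpack/security/user/{username}/_password` with the new password in the JSON body. The username `jacknich` and password are hypothetical, and the body shape assumes the documented change-password API:

```java
// Standalone sketch: method, path, and body of a change-password call.
public class ChangePasswordRequestSketch {
    static String endpoint(String username) {
        // "_xpack/security/user", then the username, then "_password".
        return "/_xpack/security/user/" + username + "/_password";
    }

    static String body(String newPassword) {
        // The change-password API carries the new password in the request body.
        return "{\"password\":\"" + newPassword + "\"}";
    }

    public static void main(String[] args) {
        System.out.println("POST " + endpoint("jacknich"));
        System.out.println(body("new-secret"));
    }
}
```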
static Request putUser(PutUserRequest putUserRequest) throws IOException {
String endpoint = new RequestConverters.EndpointBuilder()
.addPathPartAsIs("_xpack/security/user")


@ -0,0 +1,160 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.client.ml.datafeed.DatafeedConfig;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.util.Objects;
/**
* Request to start a Datafeed
*/
public class StartDatafeedRequest extends ActionRequest implements ToXContentObject {
public static final ParseField START = new ParseField("start");
public static final ParseField END = new ParseField("end");
public static final ParseField TIMEOUT = new ParseField("timeout");
public static ConstructingObjectParser<StartDatafeedRequest, Void> PARSER =
new ConstructingObjectParser<>("start_datafeed_request", a -> new StartDatafeedRequest((String)a[0]));
static {
PARSER.declareString(ConstructingObjectParser.constructorArg(), DatafeedConfig.ID);
PARSER.declareString(StartDatafeedRequest::setStart, START);
PARSER.declareString(StartDatafeedRequest::setEnd, END);
PARSER.declareString((params, val) ->
params.setTimeout(TimeValue.parseTimeValue(val, TIMEOUT.getPreferredName())), TIMEOUT);
}
private final String datafeedId;
private String start;
private String end;
private TimeValue timeout;
/**
* Create a new StartDatafeedRequest for the given DatafeedId
*
* @param datafeedId non-null existing Datafeed ID
*/
public StartDatafeedRequest(String datafeedId) {
this.datafeedId = Objects.requireNonNull(datafeedId, "[datafeed_id] must not be null");
}
public String getDatafeedId() {
return datafeedId;
}
public String getStart() {
return start;
}
/**
* The time that the datafeed should begin. This value is inclusive.
*
* If you specify a start value that is earlier than the timestamp of the latest processed record,
* the datafeed continues from 1 millisecond after the timestamp of the latest processed record.
*
* If you do not specify a start time and the datafeed is associated with a new job,
* the analysis starts from the earliest time for which data is available.
*
* @param start String representation of a timestamp; may be epoch seconds, epoch millis, or an ISO 8601 string
*/
public void setStart(String start) {
this.start = start;
}
public String getEnd() {
return end;
}
/**
* The time that the datafeed should end. This value is exclusive.
* If you do not specify an end time, the datafeed runs continuously.
*
* @param end String representation of a timestamp; may be epoch seconds, epoch millis, or an ISO 8601 string
*/
public void setEnd(String end) {
this.end = end;
}
public TimeValue getTimeout() {
return timeout;
}
/**
* Indicates how long to wait for the cluster to respond to the request.
*
* @param timeout TimeValue for how long to wait for a response from the cluster
*/
public void setTimeout(TimeValue timeout) {
this.timeout = timeout;
}
@Override
public ActionRequestValidationException validate() {
return null;
}
@Override
public int hashCode() {
return Objects.hash(datafeedId, start, end, timeout);
}
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null || obj.getClass() != getClass()) {
return false;
}
StartDatafeedRequest other = (StartDatafeedRequest) obj;
return Objects.equals(datafeedId, other.datafeedId) &&
Objects.equals(start, other.start) &&
Objects.equals(end, other.end) &&
Objects.equals(timeout, other.timeout);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(DatafeedConfig.ID.getPreferredName(), datafeedId);
if (start != null) {
builder.field(START.getPreferredName(), start);
}
if (end != null) {
builder.field(END.getPreferredName(), end);
}
if (timeout != null) {
builder.field(TIMEOUT.getPreferredName(), timeout.getStringRep());
}
builder.endObject();
return builder;
}
}
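A standalone sketch of the JSON body that `toXContent()` above emits for a populated request: the required `datafeed_id` first, then each optional field only when set. The id, timestamp, and timeout values are hypothetical:

```java
// Standalone sketch mirroring StartDatafeedRequest.toXContent:
// optional fields (start, end, timeout) are skipped when null.
public class StartDatafeedBodySketch {
    static String body(String datafeedId, String start, String end, String timeout) {
        StringBuilder json = new StringBuilder("{\"datafeed_id\":\"").append(datafeedId).append('"');
        if (start != null) {
            json.append(",\"start\":\"").append(start).append('"');
        }
        if (end != null) {
            json.append(",\"end\":\"").append(end).append('"');
        }
        if (timeout != null) {
            json.append(",\"timeout\":\"").append(timeout).append('"');
        }
        return json.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(body("datafeed-1", "2018-10-01T00:00:00Z", null, "30s"));
    }
}
```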


@ -0,0 +1,93 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.Objects;
/**
* Response indicating if the Machine Learning Datafeed is now started or not
*/
public class StartDatafeedResponse extends ActionResponse implements ToXContentObject {
private static final ParseField STARTED = new ParseField("started");
public static final ConstructingObjectParser<StartDatafeedResponse, Void> PARSER =
new ConstructingObjectParser<>(
"start_datafeed_response",
true,
(a) -> new StartDatafeedResponse((Boolean)a[0]));
static {
PARSER.declareBoolean(ConstructingObjectParser.constructorArg(), STARTED);
}
private final boolean started;
public StartDatafeedResponse(boolean started) {
this.started = started;
}
public static StartDatafeedResponse fromXContent(XContentParser parser) throws IOException {
return PARSER.parse(parser, null);
}
/**
* Has the Datafeed started or not
*
* @return boolean value indicating the Datafeed started status
*/
public boolean isStarted() {
return started;
}
@Override
public boolean equals(Object other) {
if (this == other) {
return true;
}
if (other == null || getClass() != other.getClass()) {
return false;
}
StartDatafeedResponse that = (StartDatafeedResponse) other;
return isStarted() == that.isStarted();
}
@Override
public int hashCode() {
return Objects.hash(isStarted());
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(STARTED.getPreferredName(), started);
builder.endObject();
return builder;
}
}


@ -0,0 +1,195 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.client.ml.datafeed.DatafeedConfig;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.security.InvalidParameterException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
/**
* Request to stop Machine Learning Datafeeds
*/
public class StopDatafeedRequest extends ActionRequest implements ToXContentObject {
public static final ParseField TIMEOUT = new ParseField("timeout");
public static final ParseField FORCE = new ParseField("force");
public static final ParseField ALLOW_NO_DATAFEEDS = new ParseField("allow_no_datafeeds");
@SuppressWarnings("unchecked")
public static final ConstructingObjectParser<StopDatafeedRequest, Void> PARSER = new ConstructingObjectParser<>(
"stop_datafeed_request",
a -> new StopDatafeedRequest((List<String>) a[0]));
static {
PARSER.declareField(ConstructingObjectParser.constructorArg(),
p -> Arrays.asList(Strings.commaDelimitedListToStringArray(p.text())),
DatafeedConfig.ID, ObjectParser.ValueType.STRING_ARRAY);
PARSER.declareString((obj, val) -> obj.setTimeout(TimeValue.parseTimeValue(val, TIMEOUT.getPreferredName())), TIMEOUT);
PARSER.declareBoolean(StopDatafeedRequest::setForce, FORCE);
PARSER.declareBoolean(StopDatafeedRequest::setAllowNoDatafeeds, ALLOW_NO_DATAFEEDS);
}
private static final String ALL_DATAFEEDS = "_all";
private final List<String> datafeedIds;
private TimeValue timeout;
private Boolean force;
private Boolean allowNoDatafeeds;
/**
* Explicitly stop all datafeeds
*
* @return a {@link StopDatafeedRequest} for all existing datafeeds
*/
public static StopDatafeedRequest stopAllDatafeedsRequest() {
return new StopDatafeedRequest(ALL_DATAFEEDS);
}
StopDatafeedRequest(List<String> datafeedIds) {
if (datafeedIds.isEmpty()) {
throw new InvalidParameterException("datafeedIds must not be empty");
}
if (datafeedIds.stream().anyMatch(Objects::isNull)) {
throw new NullPointerException("datafeedIds must not contain null values");
}
this.datafeedIds = new ArrayList<>(datafeedIds);
}
/**
* Stop the specified Datafeeds via their unique datafeedIds
*
* @param datafeedIds must be non-null and non-empty and each datafeedId must be non-null
*/
public StopDatafeedRequest(String... datafeedIds) {
this(Arrays.asList(datafeedIds));
}
/**
* All the datafeedIds to be stopped
*/
public List<String> getDatafeedIds() {
return datafeedIds;
}
public TimeValue getTimeout() {
return timeout;
}
/**
* How long to wait for the stop request to complete before timing out.
*
* @param timeout how long to wait for completion; defaults to 30 minutes
*/
public void setTimeout(TimeValue timeout) {
this.timeout = timeout;
}
public Boolean isForce() {
return force;
}
/**
* Whether the datafeed should be forcefully stopped.
*
* @param force When {@code true} forcefully stop the datafeed. Defaults to {@code false}
*/
public void setForce(boolean force) {
this.force = force;
}
public Boolean isAllowNoDatafeeds() {
return this.allowNoDatafeeds;
}
/**
* Whether to ignore if a wildcard expression matches no datafeeds.
*
* This includes the {@code _all} string.
*
* @param allowNoDatafeeds When {@code true} ignore if wildcard or {@code _all} matches no datafeeds. Defaults to {@code true}
*/
public void setAllowNoDatafeeds(boolean allowNoDatafeeds) {
this.allowNoDatafeeds = allowNoDatafeeds;
}
@Override
public ActionRequestValidationException validate() {
return null;
}
@Override
public int hashCode() {
return Objects.hash(datafeedIds, timeout, force, allowNoDatafeeds);
}
@Override
public boolean equals(Object other) {
if (this == other) {
return true;
}
if (other == null || getClass() != other.getClass()) {
return false;
}
StopDatafeedRequest that = (StopDatafeedRequest) other;
return Objects.equals(datafeedIds, that.datafeedIds) &&
Objects.equals(timeout, that.timeout) &&
Objects.equals(force, that.force) &&
Objects.equals(allowNoDatafeeds, that.allowNoDatafeeds);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(DatafeedConfig.ID.getPreferredName(), Strings.collectionToCommaDelimitedString(datafeedIds));
if (timeout != null) {
builder.field(TIMEOUT.getPreferredName(), timeout.getStringRep());
}
if (force != null) {
builder.field(FORCE.getPreferredName(), force);
}
if (allowNoDatafeeds != null) {
builder.field(ALLOW_NO_DATAFEEDS.getPreferredName(), allowNoDatafeeds);
}
builder.endObject();
return builder;
}
@Override
public String toString() {
return Strings.toString(this);
}
}
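The wire format above collapses multiple datafeed ids into one comma-delimited `datafeed_id` value. A standalone sketch of the body for the id list plus the optional force flag (timeout and allow_no_datafeeds are omitted for brevity; the ids are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch mirroring StopDatafeedRequest.toXContent:
// ids are comma-joined into one string, force is emitted only when set.
public class StopDatafeedBodySketch {
    static String body(List<String> datafeedIds, Boolean force) {
        String ids = String.join(",", datafeedIds);
        StringBuilder json = new StringBuilder("{\"datafeed_id\":\"").append(ids).append('"');
        if (force != null) {
            json.append(",\"force\":").append(force);
        }
        return json.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(body(Arrays.asList("datafeed-1", "datafeed-2"), true));
    }
}
```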


@ -0,0 +1,93 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.Objects;
/**
* Response indicating if the Machine Learning Datafeed is now stopped or not
*/
public class StopDatafeedResponse extends ActionResponse implements ToXContentObject {
private static final ParseField STOPPED = new ParseField("stopped");
public static final ConstructingObjectParser<StopDatafeedResponse, Void> PARSER =
new ConstructingObjectParser<>(
"stop_datafeed_response",
true,
(a) -> new StopDatafeedResponse((Boolean)a[0]));
static {
PARSER.declareBoolean(ConstructingObjectParser.constructorArg(), STOPPED);
}
private final boolean stopped;
public StopDatafeedResponse(boolean stopped) {
this.stopped = stopped;
}
public static StopDatafeedResponse fromXContent(XContentParser parser) throws IOException {
return PARSER.parse(parser, null);
}
/**
* Has the Datafeed stopped or not
*
* @return boolean value indicating the Datafeed stopped status
*/
public boolean isStopped() {
return stopped;
}
@Override
public boolean equals(Object other) {
if (this == other) {
return true;
}
if (other == null || getClass() != other.getClass()) {
return false;
}
StopDatafeedResponse that = (StopDatafeedResponse) other;
return isStopped() == that.isStopped();
}
@Override
public int hashCode() {
return Objects.hash(isStopped());
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(STOPPED.getPreferredName(), stopped);
builder.endObject();
return builder;
}
}


@ -0,0 +1,77 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.rollup;
import org.elasticsearch.client.Validatable;
import org.elasticsearch.client.ValidationException;
import java.util.Objects;
import java.util.Optional;
/**
* Request to fetch rollup jobs.
*/
public class GetRollupJobRequest implements Validatable {
private final String jobId;
/**
* Create a request for the job with the given id.
* @param jobId id of the job to return or {@code _all} to return all jobs
*/
public GetRollupJobRequest(final String jobId) {
Objects.requireNonNull(jobId, "jobId is required");
if ("_all".equals(jobId)) {
throw new IllegalArgumentException("use the default ctor to ask for all jobs");
}
this.jobId = jobId;
}
/**
* Create a request to load all rollup jobs.
*/
public GetRollupJobRequest() {
this.jobId = "_all";
}
/**
* ID of the job to return.
*/
public String getJobId() {
return jobId;
}
@Override
public Optional<ValidationException> validate() {
return Optional.empty();
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
final GetRollupJobRequest that = (GetRollupJobRequest) o;
return jobId.equals(that.jobId);
}
@Override
public int hashCode() {
return Objects.hash(jobId);
}
}
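The two constructors above encode a small invariant: the job id defaults to `_all`, and passing `_all` explicitly is rejected so callers use the no-arg form instead. A standalone sketch of that guard, collapsing the two constructors into one hypothetical helper (null stands in for the no-arg case):

```java
// Standalone sketch of GetRollupJobRequest's id handling.
public class RollupJobIdSketch {
    static String resolveJobId(String jobId) {
        if (jobId == null) {
            return "_all"; // the no-arg constructor selects all jobs
        }
        if ("_all".equals(jobId)) {
            throw new IllegalArgumentException("use the default ctor to ask for all jobs");
        }
        return jobId;
    }

    public static void main(String[] args) {
        System.out.println(resolveJobId(null));
        System.out.println(resolveJobId("sensor"));
    }
}
```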


@ -0,0 +1,374 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.rollup;
import org.elasticsearch.client.rollup.job.config.RollupJobConfig;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.Objects;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;
import static org.elasticsearch.common.xcontent.ConstructingObjectParser.optionalConstructorArg;
import static java.util.Collections.unmodifiableList;
import static java.util.stream.Collectors.joining;
/**
* Response from rollup's get jobs api.
*/
public class GetRollupJobResponse {
static final ParseField JOBS = new ParseField("jobs");
static final ParseField CONFIG = new ParseField("config");
static final ParseField STATS = new ParseField("stats");
static final ParseField STATUS = new ParseField("status");
static final ParseField NUM_PAGES = new ParseField("pages_processed");
static final ParseField NUM_INPUT_DOCUMENTS = new ParseField("documents_processed");
static final ParseField NUM_OUTPUT_DOCUMENTS = new ParseField("rollups_indexed");
static final ParseField NUM_INVOCATIONS = new ParseField("trigger_count");
static final ParseField STATE = new ParseField("job_state");
static final ParseField CURRENT_POSITION = new ParseField("current_position");
static final ParseField UPGRADED_DOC_ID = new ParseField("upgraded_doc_id");
private List<JobWrapper> jobs;
GetRollupJobResponse(final List<JobWrapper> jobs) {
this.jobs = Objects.requireNonNull(jobs, "jobs is required");
}
/**
* Jobs returned by the request.
*/
public List<JobWrapper> getJobs() {
return jobs;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
final GetRollupJobResponse that = (GetRollupJobResponse) o;
return jobs.equals(that.jobs);
}
@Override
public int hashCode() {
return Objects.hash(jobs);
}
private static final ConstructingObjectParser<GetRollupJobResponse, Void> PARSER = new ConstructingObjectParser<>(
"get_rollup_job_response",
true,
args -> {
@SuppressWarnings("unchecked") // We're careful about the type in the list
List<JobWrapper> jobs = (List<JobWrapper>) args[0];
return new GetRollupJobResponse(unmodifiableList(jobs));
});
static {
PARSER.declareObjectArray(constructorArg(), JobWrapper.PARSER::apply, JOBS);
}
public static GetRollupJobResponse fromXContent(final XContentParser parser) throws IOException {
return PARSER.parse(parser, null);
}
@Override
public final String toString() {
return "{jobs=" + jobs.stream().map(Object::toString).collect(joining("\n")) + "\n}";
}
public static class JobWrapper {
private final RollupJobConfig job;
private final RollupIndexerJobStats stats;
private final RollupJobStatus status;
JobWrapper(RollupJobConfig job, RollupIndexerJobStats stats, RollupJobStatus status) {
this.job = job;
this.stats = stats;
this.status = status;
}
/**
* Configuration of the job.
*/
public RollupJobConfig getJob() {
return job;
}
/**
* Statistics about the execution of the job.
*/
public RollupIndexerJobStats getStats() {
return stats;
}
/**
* Current state of the job.
*/
public RollupJobStatus getStatus() {
return status;
}
private static final ConstructingObjectParser<JobWrapper, Void> PARSER = new ConstructingObjectParser<>(
"job",
true,
a -> new JobWrapper((RollupJobConfig) a[0], (RollupIndexerJobStats) a[1], (RollupJobStatus) a[2]));
static {
PARSER.declareObject(ConstructingObjectParser.constructorArg(), (p, c) -> RollupJobConfig.fromXContent(p, null), CONFIG);
PARSER.declareObject(ConstructingObjectParser.constructorArg(), RollupIndexerJobStats.PARSER::apply, STATS);
PARSER.declareObject(ConstructingObjectParser.constructorArg(), RollupJobStatus.PARSER::apply, STATUS);
}
@Override
public boolean equals(Object obj) {
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
JobWrapper other = (JobWrapper) obj;
return Objects.equals(job, other.job)
&& Objects.equals(stats, other.stats)
&& Objects.equals(status, other.status);
}
@Override
public int hashCode() {
return Objects.hash(job, stats, status);
}
@Override
public final String toString() {
return "{job=" + job
+ ", stats=" + stats
+ ", status=" + status + "}";
}
}
/**
* The Rollup specialization of stats for the AsyncTwoPhaseIndexer.
* Note: instead of {@code documents_indexed}, this XContent shows {@code rollups_indexed}.
*/
public static class RollupIndexerJobStats {
private final long numPages;
private final long numInputDocuments;
private final long numOutputDocuments;
private final long numInvocations;
RollupIndexerJobStats(long numPages, long numInputDocuments, long numOutputDocuments, long numInvocations) {
this.numPages = numPages;
this.numInputDocuments = numInputDocuments;
this.numOutputDocuments = numOutputDocuments;
this.numInvocations = numInvocations;
}
/**
* The number of pages read from the input indices.
*/
public long getNumPages() {
return numPages;
}
/**
* The number of documents read from the input indices.
*/
public long getNumDocuments() {
return numInputDocuments;
}
/**
* Number of times that the job woke up to write documents.
*/
public long getNumInvocations() {
return numInvocations;
}
/**
* Number of documents written to the result indices.
*/
public long getOutputDocuments() {
return numOutputDocuments;
}
private static final ConstructingObjectParser<RollupIndexerJobStats, Void> PARSER = new ConstructingObjectParser<>(
STATS.getPreferredName(),
true,
args -> new RollupIndexerJobStats((long) args[0], (long) args[1], (long) args[2], (long) args[3]));
static {
PARSER.declareLong(constructorArg(), NUM_PAGES);
PARSER.declareLong(constructorArg(), NUM_INPUT_DOCUMENTS);
PARSER.declareLong(constructorArg(), NUM_OUTPUT_DOCUMENTS);
PARSER.declareLong(constructorArg(), NUM_INVOCATIONS);
}
@Override
public boolean equals(Object other) {
if (this == other) return true;
if (other == null || getClass() != other.getClass()) return false;
RollupIndexerJobStats that = (RollupIndexerJobStats) other;
return Objects.equals(this.numPages, that.numPages)
&& Objects.equals(this.numInputDocuments, that.numInputDocuments)
&& Objects.equals(this.numOutputDocuments, that.numOutputDocuments)
&& Objects.equals(this.numInvocations, that.numInvocations);
}
@Override
public int hashCode() {
return Objects.hash(numPages, numInputDocuments, numOutputDocuments, numInvocations);
}
@Override
public final String toString() {
return "{pages=" + numPages
+ ", input_docs=" + numInputDocuments
+ ", output_docs=" + numOuputDocuments
+ ", invocations=" + numInvocations + "}".replace("numOuputDocuments", "")
}
}
/**
* Status of the rollup job.
*/
public static class RollupJobStatus {
private final IndexerState state;
private final Map<String, Object> currentPosition;
private final boolean upgradedDocumentId;
RollupJobStatus(IndexerState state, Map<String, Object> position, boolean upgradedDocumentId) {
this.state = state;
this.currentPosition = position;
this.upgradedDocumentId = upgradedDocumentId;
}
/**
* The state of the writer.
*/
public IndexerState getState() {
return state;
}
/**
* The current position of the writer.
*/
public Map<String, Object> getCurrentPosition() {
return currentPosition;
}
/**
* Flag indicating whether the document ID scheme has been upgraded
* to the concatenation scheme.
*/
public boolean getUpgradedDocumentId() {
return upgradedDocumentId;
}
private static final ConstructingObjectParser<RollupJobStatus, Void> PARSER = new ConstructingObjectParser<>(
STATUS.getPreferredName(),
true,
args -> {
IndexerState state = (IndexerState) args[0];
@SuppressWarnings("unchecked") // We're careful of the contents
Map<String, Object> currentPosition = (Map<String, Object>) args[1];
Boolean upgradedDocumentId = (Boolean) args[2];
return new RollupJobStatus(state, currentPosition, upgradedDocumentId == null ? false : upgradedDocumentId);
});
static {
PARSER.declareField(constructorArg(), p -> IndexerState.fromString(p.text()), STATE, ObjectParser.ValueType.STRING);
PARSER.declareField(optionalConstructorArg(), p -> {
if (p.currentToken() == XContentParser.Token.START_OBJECT) {
return p.map();
}
if (p.currentToken() == XContentParser.Token.VALUE_NULL) {
return null;
}
throw new IllegalArgumentException("Unsupported token [" + p.currentToken() + "]");
}, CURRENT_POSITION, ObjectParser.ValueType.VALUE_OBJECT_ARRAY);
// Optional to accommodate old versions of state
PARSER.declareBoolean(ConstructingObjectParser.optionalConstructorArg(), UPGRADED_DOC_ID);
}
@Override
public boolean equals(Object other) {
if (this == other) return true;
if (other == null || getClass() != other.getClass()) return false;
RollupJobStatus that = (RollupJobStatus) other;
return Objects.equals(state, that.state)
&& Objects.equals(currentPosition, that.currentPosition)
&& upgradedDocumentId == that.upgradedDocumentId;
}
@Override
public int hashCode() {
return Objects.hash(state, currentPosition, upgradedDocumentId);
}
@Override
public final String toString() {
return "{state=" + state
+ ", currentPosition=" + currentPosition
+ ", upgradedDocumentId=" + upgradedDocumentId + "}";
}
}
/**
* IndexerState represents the internal state of the indexer. It is
* also persisted when transitioning between started and stopped so that
* the state survives if the allocated task is restarted elsewhere.
*/
public enum IndexerState {
/** Indexer is running, but not actively indexing data (e.g. it's idle). */
STARTED,
/** Indexer is actively indexing data. */
INDEXING,
/**
* Transition state to where an indexer has acknowledged the stop
* but is still in process of halting.
*/
STOPPING,
/** Indexer is "paused" and ignoring scheduled triggers. */
STOPPED,
/**
* Something (internal or external) has requested the indexer abort
* and shutdown.
*/
ABORTING;
static IndexerState fromString(String name) {
return valueOf(name.trim().toUpperCase(Locale.ROOT));
}
String value() {
return name().toLowerCase(Locale.ROOT);
}
}
}
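The enum round-trips its constant names through `Locale.ROOT` case conversion: wire values are lower-case, constants are upper-case. A minimal standalone sketch of that round-trip (the class name `IndexerStateSketch` is hypothetical, introduced only for this example):

```java
import java.util.Locale;

// Sketch of the IndexerState name round-trip. Locale.ROOT is used for the
// case conversion so locale quirks (e.g. the Turkish dotless-i) cannot
// corrupt the mapping between wire names and enum constants.
public class IndexerStateSketch {
    public enum State {
        STARTED, INDEXING, STOPPING, STOPPED, ABORTING;

        static State fromString(String name) {
            // tolerant of surrounding whitespace and any case
            return valueOf(name.trim().toUpperCase(Locale.ROOT));
        }

        String value() {
            return name().toLowerCase(Locale.ROOT);
        }
    }

    public static void main(String[] args) {
        System.out.println(State.fromString("  Started "));  // STARTED
        System.out.println(State.STOPPING.value());          // stopping
    }
}
```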

View File

@ -0,0 +1,76 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security;
import org.elasticsearch.client.Validatable;
import org.elasticsearch.common.CharArrays;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.util.Arrays;
import java.util.Objects;
/**
* Request object to change the password of a user of a native realm or a built-in user.
*/
public final class ChangePasswordRequest implements Validatable, ToXContentObject {
private final String username;
private final char[] password;
private final RefreshPolicy refreshPolicy;
/**
* @param username The username of the user whose password should be changed or null for the current user.
* @param password The new password. The password array is not cleared by the {@link ChangePasswordRequest} object so the
* calling code must clear it after receiving the response.
* @param refreshPolicy The refresh policy for the request.
*/
public ChangePasswordRequest(@Nullable String username, char[] password, RefreshPolicy refreshPolicy) {
this.username = username;
this.password = Objects.requireNonNull(password, "password is required");
this.refreshPolicy = refreshPolicy == null ? RefreshPolicy.getDefault() : refreshPolicy;
}
public String getUsername() {
return username;
}
public char[] getPassword() {
return password;
}
public RefreshPolicy getRefreshPolicy() {
return refreshPolicy;
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
byte[] charBytes = CharArrays.toUtf8Bytes(password);
try {
return builder.startObject()
.field("password").utf8Value(charBytes, 0, charBytes.length)
.endObject();
} finally {
Arrays.fill(charBytes, (byte) 0);
}
}
}
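The try/finally in `toXContent` above zeroes the intermediate UTF-8 buffer so the password bytes do not linger on the heap after serialization. A standalone sketch of the same pattern, using `StandardCharsets` in place of the internal `CharArrays` helper (class and method names here are assumptions of the example):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of the "encode, use, then wipe" pattern: sensitive chars are
// encoded to UTF-8, the bytes are consumed, and the temporary byte copy
// is zeroed in a finally block regardless of how consumption ends.
public class PasswordBytesSketch {
    public static byte[] toUtf8Bytes(char[] chars) {
        ByteBuffer bb = StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars));
        byte[] bytes = new byte[bb.remaining()];
        bb.get(bytes);
        return bytes;
    }

    public static int consumeAndWipe(char[] password) {
        byte[] utf8 = toUtf8Bytes(password);
        try {
            return utf8.length; // stand-in for writing the bytes to a builder
        } finally {
            Arrays.fill(utf8, (byte) 0); // wipe the temporary copy
        }
    }

    public static void main(String[] args) {
        System.out.println(consumeAndWipe("s3cret".toCharArray())); // 6
    }
}
```

Note the sketch wipes only the copied array; as the request javadoc says, clearing the original `char[]` remains the caller's responsibility.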

View File

@ -25,7 +25,6 @@ import org.elasticsearch.common.CharArrays;
import org.elasticsearch.common.xcontent.ToXContentObject;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.Closeable;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
@ -37,7 +36,7 @@ import java.util.Optional;
/**
* Request object to create or update a user in the native realm.
*/
public final class PutUserRequest implements Validatable, Closeable, ToXContentObject {
public final class PutUserRequest implements Validatable, ToXContentObject {
private final String username;
private final List<String> roles;
@ -48,6 +47,20 @@ public final class PutUserRequest implements Validatable, Closeable, ToXContentO
private final boolean enabled;
private final RefreshPolicy refreshPolicy;
/**
* Creates a new request that is used to create or update a user in the native realm.
*
* @param username the username of the user to be created or updated
* @param password the password of the user. The password array is not modified by this class.
* It is the responsibility of the caller to clear the password after receiving
* a response.
* @param roles the roles that this user is assigned
* @param fullName the full name of the user that may be used for display purposes
* @param email the email address of the user
* @param enabled true if the user is enabled and allowed to access elasticsearch
* @param metadata a map of additional user attributes that may be used in templating roles
* @param refreshPolicy the refresh policy for the request.
*/
public PutUserRequest(String username, char[] password, List<String> roles, String fullName, String email, boolean enabled,
Map<String, Object> metadata, RefreshPolicy refreshPolicy) {
this.username = Objects.requireNonNull(username, "username is required");
@ -114,13 +127,6 @@ public final class PutUserRequest implements Validatable, Closeable, ToXContentO
return result;
}
@Override
public void close() {
if (password != null) {
Arrays.fill(password, (char) 0);
}
}
@Override
public Optional<ValidationException> validate() {
if (metadata != null && metadata.keySet().stream().anyMatch(s -> s.startsWith("_"))) {
@ -137,7 +143,11 @@ public final class PutUserRequest implements Validatable, Closeable, ToXContentO
builder.field("username", username);
if (password != null) {
byte[] charBytes = CharArrays.toUtf8Bytes(password);
builder.field("password").utf8Value(charBytes, 0, charBytes.length);
try {
builder.field("password").utf8Value(charBytes, 0, charBytes.length);
} finally {
Arrays.fill(charBytes, (byte) 0);
}
}
if (roles != null) {
builder.field("roles", roles);

View File

@ -17,33 +17,14 @@
* under the License.
*/
package org.elasticsearch.script;
package org.elasticsearch.client.security.support.expressiondsl;
import java.util.Map;
import org.elasticsearch.common.xcontent.ToXContentObject;
/**
* An executable script, can't be used concurrently.
* Implementations of this interface represent an expression used for user role mapping
* that can later be resolved to a boolean value.
*/
public interface ExecutableScript {
public interface RoleMapperExpression extends ToXContentObject {
/**
* Sets a runtime script parameter.
* <p>
* Note that this method may be slow, involving put() and get() calls
* to a hashmap or similar.
* @param name parameter name
* @param value parameter value
*/
void setNextVar(String name, Object value);
/**
* Executes the script.
*/
Object run();
interface Factory {
ExecutableScript newInstance(Map<String, Object> params);
}
ScriptContext<Factory> CONTEXT = new ScriptContext<>("executable", Factory.class);
}

View File

@ -0,0 +1,55 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.expressions;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import java.util.ArrayList;
import java.util.List;
/**
* An expression that evaluates to <code>true</code> if-and-only-if all its children
* evaluate to <code>true</code>.
* An <em>all</em> expression with no children is always <code>true</code>.
*/
public final class AllRoleMapperExpression extends CompositeRoleMapperExpression {
private AllRoleMapperExpression(String name, RoleMapperExpression[] elements) {
super(name, elements);
}
public static Builder builder() {
return new Builder();
}
public static final class Builder {
private List<RoleMapperExpression> elements = new ArrayList<>();
public Builder addExpression(final RoleMapperExpression expression) {
assert expression != null : "expression cannot be null";
elements.add(expression);
return this;
}
public AllRoleMapperExpression build() {
return new AllRoleMapperExpression(CompositeType.ALL.getName(), elements.toArray(new RoleMapperExpression[0]));
}
}
}

View File

@ -0,0 +1,55 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.expressions;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import java.util.ArrayList;
import java.util.List;
/**
* An expression that evaluates to <code>true</code> if at least one of its children
* evaluates to <code>true</code>.
* An <em>any</em> expression with no children is never <code>true</code>.
*/
public final class AnyRoleMapperExpression extends CompositeRoleMapperExpression {
private AnyRoleMapperExpression(String name, RoleMapperExpression[] elements) {
super(name, elements);
}
public static Builder builder() {
return new Builder();
}
public static final class Builder {
private List<RoleMapperExpression> elements = new ArrayList<>();
public Builder addExpression(final RoleMapperExpression expression) {
assert expression != null : "expression cannot be null";
elements.add(expression);
return this;
}
public AnyRoleMapperExpression build() {
return new AnyRoleMapperExpression(CompositeType.ANY.getName(), elements.toArray(new RoleMapperExpression[0]));
}
}
}

View File

@ -0,0 +1,100 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.expressions;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
/**
* Expression of role mapper expressions which can be combined by operators like AND, OR
* <p>
* Expression builder example:
* <pre>
* {@code
* final RoleMapperExpression allExpression = AllRoleMapperExpression.builder()
.addExpression(AnyRoleMapperExpression.builder()
.addExpression(FieldRoleMapperExpression.ofUsername("user1@example.org"))
.addExpression(FieldRoleMapperExpression.ofUsername("user2@example.org"))
.build())
.addExpression(FieldRoleMapperExpression.ofMetadata("metadata.location", "AMER"))
.addExpression(new ExceptRoleMapperExpression(FieldRoleMapperExpression.ofUsername("user3@example.org")))
.build();
* }
* </pre>
*/
public abstract class CompositeRoleMapperExpression implements RoleMapperExpression {
private final String name;
private final List<RoleMapperExpression> elements;
CompositeRoleMapperExpression(final String name, final RoleMapperExpression... elements) {
assert name != null : "field name cannot be null";
assert elements != null : "at least one field expression is required";
this.name = name;
this.elements = Collections.unmodifiableList(Arrays.asList(elements));
}
public String getName() {
return this.name;
}
public List<RoleMapperExpression> getElements() {
return elements;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
final CompositeRoleMapperExpression that = (CompositeRoleMapperExpression) o;
if (Objects.equals(this.getName(), that.getName()) == false) {
return false;
}
return Objects.equals(this.getElements(), that.getElements());
}
@Override
public int hashCode() {
return Objects.hash(name, elements);
}
@Override
public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {
builder.startObject();
builder.startArray(name);
for (RoleMapperExpression e : elements) {
e.toXContent(builder, params);
}
builder.endArray();
return builder.endObject();
}
}

View File

@ -0,0 +1,59 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.expressions;
import org.elasticsearch.common.ParseField;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
public enum CompositeType {
ANY("any"), ALL("all"), EXCEPT("except");
private static Map<String, CompositeType> nameToType = Collections.unmodifiableMap(initialize());
private ParseField field;
CompositeType(String name) {
this.field = new ParseField(name);
}
public String getName() {
return field.getPreferredName();
}
public ParseField getParseField() {
return field;
}
public static CompositeType fromName(String name) {
return nameToType.get(name);
}
private static Map<String, CompositeType> initialize() {
Map<String, CompositeType> map = new HashMap<>();
for (CompositeType field : values()) {
map.put(field.getName(), field);
}
return map;
}
}
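`CompositeType` uses the common name-to-constant lookup built once over `values()` in a static initializer; note that `fromName` returns `null` rather than throwing for an unrecognised name. A standalone sketch of the pattern (the class `CompositeTypeSketch` is hypothetical, and it uses a plain `String` field where the real enum wraps a `ParseField`):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of the name -> constant lookup: the map is populated once from
// values() and frozen, giving O(1) lookups without a linear scan per call.
public class CompositeTypeSketch {
    public enum Type {
        ANY("any"), ALL("all"), EXCEPT("except");

        private final String name;
        Type(String name) { this.name = name; }
        public String getName() { return name; }
    }

    private static final Map<String, Type> NAME_TO_TYPE;
    static {
        Map<String, Type> map = new HashMap<>();
        for (Type t : Type.values()) {
            map.put(t.getName(), t);
        }
        NAME_TO_TYPE = Collections.unmodifiableMap(map);
    }

    public static Type fromName(String name) {
        return NAME_TO_TYPE.get(name); // null for unknown names
    }

    public static void main(String[] args) {
        System.out.println(fromName("all"));   // ALL
        System.out.println(fromName("bogus")); // null
    }
}
```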

View File

@ -0,0 +1,47 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.expressions;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
/**
* A negating expression. That is, this expression evaluates to <code>true</code> if-and-only-if
* its delegate expression evaluates to <code>false</code>.
* Syntactically, <em>except</em> expressions are intended to be children of <em>all</em>
* expressions ({@link AllRoleMapperExpression}).
*/
public final class ExceptRoleMapperExpression extends CompositeRoleMapperExpression {
public ExceptRoleMapperExpression(final RoleMapperExpression expression) {
super(CompositeType.EXCEPT.getName(), expression);
}
@Override
public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {
builder.startObject();
builder.field(CompositeType.EXCEPT.getName());
builder.value(getElements().get(0));
return builder.endObject();
}
}

View File

@ -0,0 +1,122 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.fields;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
/**
* An expression that evaluates to <code>true</code> if a field (map element) matches
* the provided values. A <em>field</em> expression may have more than one provided value, in which
* case the expression is true if <em>any</em> of the values are matched.
* <p>
* Expression builder example:
* <pre>
* {@code
* final RoleMapperExpression usernameExpression = FieldRoleMapperExpression.ofUsername("user1@example.org");
* }
* </pre>
*/
public class FieldRoleMapperExpression implements RoleMapperExpression {
private final String field;
private final List<Object> values;
public FieldRoleMapperExpression(final String field, final Object... values) {
if (field == null || field.isEmpty()) {
throw new IllegalArgumentException("null or empty field name (" + field + ")");
}
if (values == null || values.length == 0) {
throw new IllegalArgumentException("null or empty values (" + Arrays.toString(values) + ")");
}
this.field = field;
this.values = Collections.unmodifiableList(Arrays.asList(values));
}
public String getField() {
return field;
}
public List<Object> getValues() {
return values;
}
@Override
public boolean equals(Object o) {
if (this == o)
return true;
if (o == null || getClass() != o.getClass())
return false;
final FieldRoleMapperExpression that = (FieldRoleMapperExpression) o;
return Objects.equals(this.getField(), that.getField()) && Objects.equals(this.getValues(), that.getValues());
}
@Override
public int hashCode() {
int result = field.hashCode();
result = 31 * result + values.hashCode();
return result;
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.startObject("field");
builder.startArray(this.field);
for (Object value : values) {
builder.value(value);
}
builder.endArray();
builder.endObject();
return builder.endObject();
}
public static FieldRoleMapperExpression ofUsername(Object... values) {
return ofKeyValues("username", values);
}
public static FieldRoleMapperExpression ofGroups(Object... values) {
return ofKeyValues("groups", values);
}
public static FieldRoleMapperExpression ofDN(Object... values) {
return ofKeyValues("dn", values);
}
public static FieldRoleMapperExpression ofMetadata(String key, Object... values) {
if (key.startsWith("metadata.") == false) {
throw new IllegalArgumentException("metadata key must have prefix 'metadata.'");
}
return ofKeyValues(key, values);
}
public static FieldRoleMapperExpression ofKeyValues(String key, Object... values) {
return new FieldRoleMapperExpression(key, values);
}
}
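The `toXContent` method above nests two objects and an array, so a field expression serializes as `{"field":{"<name>":[<values...>]}}`. A hand-rolled sketch of that shape for string values only (the class `FieldExpressionShapeSketch` is hypothetical; the real class emits XContent rather than concatenating strings):

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrates the JSON shape produced by FieldRoleMapperExpression:
// an outer "field" object wrapping the field name mapped to an array of
// candidate values (the expression matches if ANY value matches).
public class FieldExpressionShapeSketch {
    public static String render(String field, List<String> values) {
        String array = values.stream()
            .map(v -> "\"" + v + "\"")          // naive quoting, strings only
            .collect(Collectors.joining(","));
        return "{\"field\":{\"" + field + "\":[" + array + "]}}";
    }

    public static void main(String[] args) {
        System.out.println(render("username", List.of("user1@example.org")));
        // {"field":{"username":["user1@example.org"]}}
    }
}
```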

View File

@ -0,0 +1,180 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.parser;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.expressions.AllRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.expressions.AnyRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.expressions.CompositeType;
import org.elasticsearch.client.security.support.expressiondsl.expressions.ExceptRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.fields.FieldRoleMapperExpression;
import org.elasticsearch.common.CheckedFunction;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
/**
* Parses the JSON (XContent) based boolean expression DSL into a tree of
* {@link RoleMapperExpression} objects.
* Note: as this is a client-side parser, it mostly validates the structure of
* the DSL being parsed; it does not enforce semantic rules, such as
* disallowing "except" within "except" or "any" expressions.
*/
public final class RoleMapperExpressionParser {
public static final ParseField FIELD = new ParseField("field");
/**
* @param name The name of the expression tree within its containing object.
* Used to provide descriptive error messages.
* @param parser A parser over the XContent (typically JSON) DSL
* representation of the expression
*/
public RoleMapperExpression parse(final String name, final XContentParser parser) throws IOException {
return parseRulesObject(name, parser);
}
private RoleMapperExpression parseRulesObject(final String objectName, final XContentParser parser)
throws IOException {
// find the start of the DSL object
final XContentParser.Token token;
if (parser.currentToken() == null) {
token = parser.nextToken();
} else {
token = parser.currentToken();
}
if (token != XContentParser.Token.START_OBJECT) {
throw new ElasticsearchParseException("failed to parse rules expression. expected [{}] to be an object but found [{}] instead",
objectName, token);
}
final String fieldName = fieldName(objectName, parser);
final RoleMapperExpression expr = parseExpression(parser, fieldName, objectName);
if (parser.nextToken() != XContentParser.Token.END_OBJECT) {
throw new ElasticsearchParseException("failed to parse rules expression. object [{}] contains multiple fields", objectName);
}
return expr;
}
private RoleMapperExpression parseExpression(XContentParser parser, String field, String objectName)
throws IOException {
if (CompositeType.ANY.getParseField().match(field, parser.getDeprecationHandler())) {
final AnyRoleMapperExpression.Builder builder = AnyRoleMapperExpression.builder();
parseExpressionArray(CompositeType.ANY.getParseField(), parser).forEach(builder::addExpression);
return builder.build();
} else if (CompositeType.ALL.getParseField().match(field, parser.getDeprecationHandler())) {
final AllRoleMapperExpression.Builder builder = AllRoleMapperExpression.builder();
parseExpressionArray(CompositeType.ALL.getParseField(), parser).forEach(builder::addExpression);
return builder.build();
} else if (FIELD.match(field, parser.getDeprecationHandler())) {
return parseFieldExpression(parser);
} else if (CompositeType.EXCEPT.getParseField().match(field, parser.getDeprecationHandler())) {
return parseExceptExpression(parser);
} else {
throw new ElasticsearchParseException("failed to parse rules expression. field [{}] is not recognised in object [{}]", field,
objectName);
}
}
private RoleMapperExpression parseFieldExpression(XContentParser parser) throws IOException {
checkStartObject(parser);
final String fieldName = fieldName(FIELD.getPreferredName(), parser);
final List<Object> values;
if (parser.nextToken() == XContentParser.Token.START_ARRAY) {
values = parseArray(FIELD, parser, this::parseFieldValue);
} else {
values = Collections.singletonList(parseFieldValue(parser));
}
if (parser.nextToken() != XContentParser.Token.END_OBJECT) {
throw new ElasticsearchParseException("failed to parse rules expression. object [{}] contains multiple fields",
FIELD.getPreferredName());
}
return FieldRoleMapperExpression.ofKeyValues(fieldName, values.toArray());
}
private RoleMapperExpression parseExceptExpression(XContentParser parser) throws IOException {
checkStartObject(parser);
return new ExceptRoleMapperExpression(parseRulesObject(CompositeType.EXCEPT.getName(), parser));
}
private void checkStartObject(XContentParser parser) throws IOException {
final XContentParser.Token token = parser.nextToken();
if (token != XContentParser.Token.START_OBJECT) {
throw new ElasticsearchParseException("failed to parse rules expression. expected an object but found [{}] instead", token);
}
}
private String fieldName(String objectName, XContentParser parser) throws IOException {
if (parser.nextToken() != XContentParser.Token.FIELD_NAME) {
throw new ElasticsearchParseException("failed to parse rules expression. object [{}] does not contain any fields", objectName);
}
return parser.currentName();
}
private List<RoleMapperExpression> parseExpressionArray(ParseField field, XContentParser parser)
throws IOException {
parser.nextToken(); // parseArray requires that the parser is positioned
// at the START_ARRAY token
return parseArray(field, parser, p -> parseRulesObject(field.getPreferredName(), p));
}
private <T> List<T> parseArray(ParseField field, XContentParser parser, CheckedFunction<XContentParser, T, IOException> elementParser)
throws IOException {
final XContentParser.Token token = parser.currentToken();
if (token == XContentParser.Token.START_ARRAY) {
List<T> list = new ArrayList<>();
while (parser.nextToken() != XContentParser.Token.END_ARRAY) {
list.add(elementParser.apply(parser));
}
return list;
} else {
throw new ElasticsearchParseException("failed to parse rules expression. field [{}] requires an array", field);
}
}
private Object parseFieldValue(XContentParser parser) throws IOException {
switch (parser.currentToken()) {
case VALUE_STRING:
return parser.text();
case VALUE_BOOLEAN:
return parser.booleanValue();
case VALUE_NUMBER:
return parser.longValue();
case VALUE_NULL:
return null;
default:
throw new ElasticsearchParseException("failed to parse rules expression. expected a field value but found [{}] instead", parser
.currentToken());
}
}
}

View File

@ -21,8 +21,12 @@ package org.elasticsearch.client;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.action.admin.cluster.node.tasks.list.TaskGroup;
import org.elasticsearch.action.admin.indices.get.GetIndexRequest;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
@ -51,13 +55,18 @@ import org.elasticsearch.index.VersionType;
import org.elasticsearch.index.get.GetResult;
import org.elasticsearch.index.query.IdsQueryBuilder;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.DeleteByQueryAction;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.elasticsearch.index.reindex.ReindexAction;
import org.elasticsearch.index.reindex.ReindexRequest;
import org.elasticsearch.index.reindex.UpdateByQueryAction;
import org.elasticsearch.index.reindex.UpdateByQueryRequest;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
import org.elasticsearch.tasks.RawTaskStatus;
import org.elasticsearch.tasks.TaskId;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;
@ -65,9 +74,15 @@ import org.joda.time.format.DateTimeFormat;
import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import static java.util.Collections.singletonMap;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.lessThan;
public class CrudIT extends ESRestHighLevelClientTestCase {
@ -631,7 +646,7 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest);
}
public void testReindex() throws IOException {
public void testReindex() throws Exception {
final String sourceIndex = "source1";
final String destinationIndex = "dest";
{
@ -642,15 +657,14 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
.build();
createIndex(sourceIndex, settings);
createIndex(destinationIndex, settings);
BulkRequest bulkRequest = new BulkRequest()
.add(new IndexRequest(sourceIndex, "type", "1").source(Collections.singletonMap("foo", "bar"), XContentType.JSON))
.add(new IndexRequest(sourceIndex, "type", "2").source(Collections.singletonMap("foo2", "bar2"), XContentType.JSON))
.setRefreshPolicy(RefreshPolicy.IMMEDIATE);
assertEquals(
RestStatus.OK,
highLevelClient().bulk(
new BulkRequest()
.add(new IndexRequest(sourceIndex, "type", "1")
.source(Collections.singletonMap("foo", "bar"), XContentType.JSON))
.add(new IndexRequest(sourceIndex, "type", "2")
.source(Collections.singletonMap("foo2", "bar2"), XContentType.JSON))
.setRefreshPolicy(RefreshPolicy.IMMEDIATE),
bulkRequest,
RequestOptions.DEFAULT
).status()
);
@ -692,9 +706,74 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
assertEquals(0, bulkResponse.getBulkFailures().size());
assertEquals(0, bulkResponse.getSearchFailures().size());
}
{
// test reindex rethrottling
ReindexRequest reindexRequest = new ReindexRequest();
reindexRequest.setSourceIndices(sourceIndex);
reindexRequest.setDestIndex(destinationIndex);
// these settings are supposed to halt reindexing after the first document
reindexRequest.setSourceBatchSize(1);
reindexRequest.setRequestsPerSecond(0.00001f);
final CountDownLatch reindexTaskFinished = new CountDownLatch(1);
highLevelClient().reindexAsync(reindexRequest, RequestOptions.DEFAULT, new ActionListener<BulkByScrollResponse>() {
@Override
public void onResponse(BulkByScrollResponse response) {
reindexTaskFinished.countDown();
}
@Override
public void onFailure(Exception e) {
fail(e.toString());
}
});
TaskId taskIdToRethrottle = findTaskToRethrottle(ReindexAction.NAME);
float requestsPerSecond = 1000f;
ListTasksResponse response = execute(new RethrottleRequest(taskIdToRethrottle, requestsPerSecond),
highLevelClient()::reindexRethrottle, highLevelClient()::reindexRethrottleAsync);
assertThat(response.getTasks(), hasSize(1));
assertEquals(taskIdToRethrottle, response.getTasks().get(0).getTaskId());
assertThat(response.getTasks().get(0).getStatus(), instanceOf(RawTaskStatus.class));
assertEquals(Float.toString(requestsPerSecond),
((RawTaskStatus) response.getTasks().get(0).getStatus()).toMap().get("requests_per_second").toString());
reindexTaskFinished.await(2, TimeUnit.SECONDS);
// any rethrottling with the same taskId after the reindex is done should result in a failure
response = execute(new RethrottleRequest(taskIdToRethrottle, requestsPerSecond),
highLevelClient()::reindexRethrottle, highLevelClient()::reindexRethrottleAsync);
assertTrue(response.getTasks().isEmpty());
assertFalse(response.getNodeFailures().isEmpty());
assertEquals(1, response.getNodeFailures().size());
assertEquals("Elasticsearch exception [type=resource_not_found_exception, reason=task [" + taskIdToRethrottle + "] is missing]",
response.getNodeFailures().get(0).getCause().getMessage());
}
}
public void testUpdateByQuery() throws IOException {
private TaskId findTaskToRethrottle(String actionName) throws IOException {
long start = System.nanoTime();
ListTasksRequest request = new ListTasksRequest();
request.setActions(actionName);
request.setDetailed(true);
do {
ListTasksResponse list = highLevelClient().tasks().list(request, RequestOptions.DEFAULT);
list.rethrowFailures("Finding tasks to rethrottle");
assertThat("tasks are left over from the last execution of this test",
list.getTaskGroups(), hasSize(lessThan(2)));
if (0 == list.getTaskGroups().size()) {
// The parent task hasn't started yet
continue;
}
TaskGroup taskGroup = list.getTaskGroups().get(0);
assertThat(taskGroup.getChildTasks(), empty());
return taskGroup.getTaskInfo().getTaskId();
} while (System.nanoTime() - start < TimeUnit.SECONDS.toNanos(10));
throw new AssertionError("Couldn't find tasks to rethrottle. Here are the running tasks " +
highLevelClient().tasks().list(request, RequestOptions.DEFAULT));
}
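The tiny `requestsPerSecond` used in these tests works because Elasticsearch throttles batch-based tasks by sleeping roughly `batchSize / requestsPerSecond` seconds between batches, so a batch size of 1 at 0.00001 requests per second stalls the task for about 100000 seconds after the first document, leaving plenty of time to rethrottle it. A minimal sketch of that arithmetic (the `ThrottleMath` class and `delaySeconds` helper are illustrative, not part of the client API):

```java
// Illustrative helper: approximate inter-batch delay of a throttled
// reindex/update-by-query/delete-by-query task (batchSize / requestsPerSecond).
public class ThrottleMath {
    static float delaySeconds(int batchSize, float requestsPerSecond) {
        return batchSize / requestsPerSecond;
    }

    public static void main(String[] args) {
        // The test settings: batch size 1 at 0.00001 requests/second
        // stall the task for roughly a day between batches.
        System.out.println(delaySeconds(1, 0.00001f) + " seconds");
    }
}
```

Rethrottling to 1000 requests per second shrinks that delay to a millisecond, which is why the task then finishes almost immediately.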
public void testUpdateByQuery() throws Exception {
final String sourceIndex = "source1";
{
// Prepare
@ -758,9 +837,53 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
.getSourceAsMap().get("foo"))
);
}
{
// test update-by-query rethrottling
UpdateByQueryRequest updateByQueryRequest = new UpdateByQueryRequest();
updateByQueryRequest.indices(sourceIndex);
updateByQueryRequest.setQuery(new IdsQueryBuilder().addIds("1").types("type"));
updateByQueryRequest.setRefresh(true);
// these settings are supposed to halt the update-by-query after the first document
updateByQueryRequest.setBatchSize(1);
updateByQueryRequest.setRequestsPerSecond(0.00001f);
final CountDownLatch taskFinished = new CountDownLatch(1);
highLevelClient().updateByQueryAsync(updateByQueryRequest, RequestOptions.DEFAULT, new ActionListener<BulkByScrollResponse>() {
@Override
public void onResponse(BulkByScrollResponse response) {
taskFinished.countDown();
}
@Override
public void onFailure(Exception e) {
fail(e.toString());
}
});
TaskId taskIdToRethrottle = findTaskToRethrottle(UpdateByQueryAction.NAME);
float requestsPerSecond = 1000f;
ListTasksResponse response = execute(new RethrottleRequest(taskIdToRethrottle, requestsPerSecond),
highLevelClient()::updateByQueryRethrottle, highLevelClient()::updateByQueryRethrottleAsync);
assertThat(response.getTasks(), hasSize(1));
assertEquals(taskIdToRethrottle, response.getTasks().get(0).getTaskId());
assertThat(response.getTasks().get(0).getStatus(), instanceOf(RawTaskStatus.class));
assertEquals(Float.toString(requestsPerSecond),
((RawTaskStatus) response.getTasks().get(0).getStatus()).toMap().get("requests_per_second").toString());
taskFinished.await(2, TimeUnit.SECONDS);
// any rethrottling with the same taskId after the update-by-query is done should result in a failure
response = execute(new RethrottleRequest(taskIdToRethrottle, requestsPerSecond),
highLevelClient()::updateByQueryRethrottle, highLevelClient()::updateByQueryRethrottleAsync);
assertTrue(response.getTasks().isEmpty());
assertFalse(response.getNodeFailures().isEmpty());
assertEquals(1, response.getNodeFailures().size());
assertEquals("Elasticsearch exception [type=resource_not_found_exception, reason=task [" + taskIdToRethrottle + "] is missing]",
response.getNodeFailures().get(0).getCause().getMessage());
}
}
public void testDeleteByQuery() throws IOException {
public void testDeleteByQuery() throws Exception {
final String sourceIndex = "source1";
{
// Prepare
@ -777,6 +900,8 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
.source(Collections.singletonMap("foo", 1), XContentType.JSON))
.add(new IndexRequest(sourceIndex, "type", "2")
.source(Collections.singletonMap("foo", 2), XContentType.JSON))
.add(new IndexRequest(sourceIndex, "type", "3")
.source(Collections.singletonMap("foo", 3), XContentType.JSON))
.setRefreshPolicy(RefreshPolicy.IMMEDIATE),
RequestOptions.DEFAULT
).status()
@ -800,10 +925,54 @@ public class CrudIT extends ESRestHighLevelClientTestCase {
assertEquals(0, bulkResponse.getBulkFailures().size());
assertEquals(0, bulkResponse.getSearchFailures().size());
assertEquals(
1,
2,
highLevelClient().search(new SearchRequest(sourceIndex), RequestOptions.DEFAULT).getHits().totalHits
);
}
{
// test delete-by-query rethrottling
DeleteByQueryRequest deleteByQueryRequest = new DeleteByQueryRequest();
deleteByQueryRequest.indices(sourceIndex);
deleteByQueryRequest.setQuery(new IdsQueryBuilder().addIds("2", "3").types("type"));
deleteByQueryRequest.setRefresh(true);
// these settings are supposed to halt the delete-by-query after the first document
deleteByQueryRequest.setBatchSize(1);
deleteByQueryRequest.setRequestsPerSecond(0.00001f);
final CountDownLatch taskFinished = new CountDownLatch(1);
highLevelClient().deleteByQueryAsync(deleteByQueryRequest, RequestOptions.DEFAULT, new ActionListener<BulkByScrollResponse>() {
@Override
public void onResponse(BulkByScrollResponse response) {
taskFinished.countDown();
}
@Override
public void onFailure(Exception e) {
fail(e.toString());
}
});
TaskId taskIdToRethrottle = findTaskToRethrottle(DeleteByQueryAction.NAME);
float requestsPerSecond = 1000f;
ListTasksResponse response = execute(new RethrottleRequest(taskIdToRethrottle, requestsPerSecond),
highLevelClient()::deleteByQueryRethrottle, highLevelClient()::deleteByQueryRethrottleAsync);
assertThat(response.getTasks(), hasSize(1));
assertEquals(taskIdToRethrottle, response.getTasks().get(0).getTaskId());
assertThat(response.getTasks().get(0).getStatus(), instanceOf(RawTaskStatus.class));
assertEquals(Float.toString(requestsPerSecond),
((RawTaskStatus) response.getTasks().get(0).getStatus()).toMap().get("requests_per_second").toString());
taskFinished.await(2, TimeUnit.SECONDS);
// any rethrottling with the same taskId after the delete-by-query is done should result in a failure
response = execute(new RethrottleRequest(taskIdToRethrottle, requestsPerSecond),
highLevelClient()::deleteByQueryRethrottle, highLevelClient()::deleteByQueryRethrottleAsync);
assertTrue(response.getTasks().isEmpty());
assertFalse(response.getNodeFailures().isEmpty());
assertEquals(1, response.getNodeFailures().size());
assertEquals("Elasticsearch exception [type=resource_not_found_exception, reason=task [" + taskIdToRethrottle + "] is missing]",
response.getNodeFailures().get(0).getCause().getMessage());
}
}
public void testBulkProcessorIntegration() throws IOException {

View File

@ -44,6 +44,9 @@ import org.elasticsearch.client.ml.PostDataRequest;
import org.elasticsearch.client.ml.PutCalendarRequest;
import org.elasticsearch.client.ml.PutDatafeedRequest;
import org.elasticsearch.client.ml.PutJobRequest;
import org.elasticsearch.client.ml.StartDatafeedRequest;
import org.elasticsearch.client.ml.StartDatafeedRequestTests;
import org.elasticsearch.client.ml.StopDatafeedRequest;
import org.elasticsearch.client.ml.UpdateJobRequest;
import org.elasticsearch.client.ml.calendars.Calendar;
import org.elasticsearch.client.ml.calendars.CalendarTests;
@ -261,6 +264,35 @@ public class MLRequestConvertersTests extends ESTestCase {
assertEquals(Boolean.toString(true), request.getParameters().get("force"));
}
public void testStartDatafeed() throws Exception {
String datafeedId = DatafeedConfigTests.randomValidDatafeedId();
StartDatafeedRequest datafeedRequest = StartDatafeedRequestTests.createRandomInstance(datafeedId);
Request request = MLRequestConverters.startDatafeed(datafeedRequest);
assertEquals(HttpPost.METHOD_NAME, request.getMethod());
assertEquals("/_xpack/ml/datafeeds/" + datafeedId + "/_start", request.getEndpoint());
try (XContentParser parser = createParser(JsonXContent.jsonXContent, request.getEntity().getContent())) {
StartDatafeedRequest parsedDatafeedRequest = StartDatafeedRequest.PARSER.apply(parser, null);
assertThat(parsedDatafeedRequest, equalTo(datafeedRequest));
}
}
public void testStopDatafeed() throws Exception {
StopDatafeedRequest datafeedRequest = new StopDatafeedRequest("datafeed_1", "datafeed_2");
datafeedRequest.setForce(true);
datafeedRequest.setTimeout(TimeValue.timeValueMinutes(10));
datafeedRequest.setAllowNoDatafeeds(true);
Request request = MLRequestConverters.stopDatafeed(datafeedRequest);
assertEquals(HttpPost.METHOD_NAME, request.getMethod());
assertEquals("/_xpack/ml/datafeeds/" +
Strings.collectionToCommaDelimitedString(datafeedRequest.getDatafeedIds()) +
"/_stop", request.getEndpoint());
try (XContentParser parser = createParser(JsonXContent.jsonXContent, request.getEntity().getContent())) {
StopDatafeedRequest parsedDatafeedRequest = StopDatafeedRequest.PARSER.apply(parser, null);
assertThat(parsedDatafeedRequest, equalTo(datafeedRequest));
}
}
public void testDeleteForecast() {
String jobId = randomAlphaOfLength(10);
DeleteForecastRequest deleteForecastRequest = new DeleteForecastRequest(jobId);

View File

@ -147,7 +147,7 @@ public class MachineLearningGetResultsIT extends ESRestHighLevelClientTestCase {
@After
public void deleteJob() throws IOException {
new MlRestTestStateCleaner(logger, client()).clearMlMetadata();
new MlTestStateCleaner(logger, highLevelClient().machineLearning()).clearMlMetadata();
}
public void testGetCategories() throws IOException {

View File

@ -20,8 +20,12 @@ package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.ml.CloseJobRequest;
import org.elasticsearch.client.ml.CloseJobResponse;
@ -51,6 +55,10 @@ import org.elasticsearch.client.ml.PutDatafeedRequest;
import org.elasticsearch.client.ml.PutDatafeedResponse;
import org.elasticsearch.client.ml.PutJobRequest;
import org.elasticsearch.client.ml.PutJobResponse;
import org.elasticsearch.client.ml.StartDatafeedRequest;
import org.elasticsearch.client.ml.StartDatafeedResponse;
import org.elasticsearch.client.ml.StopDatafeedRequest;
import org.elasticsearch.client.ml.StopDatafeedResponse;
import org.elasticsearch.client.ml.UpdateJobRequest;
import org.elasticsearch.client.ml.calendars.Calendar;
import org.elasticsearch.client.ml.calendars.CalendarTests;
@ -63,6 +71,7 @@ import org.elasticsearch.client.ml.job.config.JobState;
import org.elasticsearch.client.ml.job.config.JobUpdate;
import org.elasticsearch.client.ml.job.stats.JobStats;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.rest.RestStatus;
import org.junit.After;
@ -83,7 +92,7 @@ public class MachineLearningIT extends ESRestHighLevelClientTestCase {
@After
public void cleanUp() throws IOException {
new MlRestTestStateCleaner(logger, client()).clearMlMetadata();
new MlTestStateCleaner(logger, highLevelClient().machineLearning()).clearMlMetadata();
}
public void testPutJob() throws Exception {
@ -416,6 +425,145 @@ public class MachineLearningIT extends ESRestHighLevelClientTestCase {
assertTrue(response.isAcknowledged());
}
public void testStartDatafeed() throws Exception {
String jobId = "test-start-datafeed";
String indexName = "start_data_1";
// Set up the index and docs
CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
createIndexRequest.mapping("doc", "timestamp", "type=date", "total", "type=long");
highLevelClient().indices().create(createIndexRequest, RequestOptions.DEFAULT);
BulkRequest bulk = new BulkRequest();
bulk.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
long now = (System.currentTimeMillis() / 1000) * 1000;
long thePast = now - 60000;
int i = 0;
long pastCopy = thePast;
while (pastCopy < now) {
IndexRequest doc = new IndexRequest();
doc.index(indexName);
doc.type("doc");
doc.id("id" + i);
doc.source("{\"total\":" + randomInt(1000) + ",\"timestamp\":" + pastCopy + "}", XContentType.JSON);
bulk.add(doc);
pastCopy += 1000;
i++;
}
highLevelClient().bulk(bulk, RequestOptions.DEFAULT);
final long totalDocCount = i;
// create the job and the datafeed
Job job = buildJob(jobId);
MachineLearningClient machineLearningClient = highLevelClient().machineLearning();
machineLearningClient.putJob(new PutJobRequest(job), RequestOptions.DEFAULT);
machineLearningClient.openJob(new OpenJobRequest(jobId), RequestOptions.DEFAULT);
String datafeedId = jobId + "-feed";
DatafeedConfig datafeed = DatafeedConfig.builder(datafeedId, jobId)
.setIndices(indexName)
.setQueryDelay(TimeValue.timeValueSeconds(1))
.setTypes(Arrays.asList("doc"))
.setFrequency(TimeValue.timeValueSeconds(1)).build();
machineLearningClient.putDatafeed(new PutDatafeedRequest(datafeed), RequestOptions.DEFAULT);
StartDatafeedRequest startDatafeedRequest = new StartDatafeedRequest(datafeedId);
startDatafeedRequest.setStart(String.valueOf(thePast));
// Should only process two documents
startDatafeedRequest.setEnd(String.valueOf(thePast + 2000));
StartDatafeedResponse response = execute(startDatafeedRequest,
machineLearningClient::startDatafeed,
machineLearningClient::startDatafeedAsync);
assertTrue(response.isStarted());
assertBusy(() -> {
JobStats stats = machineLearningClient.getJobStats(new GetJobStatsRequest(jobId), RequestOptions.DEFAULT).jobStats().get(0);
assertEquals(2L, stats.getDataCounts().getInputRecordCount());
assertEquals(JobState.CLOSED, stats.getState());
}, 30, TimeUnit.SECONDS);
machineLearningClient.openJob(new OpenJobRequest(jobId), RequestOptions.DEFAULT);
StartDatafeedRequest wholeDataFeed = new StartDatafeedRequest(datafeedId);
// Process all documents and end the stream
wholeDataFeed.setEnd(String.valueOf(now));
StartDatafeedResponse wholeResponse = execute(wholeDataFeed,
machineLearningClient::startDatafeed,
machineLearningClient::startDatafeedAsync);
assertTrue(wholeResponse.isStarted());
assertBusy(() -> {
JobStats stats = machineLearningClient.getJobStats(new GetJobStatsRequest(jobId), RequestOptions.DEFAULT).jobStats().get(0);
assertEquals(totalDocCount, stats.getDataCounts().getInputRecordCount());
assertEquals(JobState.CLOSED, stats.getState());
}, 30, TimeUnit.SECONDS);
}
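The timestamp loop in the test above rounds the current epoch millis down to a whole second and then writes one document per second covering the previous minute, so exactly 60 documents are indexed. A standalone sketch of that arithmetic (the `TimestampSketch` class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the timestamp generation used by the datafeed test:
// truncate "now" to whole-second granularity, then emit one timestamp per
// second for the preceding minute.
public class TimestampSketch {
    static List<Long> lastMinuteTimestamps(long nowMillis) {
        long now = (nowMillis / 1000) * 1000; // truncate to second granularity
        long t = now - 60000;                 // one minute in the past
        List<Long> timestamps = new ArrayList<>();
        while (t < now) {
            timestamps.add(t);
            t += 1000;
        }
        return timestamps;
    }

    public static void main(String[] args) {
        System.out.println(lastMinuteTimestamps(System.currentTimeMillis()).size()); // 60
    }
}
```

Truncating to whole seconds keeps the generated timestamps aligned with the datafeed's one-second frequency, which makes the expected record counts deterministic.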
public void testStopDatafeed() throws Exception {
String jobId1 = "test-stop-datafeed1";
String jobId2 = "test-stop-datafeed2";
String jobId3 = "test-stop-datafeed3";
String indexName = "stop_data_1";
// Set up the index
CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
createIndexRequest.mapping("doc", "timestamp", "type=date", "total", "type=long");
highLevelClient().indices().create(createIndexRequest, RequestOptions.DEFAULT);
// create the job and the datafeed
Job job1 = buildJob(jobId1);
putJob(job1);
openJob(job1);
Job job2 = buildJob(jobId2);
putJob(job2);
openJob(job2);
Job job3 = buildJob(jobId3);
putJob(job3);
openJob(job3);
String datafeedId1 = createAndPutDatafeed(jobId1, indexName);
String datafeedId2 = createAndPutDatafeed(jobId2, indexName);
String datafeedId3 = createAndPutDatafeed(jobId3, indexName);
MachineLearningClient machineLearningClient = highLevelClient().machineLearning();
machineLearningClient.startDatafeed(new StartDatafeedRequest(datafeedId1), RequestOptions.DEFAULT);
machineLearningClient.startDatafeed(new StartDatafeedRequest(datafeedId2), RequestOptions.DEFAULT);
machineLearningClient.startDatafeed(new StartDatafeedRequest(datafeedId3), RequestOptions.DEFAULT);
{
StopDatafeedRequest request = new StopDatafeedRequest(datafeedId1);
request.setAllowNoDatafeeds(false);
StopDatafeedResponse stopDatafeedResponse = execute(request,
machineLearningClient::stopDatafeed,
machineLearningClient::stopDatafeedAsync);
assertTrue(stopDatafeedResponse.isStopped());
}
{
StopDatafeedRequest request = new StopDatafeedRequest(datafeedId2, datafeedId3);
request.setAllowNoDatafeeds(false);
StopDatafeedResponse stopDatafeedResponse = execute(request,
machineLearningClient::stopDatafeed,
machineLearningClient::stopDatafeedAsync);
assertTrue(stopDatafeedResponse.isStopped());
}
{
StopDatafeedResponse stopDatafeedResponse = execute(new StopDatafeedRequest("datafeed_that_doesnot_exist*"),
machineLearningClient::stopDatafeed,
machineLearningClient::stopDatafeedAsync);
assertTrue(stopDatafeedResponse.isStopped());
}
{
StopDatafeedRequest request = new StopDatafeedRequest("datafeed_that_doesnot_exist*");
request.setAllowNoDatafeeds(false);
ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class,
() -> execute(request, machineLearningClient::stopDatafeed, machineLearningClient::stopDatafeedAsync));
assertThat(exception.status().getStatus(), equalTo(404));
}
}
public void testDeleteForecast() throws Exception {
String jobId = "test-delete-forecast";
@ -551,7 +699,8 @@ public class MachineLearningIT extends ESRestHighLevelClientTestCase {
.setDetectorDescription(randomAlphaOfLength(10))
.build();
AnalysisConfig.Builder configBuilder = new AnalysisConfig.Builder(Arrays.asList(detector));
configBuilder.setBucketSpan(new TimeValue(randomIntBetween(1, 10), TimeUnit.SECONDS));
// should not be random, see: https://github.com/elastic/ml-cpp/issues/208
configBuilder.setBucketSpan(new TimeValue(5, TimeUnit.SECONDS));
builder.setAnalysisConfig(configBuilder);
DataDescription.Builder dataDescription = new DataDescription.Builder();
@ -561,4 +710,23 @@ public class MachineLearningIT extends ESRestHighLevelClientTestCase {
return builder.build();
}
private void putJob(Job job) throws IOException {
highLevelClient().machineLearning().putJob(new PutJobRequest(job), RequestOptions.DEFAULT);
}
private void openJob(Job job) throws IOException {
highLevelClient().machineLearning().openJob(new OpenJobRequest(job.getId()), RequestOptions.DEFAULT);
}
private String createAndPutDatafeed(String jobId, String indexName) throws IOException {
String datafeedId = jobId + "-feed";
DatafeedConfig datafeed = DatafeedConfig.builder(datafeedId, jobId)
.setIndices(indexName)
.setQueryDelay(TimeValue.timeValueSeconds(1))
.setTypes(Arrays.asList("doc"))
.setFrequency(TimeValue.timeValueSeconds(1)).build();
highLevelClient().machineLearning().putDatafeed(new PutDatafeedRequest(datafeed), RequestOptions.DEFAULT);
return datafeedId;
}
}

View File

@ -1,109 +0,0 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.test.rest.ESRestTestCase;
import java.io.IOException;
import java.util.List;
import java.util.Map;
/**
* This is temporarily duplicated from the server side.
* @TODO Replace with an implementation using the HLRC once
* the APIs for managing datafeeds are implemented.
*/
public class MlRestTestStateCleaner {
private final Logger logger;
private final RestClient adminClient;
public MlRestTestStateCleaner(Logger logger, RestClient adminClient) {
this.logger = logger;
this.adminClient = adminClient;
}
public void clearMlMetadata() throws IOException {
deleteAllDatafeeds();
deleteAllJobs();
// indices will be deleted by the ESRestTestCase class
}
@SuppressWarnings("unchecked")
private void deleteAllDatafeeds() throws IOException {
final Request datafeedsRequest = new Request("GET", "/_xpack/ml/datafeeds");
datafeedsRequest.addParameter("filter_path", "datafeeds");
final Response datafeedsResponse = adminClient.performRequest(datafeedsRequest);
final List<Map<String, Object>> datafeeds =
(List<Map<String, Object>>) XContentMapValues.extractValue("datafeeds", ESRestTestCase.entityAsMap(datafeedsResponse));
if (datafeeds == null) {
return;
}
try {
adminClient.performRequest(new Request("POST", "/_xpack/ml/datafeeds/_all/_stop"));
} catch (Exception e1) {
logger.warn("failed to stop all datafeeds. Forcing stop", e1);
try {
adminClient.performRequest(new Request("POST", "/_xpack/ml/datafeeds/_all/_stop?force=true"));
} catch (Exception e2) {
logger.warn("Force-closing all data feeds failed", e2);
}
throw new RuntimeException(
"Had to resort to force-stopping datafeeds, something went wrong?", e1);
}
for (Map<String, Object> datafeed : datafeeds) {
String datafeedId = (String) datafeed.get("datafeed_id");
adminClient.performRequest(new Request("DELETE", "/_xpack/ml/datafeeds/" + datafeedId));
}
}
private void deleteAllJobs() throws IOException {
final Request jobsRequest = new Request("GET", "/_xpack/ml/anomaly_detectors");
jobsRequest.addParameter("filter_path", "jobs");
final Response response = adminClient.performRequest(jobsRequest);
@SuppressWarnings("unchecked")
final List<Map<String, Object>> jobConfigs =
(List<Map<String, Object>>) XContentMapValues.extractValue("jobs", ESRestTestCase.entityAsMap(response));
if (jobConfigs == null) {
return;
}
try {
adminClient.performRequest(new Request("POST", "/_xpack/ml/anomaly_detectors/_all/_close"));
} catch (Exception e1) {
logger.warn("failed to close all jobs. Forcing closed", e1);
try {
adminClient.performRequest(new Request("POST", "/_xpack/ml/anomaly_detectors/_all/_close?force=true"));
} catch (Exception e2) {
logger.warn("Force-closing all jobs failed", e2);
}
throw new RuntimeException("Had to resort to force-closing jobs, something went wrong?",
e1);
}
for (Map<String, Object> jobConfig : jobConfigs) {
String jobId = (String) jobConfig.get("job_id");
adminClient.performRequest(new Request("DELETE", "/_xpack/ml/anomaly_detectors/" + jobId));
}
}
}

View File

@ -0,0 +1,102 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.client.ml.CloseJobRequest;
import org.elasticsearch.client.ml.DeleteDatafeedRequest;
import org.elasticsearch.client.ml.DeleteJobRequest;
import org.elasticsearch.client.ml.GetDatafeedRequest;
import org.elasticsearch.client.ml.GetDatafeedResponse;
import org.elasticsearch.client.ml.GetJobRequest;
import org.elasticsearch.client.ml.GetJobResponse;
import org.elasticsearch.client.ml.StopDatafeedRequest;
import org.elasticsearch.client.ml.datafeed.DatafeedConfig;
import org.elasticsearch.client.ml.job.config.Job;
import java.io.IOException;
/**
* Cleans up any ML resources created during tests
*/
public class MlTestStateCleaner {
private final Logger logger;
private final MachineLearningClient mlClient;
public MlTestStateCleaner(Logger logger, MachineLearningClient mlClient) {
this.logger = logger;
this.mlClient = mlClient;
}
public void clearMlMetadata() throws IOException {
deleteAllDatafeeds();
deleteAllJobs();
}
private void deleteAllDatafeeds() throws IOException {
stopAllDatafeeds();
GetDatafeedResponse getDatafeedResponse = mlClient.getDatafeed(GetDatafeedRequest.getAllDatafeedsRequest(), RequestOptions.DEFAULT);
for (DatafeedConfig datafeed : getDatafeedResponse.datafeeds()) {
mlClient.deleteDatafeed(new DeleteDatafeedRequest(datafeed.getId()), RequestOptions.DEFAULT);
}
}
private void stopAllDatafeeds() {
StopDatafeedRequest stopAllDatafeedsRequest = StopDatafeedRequest.stopAllDatafeedsRequest();
try {
mlClient.stopDatafeed(stopAllDatafeedsRequest, RequestOptions.DEFAULT);
} catch (Exception e1) {
logger.warn("failed to stop all datafeeds. Forcing stop", e1);
try {
stopAllDatafeedsRequest.setForce(true);
mlClient.stopDatafeed(stopAllDatafeedsRequest, RequestOptions.DEFAULT);
} catch (Exception e2) {
logger.warn("Force-closing all data feeds failed", e2);
}
throw new RuntimeException("Had to resort to force-stopping datafeeds, something went wrong?", e1);
}
}
private void deleteAllJobs() throws IOException {
closeAllJobs();
GetJobResponse getJobResponse = mlClient.getJob(GetJobRequest.getAllJobsRequest(), RequestOptions.DEFAULT);
for (Job job : getJobResponse.jobs()) {
mlClient.deleteJob(new DeleteJobRequest(job.getId()), RequestOptions.DEFAULT);
}
}
private void closeAllJobs() {
CloseJobRequest closeAllJobsRequest = CloseJobRequest.closeAllJobsRequest();
try {
mlClient.closeJob(closeAllJobsRequest, RequestOptions.DEFAULT);
} catch (Exception e1) {
logger.warn("failed to close all jobs. Forcing closed", e1);
closeAllJobsRequest.setForce(true);
try {
mlClient.closeJob(closeAllJobsRequest, RequestOptions.DEFAULT);
} catch (Exception e2) {
logger.warn("Force-closing all jobs failed", e2);
}
throw new RuntimeException("Had to resort to force-closing jobs, something went wrong?", e1);
}
}
}
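`MlTestStateCleaner` stops datafeeds and closes jobs gracefully first, falls back to force mode on failure, and still rethrows so the test run is flagged. That pattern can be sketched standalone; the `Stoppable` interface and class name below are hypothetical stand-ins, not part of the client API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StopWithForceFallback {
    // Hypothetical stand-in for "stop all datafeeds" / "close all jobs"
    interface Stoppable {
        void stop(boolean force) throws Exception;
    }

    static void stopAll(Stoppable target) {
        try {
            target.stop(false); // graceful attempt first
        } catch (Exception e1) {
            try {
                target.stop(true); // fall back to force mode
            } catch (Exception e2) {
                e1.addSuppressed(e2);
            }
            // rethrow even if forcing worked, so the anomaly is surfaced
            throw new RuntimeException("Had to resort to force-stopping, something went wrong?", e1);
        }
    }

    public static void main(String[] args) {
        AtomicBoolean forced = new AtomicBoolean(false);
        boolean threw = false;
        try {
            stopAll(force -> {
                if (!force) {
                    throw new Exception("graceful stop failed");
                }
                forced.set(true);
            });
        } catch (RuntimeException e) {
            threw = true;
        }
        System.out.println(forced.get() && threw); // prints true
    }
}
```

Note the design choice mirrored from the cleaner: a successful force-stop is still treated as a failure of the test fixture, not silently swallowed.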

View File

@ -90,7 +90,7 @@ public class RankEvalIT extends ESRestHighLevelClientTestCase {
if (id.equals("berlin") || id.equals("amsterdam5")) {
assertFalse(hit.getRating().isPresent());
} else {
assertEquals(1, hit.getRating().get().intValue());
assertEquals(1, hit.getRating().getAsInt());
}
}
EvalQueryQuality berlinQueryQuality = partialResults.get("berlin_query");
@ -100,7 +100,7 @@ public class RankEvalIT extends ESRestHighLevelClientTestCase {
for (RatedSearchHit hit : hitsAndRatings) {
String id = hit.getSearchHit().getId();
if (id.equals("berlin")) {
assertEquals(1, hit.getRating().get().intValue());
assertEquals(1, hit.getRating().getAsInt());
} else {
assertFalse(hit.getRating().isPresent());
}

View File

@ -59,6 +59,7 @@ import org.elasticsearch.common.CheckedBiConsumer;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.io.Streams;
import org.elasticsearch.common.lucene.uid.Versions;
import org.elasticsearch.common.unit.TimeValue;
@ -95,6 +96,7 @@ import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.rescore.QueryRescorerBuilder;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.completion.CompletionSuggestionBuilder;
import org.elasticsearch.tasks.TaskId;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.RandomObjects;
@ -317,6 +319,13 @@ public class RequestConvertersTests extends ESTestCase {
if (randomBoolean()) {
reindexRequest.setDestPipeline("my_pipeline");
}
if (randomBoolean()) {
float requestsPerSecond = (float) randomDoubleBetween(0.0, 10.0, false);
expectedParams.put(RethrottleRequest.REQUEST_PER_SECOND_PARAMETER, Float.toString(requestsPerSecond));
reindexRequest.setRequestsPerSecond(requestsPerSecond);
} else {
expectedParams.put(RethrottleRequest.REQUEST_PER_SECOND_PARAMETER, "-1");
}
if (randomBoolean()) {
reindexRequest.setDestRouting("=cat");
}
@ -359,6 +368,13 @@ public class RequestConvertersTests extends ESTestCase {
updateByQueryRequest.setPipeline("my_pipeline");
expectedParams.put("pipeline", "my_pipeline");
}
if (randomBoolean()) {
float requestsPerSecond = (float) randomDoubleBetween(0.0, 10.0, false);
expectedParams.put("requests_per_second", Float.toString(requestsPerSecond));
updateByQueryRequest.setRequestsPerSecond(requestsPerSecond);
} else {
expectedParams.put("requests_per_second", "-1");
}
if (randomBoolean()) {
updateByQueryRequest.setRouting("=cat");
expectedParams.put("routing", "=cat");
@ -430,6 +446,13 @@ public class RequestConvertersTests extends ESTestCase {
if (randomBoolean()) {
deleteByQueryRequest.setQuery(new TermQueryBuilder("foo", "fooval"));
}
if (randomBoolean()) {
float requestsPerSecond = (float) randomDoubleBetween(0.0, 10.0, false);
expectedParams.put("requests_per_second", Float.toString(requestsPerSecond));
deleteByQueryRequest.setRequestsPerSecond(requestsPerSecond);
} else {
expectedParams.put("requests_per_second", "-1");
}
setRandomIndicesOptions(deleteByQueryRequest::setIndicesOptions, deleteByQueryRequest::indicesOptions, expectedParams);
setRandomTimeout(deleteByQueryRequest::setTimeout, ReplicationRequest.DEFAULT_TIMEOUT, expectedParams);
Request request = RequestConverters.deleteByQuery(deleteByQueryRequest);
@ -444,6 +467,43 @@ public class RequestConvertersTests extends ESTestCase {
assertToXContentBody(deleteByQueryRequest, request.getEntity());
}
public void testRethrottle() {
TaskId taskId = new TaskId(randomAlphaOfLength(10), randomIntBetween(1, 100));
RethrottleRequest rethrottleRequest;
Float requestsPerSecond;
Map<String, String> expectedParams = new HashMap<>();
if (frequently()) {
requestsPerSecond = (float) randomDoubleBetween(0.0, 100.0, true);
rethrottleRequest = new RethrottleRequest(taskId, requestsPerSecond);
expectedParams.put(RethrottleRequest.REQUEST_PER_SECOND_PARAMETER, Float.toString(requestsPerSecond));
} else {
rethrottleRequest = new RethrottleRequest(taskId);
expectedParams.put(RethrottleRequest.REQUEST_PER_SECOND_PARAMETER, "-1");
}
expectedParams.put("group_by", "none");
List<Tuple<String, Supplier<Request>>> variants = new ArrayList<>();
variants.add(new Tuple<String, Supplier<Request>>("_reindex", () -> RequestConverters.rethrottleReindex(rethrottleRequest)));
variants.add(new Tuple<String, Supplier<Request>>("_update_by_query",
() -> RequestConverters.rethrottleUpdateByQuery(rethrottleRequest)));
variants.add(new Tuple<String, Supplier<Request>>("_delete_by_query",
() -> RequestConverters.rethrottleDeleteByQuery(rethrottleRequest)));
for (Tuple<String, Supplier<Request>> variant : variants) {
Request request = variant.v2().get();
assertEquals("/" + variant.v1() + "/" + taskId + "/_rethrottle", request.getEndpoint());
assertEquals(HttpPost.METHOD_NAME, request.getMethod());
assertEquals(expectedParams, request.getParameters());
assertNull(request.getEntity());
}
// test illegal RethrottleRequest values
Exception e = expectThrows(NullPointerException.class, () -> new RethrottleRequest(null, 1.0f));
assertEquals("taskId cannot be null", e.getMessage());
e = expectThrows(IllegalArgumentException.class, () -> new RethrottleRequest(new TaskId("taskId", 1), -5.0f));
assertEquals("requestsPerSecond needs to be positive value but was [-5.0]", e.getMessage());
}
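The endpoint shape and requests-per-second validation asserted in `testRethrottle` can be sketched standalone. The names here are illustrative only; the real logic lives in `RequestConverters` and `RethrottleRequest`:

```java
public class RethrottleSketch {
    // All three variants share the layout /<action>/<taskId>/_rethrottle
    static String endpoint(String action, String taskId) {
        return "/" + action + "/" + taskId + "/_rethrottle";
    }

    // Mirrors the constraint the test asserts: the throttle must be positive
    static float requestsPerSecond(float value) {
        if (value <= 0) {
            throw new IllegalArgumentException(
                "requestsPerSecond needs to be positive value but was [" + value + "]");
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(endpoint("_update_by_query", "node:42"));
        boolean rejected = false;
        try {
            requestsPerSecond(-5.0f);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected); // prints true
    }
}
```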
public void testIndex() throws IOException {
String index = randomAlphaOfLengthBetween(3, 10);
String type = randomAlphaOfLengthBetween(3, 10);

View File

@ -20,6 +20,7 @@
package org.elasticsearch.client;
import com.fasterxml.jackson.core.JsonParseException;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
@ -658,7 +659,6 @@ public class RestHighLevelClientTests extends ESTestCase {
"indices.get_upgrade",
"indices.put_alias",
"mtermvectors",
"reindex_rethrottle",
"render_search_template",
"scripts_painless_execute",
"tasks.get",

View File

@ -27,6 +27,10 @@ import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.client.rollup.GetRollupJobRequest;
import org.elasticsearch.client.rollup.GetRollupJobResponse;
import org.elasticsearch.client.rollup.GetRollupJobResponse.IndexerState;
import org.elasticsearch.client.rollup.GetRollupJobResponse.JobWrapper;
import org.elasticsearch.client.rollup.PutRollupJobRequest;
import org.elasticsearch.client.rollup.PutRollupJobResponse;
import org.elasticsearch.client.rollup.job.config.DateHistogramGroupConfig;
@ -50,6 +54,13 @@ import java.util.Locale;
import java.util.Map;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.either;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.lessThan;
public class RollupIT extends ESRestHighLevelClientTestCase {
@ -57,7 +68,7 @@ public class RollupIT extends ESRestHighLevelClientTestCase {
SumAggregationBuilder.NAME, AvgAggregationBuilder.NAME, ValueCountAggregationBuilder.NAME);
@SuppressWarnings("unchecked")
public void testPutRollupJob() throws Exception {
public void testPutAndGetRollupJob() throws Exception {
double sum = 0.0d;
int max = Integer.MIN_VALUE;
int min = Integer.MAX_VALUE;
@ -90,7 +101,7 @@ public class RollupIT extends ESRestHighLevelClientTestCase {
BulkResponse bulkResponse = highLevelClient().bulk(bulkRequest, RequestOptions.DEFAULT);
assertEquals(RestStatus.OK, bulkResponse.status());
if (bulkResponse.hasFailures()) {
for (BulkItemResponse itemResponse : bulkResponse.getItems()) {
if (itemResponse.isFailed()) {
logger.fatal(itemResponse.getFailureMessage());
@ -158,5 +169,26 @@ public class RollupIT extends ESRestHighLevelClientTestCase {
}
}
});
// TODO when we move cleaning rollup into ESTestCase we can randomly choose the _all version of this request
GetRollupJobRequest getRollupJobRequest = new GetRollupJobRequest(id);
GetRollupJobResponse getResponse = execute(getRollupJobRequest, rollupClient::getRollupJob, rollupClient::getRollupJobAsync);
assertThat(getResponse.getJobs(), hasSize(1));
JobWrapper job = getResponse.getJobs().get(0);
assertEquals(putRollupJobRequest.getConfig(), job.getJob());
assertThat(job.getStats().getNumPages(), lessThan(10L));
assertEquals(numDocs, job.getStats().getNumDocuments());
assertThat(job.getStats().getNumInvocations(), greaterThan(0L));
assertEquals(1, job.getStats().getOutputDocuments());
assertThat(job.getStatus().getState(), either(equalTo(IndexerState.STARTED)).or(equalTo(IndexerState.INDEXING)));
assertThat(job.getStatus().getCurrentPosition(), hasKey("date.date_histogram"));
assertEquals(true, job.getStatus().getUpgradedDocumentId());
}
public void testGetMissingRollupJob() throws Exception {
GetRollupJobRequest getRollupJobRequest = new GetRollupJobRequest("missing");
RollupClient rollupClient = highLevelClient().rollup();
GetRollupJobResponse getResponse = execute(getRollupJobRequest, rollupClient::getRollupJob, rollupClient::getRollupJobAsync);
assertThat(getResponse.getJobs(), empty());
}
}

View File

@ -0,0 +1,61 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPut;
import org.elasticsearch.client.rollup.GetRollupJobRequest;
import org.elasticsearch.client.rollup.PutRollupJobRequest;
import org.elasticsearch.client.rollup.job.config.RollupJobConfig;
import org.elasticsearch.client.rollup.job.config.RollupJobConfigTests;
import org.elasticsearch.test.ESTestCase;
import java.io.IOException;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.nullValue;
public class RollupRequestConvertersTests extends ESTestCase {
public void testPutJob() throws IOException {
String job = randomAlphaOfLength(5);
RollupJobConfig config = RollupJobConfigTests.randomRollupJobConfig(job);
PutRollupJobRequest put = new PutRollupJobRequest(config);
Request request = RollupRequestConverters.putJob(put);
assertThat(request.getEndpoint(), equalTo("/_xpack/rollup/job/" + job));
assertThat(HttpPut.METHOD_NAME, equalTo(request.getMethod()));
assertThat(request.getParameters().keySet(), empty());
RequestConvertersTests.assertToXContentBody(put, request.getEntity());
}
public void testGetJob() {
boolean getAll = randomBoolean();
String job = getAll ? "_all" : RequestConvertersTests.randomIndicesNames(1, 1)[0];
GetRollupJobRequest get = getAll ? new GetRollupJobRequest() : new GetRollupJobRequest(job);
Request request = RollupRequestConverters.getJob(get);
assertThat(request.getEndpoint(), equalTo("/_xpack/rollup/job/" + job));
assertThat(HttpGet.METHOD_NAME, equalTo(request.getMethod()));
assertThat(request.getParameters().keySet(), empty());
assertThat(request.getEntity(), nullValue());
}
}

View File

@ -19,9 +19,11 @@
package org.elasticsearch.client;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpPut;
import org.elasticsearch.client.security.DisableUserRequest;
import org.elasticsearch.client.security.EnableUserRequest;
import org.elasticsearch.client.security.ChangePasswordRequest;
import org.elasticsearch.client.security.PutUserRequest;
import org.elasticsearch.client.security.RefreshPolicy;
import org.elasticsearch.test.ESTestCase;
@ -91,9 +93,34 @@ public class SecurityRequestConvertersTests extends ESTestCase {
private static Map<String, String> getExpectedParamsFromRefreshPolicy(RefreshPolicy refreshPolicy) {
if (refreshPolicy != RefreshPolicy.NONE) {
return Collections.singletonMap("refresh", refreshPolicy.getValue());
} else {
return Collections.emptyMap();
}
}
public void testChangePassword() throws IOException {
final String username = randomAlphaOfLengthBetween(4, 12);
final char[] password = randomAlphaOfLengthBetween(8, 12).toCharArray();
final RefreshPolicy refreshPolicy = randomFrom(RefreshPolicy.values());
final Map<String, String> expectedParams = getExpectedParamsFromRefreshPolicy(refreshPolicy);
ChangePasswordRequest changePasswordRequest = new ChangePasswordRequest(username, password, refreshPolicy);
Request request = SecurityRequestConverters.changePassword(changePasswordRequest);
assertEquals(HttpPost.METHOD_NAME, request.getMethod());
assertEquals("/_xpack/security/user/" + changePasswordRequest.getUsername() + "/_password", request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertToXContentBody(changePasswordRequest, request.getEntity());
}
public void testSelfChangePassword() throws IOException {
final char[] password = randomAlphaOfLengthBetween(8, 12).toCharArray();
final RefreshPolicy refreshPolicy = randomFrom(RefreshPolicy.values());
final Map<String, String> expectedParams = getExpectedParamsFromRefreshPolicy(refreshPolicy);
ChangePasswordRequest changePasswordRequest = new ChangePasswordRequest(null, password, refreshPolicy);
Request request = SecurityRequestConverters.changePassword(changePasswordRequest);
assertEquals(HttpPost.METHOD_NAME, request.getMethod());
assertEquals("/_xpack/security/user/_password", request.getEndpoint());
assertEquals(expectedParams, request.getParameters());
assertToXContentBody(changePasswordRequest, request.getEntity());
}
}

View File

@ -24,6 +24,7 @@ import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.LatchedActionListener;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
@ -50,6 +51,7 @@ import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.RethrottleRequest;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.settings.Settings;
@ -75,6 +77,7 @@ import org.elasticsearch.script.Script;
import org.elasticsearch.script.ScriptType;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
import org.elasticsearch.search.sort.SortOrder;
import org.elasticsearch.tasks.TaskId;
import java.util.Collections;
import java.util.Date;
@ -92,25 +95,12 @@ import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.not;
/**
* This class is used to generate the Java CRUD API documentation.
* You need to wrap your code between two tags like:
* // tag::example
* // end::example
*
* Where example is your tag name.
*
* Then in the documentation, you can extract what is between tag and end tags with
* ["source","java",subs="attributes,callouts,macros"]
* --------------------------------------------------
* include-tagged::{doc-tests}/CRUDDocumentationIT.java[example]
* --------------------------------------------------
*
* The column width of the code block is 84. If the code contains a line longer
* than 84, the line will be cut and a horizontal scroll bar will be displayed.
* (the code indentation of the tag is not included in the width)
* Documentation for CRUD APIs in the high level java client.
* Code wrapped in {@code tag} and {@code end} tags is included in the docs.
*/
public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
@SuppressWarnings("unused")
public void testIndex() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -275,6 +265,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testUpdate() throws Exception {
RestHighLevelClient client = highLevelClient();
{
@ -543,6 +534,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testDelete() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -662,6 +654,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testBulk() throws Exception {
RestHighLevelClient client = highLevelClient();
{
@ -764,6 +757,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testReindex() throws Exception {
RestHighLevelClient client = highLevelClient();
{
@ -902,6 +896,59 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testReindexRethrottle() throws Exception {
RestHighLevelClient client = highLevelClient();
TaskId taskId = new TaskId("oTUltX4IQMOUUVeiohTt8A:124");
{
// tag::rethrottle-disable-request
RethrottleRequest request = new RethrottleRequest(taskId); // <1>
// end::rethrottle-disable-request
}
{
// tag::rethrottle-request
RethrottleRequest request = new RethrottleRequest(taskId, 100.0f); // <1>
// end::rethrottle-request
}
{
RethrottleRequest request = new RethrottleRequest(taskId);
// tag::rethrottle-request-execution
client.reindexRethrottle(request, RequestOptions.DEFAULT); // <1>
client.updateByQueryRethrottle(request, RequestOptions.DEFAULT); // <2>
client.deleteByQueryRethrottle(request, RequestOptions.DEFAULT); // <3>
// end::rethrottle-request-execution
}
// tag::rethrottle-request-async-listener
ActionListener<ListTasksResponse> listener = new ActionListener<ListTasksResponse>() {
@Override
public void onResponse(ListTasksResponse response) {
// <1>
}
@Override
public void onFailure(Exception e) {
// <2>
}
};
// end::rethrottle-request-async-listener
// Replace the empty listener by a blocking listener in test
final CountDownLatch latch = new CountDownLatch(3);
listener = new LatchedActionListener<>(listener, latch);
RethrottleRequest request = new RethrottleRequest(taskId);
// tag::rethrottle-execute-async
client.reindexRethrottleAsync(request, RequestOptions.DEFAULT, listener); // <1>
client.updateByQueryRethrottleAsync(request, RequestOptions.DEFAULT, listener); // <2>
client.deleteByQueryRethrottleAsync(request, RequestOptions.DEFAULT, listener); // <3>
// end::rethrottle-execute-async
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
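The blocking-listener trick used throughout these docs tests, wrapping the example listener in a `LatchedActionListener` so the test can await the async call, can be sketched without the client classes. `Listener` and `latched` below are hypothetical stand-ins for `ActionListener` and `LatchedActionListener`:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchedListenerSketch {
    // Minimal stand-in for ActionListener<T>
    interface Listener<T> {
        void onResponse(T response);
        void onFailure(Exception e);
    }

    // Decorate a listener so completion also releases a latch
    static <T> Listener<T> latched(Listener<T> delegate, CountDownLatch latch) {
        return new Listener<T>() {
            public void onResponse(T response) {
                try { delegate.onResponse(response); } finally { latch.countDown(); }
            }
            public void onFailure(Exception e) {
                try { delegate.onFailure(e); } finally { latch.countDown(); }
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        Listener<String> listener = latched(new Listener<String>() {
            public void onResponse(String r) { System.out.println("got " + r); }
            public void onFailure(Exception e) { System.out.println("failed"); }
        }, latch);
        // Simulate the client completing the call on another thread
        new Thread(() -> listener.onResponse("ok")).start();
        System.out.println(latch.await(5, TimeUnit.SECONDS)); // prints true
    }
}
```

The latch count matches the number of expected callbacks, which is why the rethrottle test above uses `new CountDownLatch(3)` for its three async calls.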
@SuppressWarnings("unused")
public void testUpdateByQuery() throws Exception {
RestHighLevelClient client = highLevelClient();
{
@ -1021,6 +1068,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testDeleteByQuery() throws Exception {
RestHighLevelClient client = highLevelClient();
{
@ -1128,6 +1176,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testGet() throws Exception {
RestHighLevelClient client = highLevelClient();
{
@ -1442,6 +1491,7 @@ public class CRUDDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testMultiGet() throws Exception {
RestHighLevelClient client = highLevelClient();

View File

@ -55,22 +55,8 @@ import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.notNullValue;
/**
* This class is used to generate the Java Cluster API documentation.
* You need to wrap your code between two tags like:
* // tag::example
* // end::example
*
* Where example is your tag name.
*
* Then in the documentation, you can extract what is between tag and end tags with
* ["source","java",subs="attributes,callouts,macros"]
* --------------------------------------------------
* include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[example]
* --------------------------------------------------
*
* The column width of the code block is 84. If the code contains a line longer
* than 84, the line will be cut and a horizontal scroll bar will be displayed.
* (the code indentation of the tag is not included in the width)
* Documentation for Cluster APIs in the high level java client.
* Code wrapped in {@code tag} and {@code end} tags is included in the docs.
*/
public class ClusterClientDocumentationIT extends ESRestHighLevelClientTestCase {
@ -192,6 +178,7 @@ public class ClusterClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testClusterGetSettings() throws IOException {
RestHighLevelClient client = highLevelClient();
@ -257,6 +244,7 @@ public class ClusterClientDocumentationIT extends ESRestHighLevelClientTestCase
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
@SuppressWarnings("unused")
public void testClusterHealth() throws IOException {
RestHighLevelClient client = highLevelClient();
client.indices().create(new CreateIndexRequest("index"), RequestOptions.DEFAULT);

View File

@ -706,6 +706,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testGetFieldMapping() throws IOException, InterruptedException {
RestHighLevelClient client = highLevelClient();
@ -891,6 +892,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testRefreshIndex() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -959,6 +961,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testFlushIndex() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1035,6 +1038,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testSyncedFlushIndex() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1308,6 +1312,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
@SuppressWarnings("unused")
public void testForceMergeIndex() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1381,6 +1386,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testClearCache() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1527,6 +1533,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testExistsAlias() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1590,6 +1597,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testUpdateAliases() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1915,6 +1923,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
@SuppressWarnings("unused")
public void testGetAlias() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -1985,6 +1994,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testIndexPutSettings() throws Exception {
RestHighLevelClient client = highLevelClient();
@ -2315,6 +2325,7 @@ public class IndicesClientDocumentationIT extends ESRestHighLevelClientTestCase
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
@SuppressWarnings("unused")
public void testValidateQuery() throws IOException, InterruptedException {
RestHighLevelClient client = highLevelClient();

View File

@ -143,6 +143,7 @@ public class IngestClientDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testGetPipeline() throws IOException {
RestHighLevelClient client = highLevelClient();

View File

@ -20,6 +20,7 @@ package org.elasticsearch.client.documentation;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.LatchedActionListener;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
@ -29,7 +30,7 @@ import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.MachineLearningGetResultsIT;
import org.elasticsearch.client.MachineLearningIT;
import org.elasticsearch.client.MlRestTestStateCleaner;
import org.elasticsearch.client.MlTestStateCleaner;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.ml.CloseJobRequest;
@ -70,6 +71,10 @@ import org.elasticsearch.client.ml.PutDatafeedRequest;
import org.elasticsearch.client.ml.PutDatafeedResponse;
import org.elasticsearch.client.ml.PutJobRequest;
import org.elasticsearch.client.ml.PutJobResponse;
import org.elasticsearch.client.ml.StartDatafeedRequest;
import org.elasticsearch.client.ml.StartDatafeedResponse;
import org.elasticsearch.client.ml.StopDatafeedRequest;
import org.elasticsearch.client.ml.StopDatafeedResponse;
import org.elasticsearch.client.ml.UpdateJobRequest;
import org.elasticsearch.client.ml.calendars.Calendar;
import org.elasticsearch.client.ml.datafeed.ChunkingConfig;
@ -121,7 +126,7 @@ public class MlClientDocumentationIT extends ESRestHighLevelClientTestCase {
@After
public void cleanUp() throws IOException {
new MlRestTestStateCleaner(logger, client()).clearMlMetadata();
new MlTestStateCleaner(logger, highLevelClient().machineLearning()).clearMlMetadata();
}
public void testCreateJob() throws Exception {
@ -703,6 +708,120 @@ public class MlClientDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
public void testStartDatafeed() throws Exception {
RestHighLevelClient client = highLevelClient();
Job job = MachineLearningIT.buildJob("start-datafeed-job");
client.machineLearning().putJob(new PutJobRequest(job), RequestOptions.DEFAULT);
String datafeedId = job.getId() + "-feed";
String indexName = "start_data_2";
CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
createIndexRequest.mapping("doc", "timestamp", "type=date", "total", "type=long");
highLevelClient().indices().create(createIndexRequest, RequestOptions.DEFAULT);
DatafeedConfig datafeed = DatafeedConfig.builder(datafeedId, job.getId())
.setTypes(Arrays.asList("doc"))
.setIndices(indexName)
.build();
client.machineLearning().putDatafeed(new PutDatafeedRequest(datafeed), RequestOptions.DEFAULT);
client.machineLearning().openJob(new OpenJobRequest(job.getId()), RequestOptions.DEFAULT);
{
//tag::x-pack-ml-start-datafeed-request
StartDatafeedRequest request = new StartDatafeedRequest(datafeedId); // <1>
//end::x-pack-ml-start-datafeed-request
//tag::x-pack-ml-start-datafeed-request-options
request.setEnd("2018-08-21T00:00:00Z"); // <1>
request.setStart("2018-08-20T00:00:00Z"); // <2>
request.setTimeout(TimeValue.timeValueMinutes(10)); // <3>
//end::x-pack-ml-start-datafeed-request-options
//tag::x-pack-ml-start-datafeed-execute
StartDatafeedResponse response = client.machineLearning().startDatafeed(request, RequestOptions.DEFAULT);
boolean started = response.isStarted(); // <1>
//end::x-pack-ml-start-datafeed-execute
assertTrue(started);
}
{
StartDatafeedRequest request = new StartDatafeedRequest(datafeedId);
// tag::x-pack-ml-start-datafeed-listener
ActionListener<StartDatafeedResponse> listener = new ActionListener<StartDatafeedResponse>() {
@Override
public void onResponse(StartDatafeedResponse response) {
// <1>
}
@Override
public void onFailure(Exception e) {
// <2>
}
};
// end::x-pack-ml-start-datafeed-listener
// Replace the empty listener by a blocking listener in test
final CountDownLatch latch = new CountDownLatch(1);
listener = new LatchedActionListener<>(listener, latch);
// tag::x-pack-ml-start-datafeed-execute-async
client.machineLearning().startDatafeedAsync(request, RequestOptions.DEFAULT, listener); // <1>
// end::x-pack-ml-start-datafeed-execute-async
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
}
public void testStopDatafeed() throws Exception {
RestHighLevelClient client = highLevelClient();
{
//tag::x-pack-ml-stop-datafeed-request
StopDatafeedRequest request = new StopDatafeedRequest("datafeed_id1", "datafeed_id*"); // <1>
//end::x-pack-ml-stop-datafeed-request
request = StopDatafeedRequest.stopAllDatafeedsRequest();
//tag::x-pack-ml-stop-datafeed-request-options
request.setAllowNoDatafeeds(true); // <1>
request.setForce(true); // <2>
request.setTimeout(TimeValue.timeValueMinutes(10)); // <3>
//end::x-pack-ml-stop-datafeed-request-options
//tag::x-pack-ml-stop-datafeed-execute
StopDatafeedResponse response = client.machineLearning().stopDatafeed(request, RequestOptions.DEFAULT);
boolean stopped = response.isStopped(); // <1>
//end::x-pack-ml-stop-datafeed-execute
assertTrue(stopped);
}
{
StopDatafeedRequest request = StopDatafeedRequest.stopAllDatafeedsRequest();
// tag::x-pack-ml-stop-datafeed-listener
ActionListener<StopDatafeedResponse> listener = new ActionListener<StopDatafeedResponse>() {
@Override
public void onResponse(StopDatafeedResponse response) {
// <1>
}
@Override
public void onFailure(Exception e) {
// <2>
}
};
// end::x-pack-ml-stop-datafeed-listener
// Replace the empty listener by a blocking listener in test
final CountDownLatch latch = new CountDownLatch(1);
listener = new LatchedActionListener<>(listener, latch);
// tag::x-pack-ml-stop-datafeed-execute-async
client.machineLearning().stopDatafeedAsync(request, RequestOptions.DEFAULT, listener); // <1>
// end::x-pack-ml-stop-datafeed-execute-async
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
}
public void testGetBuckets() throws IOException, InterruptedException {
RestHighLevelClient client = highLevelClient();

View File

@ -27,8 +27,15 @@ import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.rollup.GetRollupJobRequest;
import org.elasticsearch.client.rollup.GetRollupJobResponse;
import org.elasticsearch.client.rollup.GetRollupJobResponse.JobWrapper;
import org.elasticsearch.client.rollup.GetRollupJobResponse.RollupIndexerJobStats;
import org.elasticsearch.client.rollup.GetRollupJobResponse.RollupJobStatus;
import org.elasticsearch.client.rollup.PutRollupJobRequest;
import org.elasticsearch.client.rollup.PutRollupJobResponse;
import org.elasticsearch.client.rollup.job.config.DateHistogramGroupConfig;
@@ -38,19 +45,26 @@ import org.elasticsearch.client.rollup.job.config.MetricConfig;
import org.elasticsearch.client.rollup.job.config.RollupJobConfig;
import org.elasticsearch.client.rollup.job.config.TermsGroupConfig;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.junit.After;
import org.junit.Before;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
import static org.hamcrest.Matchers.hasSize;
public class RollupDocumentationIT extends ESRestHighLevelClientTestCase {
@@ -160,4 +174,110 @@ public class RollupDocumentationIT extends ESRestHighLevelClientTestCase {
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
}
public void testGetRollupJob() throws Exception {
testCreateRollupJob();
RestHighLevelClient client = highLevelClient();
// tag::x-pack-rollup-get-rollup-job-request
GetRollupJobRequest getAll = new GetRollupJobRequest(); // <1>
GetRollupJobRequest getJob = new GetRollupJobRequest("job_1"); // <2>
// end::x-pack-rollup-get-rollup-job-request
// tag::x-pack-rollup-get-rollup-job-execute
GetRollupJobResponse response = client.rollup().getRollupJob(getJob, RequestOptions.DEFAULT);
// end::x-pack-rollup-get-rollup-job-execute
// tag::x-pack-rollup-get-rollup-job-response
assertThat(response.getJobs(), hasSize(1));
JobWrapper job = response.getJobs().get(0); // <1>
RollupJobConfig config = job.getJob();
RollupJobStatus status = job.getStatus();
RollupIndexerJobStats stats = job.getStats();
// end::x-pack-rollup-get-rollup-job-response
assertNotNull(config);
assertNotNull(status);
assertNotNull(stats);
// tag::x-pack-rollup-get-rollup-job-execute-listener
ActionListener<GetRollupJobResponse> listener = new ActionListener<GetRollupJobResponse>() {
@Override
public void onResponse(GetRollupJobResponse response) {
// <1>
}
@Override
public void onFailure(Exception e) {
// <2>
}
};
// end::x-pack-rollup-get-rollup-job-execute-listener
// Replace the empty listener by a blocking listener in test
final CountDownLatch latch = new CountDownLatch(1);
listener = new LatchedActionListener<>(listener, latch);
// tag::x-pack-rollup-get-rollup-job-execute-async
client.rollup().getRollupJobAsync(getJob, RequestOptions.DEFAULT, listener); // <1>
// end::x-pack-rollup-get-rollup-job-execute-async
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
@After
public void wipeRollup() throws Exception {
// TODO move this to ESRestTestCase
deleteRollupJobs();
waitForPendingRollupTasks();
}
private void deleteRollupJobs() throws Exception {
Response response = adminClient().performRequest(new Request("GET", "/_xpack/rollup/job/_all"));
Map<String, Object> jobs = entityAsMap(response);
@SuppressWarnings("unchecked")
List<Map<String, Object>> jobConfigs =
(List<Map<String, Object>>) XContentMapValues.extractValue("jobs", jobs);
if (jobConfigs == null) {
return;
}
for (Map<String, Object> jobConfig : jobConfigs) {
@SuppressWarnings("unchecked")
String jobId = (String) ((Map<String, Object>) jobConfig.get("config")).get("id");
Request request = new Request("DELETE", "/_xpack/rollup/job/" + jobId);
request.addParameter("ignore", "404"); // Ignore 404s because they imply someone was racing us to delete this
adminClient().performRequest(request);
}
}
private void waitForPendingRollupTasks() throws Exception {
assertBusy(() -> {
try {
Request request = new Request("GET", "/_cat/tasks");
request.addParameter("detailed", "true");
Response response = adminClient().performRequest(request);
try (BufferedReader responseReader = new BufferedReader(
new InputStreamReader(response.getEntity().getContent(), StandardCharsets.UTF_8))) {
int activeTasks = 0;
String line;
StringBuilder tasksListString = new StringBuilder();
while ((line = responseReader.readLine()) != null) {
// We only care about Rollup jobs, otherwise this fails too easily due to unrelated tasks
if (line.startsWith("xpack/rollup/job") == true) {
activeTasks++;
tasksListString.append(line).append('\n');
}
}
assertEquals(activeTasks + " active tasks found:\n" + tasksListString, 0, activeTasks);
}
} catch (IOException e) {
// Throw an assertion error so we retry
throw new AssertionError("Error getting active tasks list", e);
}
});
}
}


@@ -413,6 +413,7 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testSearchRequestHighlighting() throws IOException {
RestHighLevelClient client = highLevelClient();
{
@@ -831,6 +832,8 @@ public class SearchDocumentationIT extends ESRestHighLevelClientTestCase {
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
@SuppressWarnings("unused")
public void testMultiSearchTemplateWithInlineScript() throws Exception {
indexSearchTestData();
RestHighLevelClient client = highLevelClient();


@@ -24,6 +24,7 @@ import org.elasticsearch.action.LatchedActionListener;
import org.elasticsearch.client.ESRestHighLevelClientTestCase;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.security.ChangePasswordRequest;
import org.elasticsearch.client.security.DisableUserRequest;
import org.elasticsearch.client.security.EnableUserRequest;
import org.elasticsearch.client.security.PutUserRequest;
@@ -42,7 +43,7 @@ public class SecurityDocumentationIT extends ESRestHighLevelClientTestCase {
{
//tag::put-user-execute
-char[] password = new char[] { 'p', 'a', 's', 's', 'w', 'o', 'r', 'd' };
+char[] password = new char[]{'p', 'a', 's', 's', 'w', 'o', 'r', 'd'};
PutUserRequest request =
new PutUserRequest("example", password, Collections.singletonList("superuser"), null, null, true, null, RefreshPolicy.NONE);
PutUserResponse response = client.security().putUser(request, RequestOptions.DEFAULT);
@@ -56,7 +57,7 @@ public class SecurityDocumentationIT extends ESRestHighLevelClientTestCase {
}
{
-char[] password = new char[] { 'p', 'a', 's', 's', 'w', 'o', 'r', 'd' };
+char[] password = new char[]{'p', 'a', 's', 's', 'w', 'o', 'r', 'd'};
PutUserRequest request = new PutUserRequest("example2", password, Collections.singletonList("superuser"), null, null, true,
null, RefreshPolicy.NONE);
// tag::put-user-execute-listener
@@ -173,4 +174,48 @@ public class SecurityDocumentationIT extends ESRestHighLevelClientTestCase {
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
}
public void testChangePassword() throws Exception {
RestHighLevelClient client = highLevelClient();
char[] password = new char[]{'p', 'a', 's', 's', 'w', 'o', 'r', 'd'};
char[] newPassword = new char[]{'n', 'e', 'w', 'p', 'a', 's', 's', 'w', 'o', 'r', 'd'};
PutUserRequest putUserRequest = new PutUserRequest("change_password_user", password, Collections.singletonList("superuser"),
null, null, true, null, RefreshPolicy.NONE);
PutUserResponse putUserResponse = client.security().putUser(putUserRequest, RequestOptions.DEFAULT);
assertTrue(putUserResponse.isCreated());
{
//tag::change-password-execute
ChangePasswordRequest request = new ChangePasswordRequest("change_password_user", newPassword, RefreshPolicy.NONE);
EmptyResponse response = client.security().changePassword(request, RequestOptions.DEFAULT);
//end::change-password-execute
assertNotNull(response);
}
{
//tag::change-password-execute-listener
ChangePasswordRequest request = new ChangePasswordRequest("change_password_user", password, RefreshPolicy.NONE);
ActionListener<EmptyResponse> listener = new ActionListener<EmptyResponse>() {
@Override
public void onResponse(EmptyResponse emptyResponse) {
// <1>
}
@Override
public void onFailure(Exception e) {
// <2>
}
};
//end::change-password-execute-listener
// Replace the empty listener by a blocking listener in test
final CountDownLatch latch = new CountDownLatch(1);
listener = new LatchedActionListener<>(listener, latch);
//tag::change-password-execute-async
client.security().changePasswordAsync(request, RequestOptions.DEFAULT, listener); // <1>
//end::change-password-execute-async
assertTrue(latch.await(30L, TimeUnit.SECONDS));
}
}
}


@@ -577,6 +577,7 @@ public class SnapshotClientDocumentationIT extends ESRestHighLevelClientTestCase
}
}
@SuppressWarnings("unused")
public void testSnapshotGetSnapshots() throws IOException {
RestHighLevelClient client = highLevelClient();


@@ -66,6 +66,7 @@ import static org.hamcrest.Matchers.equalTo;
*/
public class StoredScriptsDocumentationIT extends ESRestHighLevelClientTestCase {
@SuppressWarnings("unused")
public void testGetStoredScript() throws Exception {
RestHighLevelClient client = highLevelClient();
@@ -128,6 +129,7 @@ public class StoredScriptsDocumentationIT extends ESRestHighLevelClientTestCase
}
@SuppressWarnings("unused")
public void testDeleteStoredScript() throws Exception {
RestHighLevelClient client = highLevelClient();


@@ -66,6 +66,7 @@ import static org.hamcrest.Matchers.notNullValue;
*/
public class TasksClientDocumentationIT extends ESRestHighLevelClientTestCase {
@SuppressWarnings("unused")
public void testListTasks() throws IOException {
RestHighLevelClient client = highLevelClient();
{
@@ -149,6 +150,7 @@ public class TasksClientDocumentationIT extends ESRestHighLevelClientTestCase {
}
}
@SuppressWarnings("unused")
public void testCancelTasks() throws IOException {
RestHighLevelClient client = highLevelClient();
{


@@ -0,0 +1,61 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.client.ml.datafeed.DatafeedConfigTests;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractXContentTestCase;
import java.io.IOException;
public class StartDatafeedRequestTests extends AbstractXContentTestCase<StartDatafeedRequest> {
public static StartDatafeedRequest createRandomInstance(String datafeedId) {
StartDatafeedRequest request = new StartDatafeedRequest(datafeedId);
if (randomBoolean()) {
request.setStart(String.valueOf(randomLongBetween(1, 1000)));
}
if (randomBoolean()) {
request.setEnd(String.valueOf(randomLongBetween(1, 1000)));
}
if (randomBoolean()) {
request.setTimeout(TimeValue.timeValueMinutes(randomLongBetween(1, 1000)));
}
return request;
}
@Override
protected StartDatafeedRequest createTestInstance() {
String datafeedId = DatafeedConfigTests.randomValidDatafeedId();
return createRandomInstance(datafeedId);
}
@Override
protected StartDatafeedRequest doParseInstance(XContentParser parser) throws IOException {
return StartDatafeedRequest.PARSER.parse(parser, null);
}
@Override
protected boolean supportsUnknownFields() {
return false;
}
}


@@ -16,36 +16,27 @@
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractXContentTestCase;
import java.io.IOException;
public class StartDatafeedResponseTests extends AbstractXContentTestCase<StartDatafeedResponse> {
@Override
protected StartDatafeedResponse createTestInstance() {
return new StartDatafeedResponse(randomBoolean());
}
@Override
protected StartDatafeedResponse doParseInstance(XContentParser parser) throws IOException {
return StartDatafeedResponse.fromXContent(parser);
}
@Override
protected boolean supportsUnknownFields() {
return true;
}
}
package org.elasticsearch.index.reindex;
import org.elasticsearch.script.ExecutableScript;
import java.util.Map;
import java.util.function.Consumer;
public class SimpleExecutableScript implements ExecutableScript {
private final Consumer<Map<String, Object>> script;
private Map<String, Object> ctx;
public SimpleExecutableScript(Consumer<Map<String, Object>> script) {
this.script = script;
}
@Override
public Object run() {
script.accept(ctx);
return null;
}
@Override
@SuppressWarnings("unchecked")
public void setNextVar(String name, Object value) {
if ("ctx".equals(name)) {
ctx = (Map<String, Object>) value;
} else {
throw new IllegalArgumentException("Unsupported var [" + name + "]");
}
}
}


@@ -0,0 +1,81 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractXContentTestCase;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class StopDatafeedRequestTests extends AbstractXContentTestCase<StopDatafeedRequest> {
public void testCloseAllDatafeedsRequest() {
StopDatafeedRequest request = StopDatafeedRequest.stopAllDatafeedsRequest();
assertEquals(request.getDatafeedIds().size(), 1);
assertEquals(request.getDatafeedIds().get(0), "_all");
}
public void testWithNullDatafeedIds() {
Exception exception = expectThrows(IllegalArgumentException.class, StopDatafeedRequest::new);
assertEquals(exception.getMessage(), "datafeedIds must not be empty");
exception = expectThrows(NullPointerException.class, () -> new StopDatafeedRequest("datafeed1", null));
assertEquals(exception.getMessage(), "datafeedIds must not contain null values");
}
@Override
protected StopDatafeedRequest createTestInstance() {
int datafeedCount = randomIntBetween(1, 10);
List<String> datafeedIds = new ArrayList<>(datafeedCount);
for (int i = 0; i < datafeedCount; i++) {
datafeedIds.add(randomAlphaOfLength(10));
}
StopDatafeedRequest request = new StopDatafeedRequest(datafeedIds.toArray(new String[0]));
if (randomBoolean()) {
request.setAllowNoDatafeeds(randomBoolean());
}
if (randomBoolean()) {
request.setTimeout(TimeValue.timeValueMinutes(randomIntBetween(1, 10)));
}
if (randomBoolean()) {
request.setForce(randomBoolean());
}
return request;
}
@Override
protected StopDatafeedRequest doParseInstance(XContentParser parser) throws IOException {
return StopDatafeedRequest.PARSER.parse(parser, null);
}
@Override
protected boolean supportsUnknownFields() {
return false;
}
}


@@ -16,27 +16,27 @@
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.ml;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractXContentTestCase;
import java.io.IOException;
public class StopDatafeedResponseTests extends AbstractXContentTestCase<StopDatafeedResponse> {
@Override
protected StopDatafeedResponse createTestInstance() {
return new StopDatafeedResponse(randomBoolean());
}
@Override
protected StopDatafeedResponse doParseInstance(XContentParser parser) throws IOException {
return StopDatafeedResponse.fromXContent(parser);
}
@Override
protected boolean supportsUnknownFields() {
return true;
}
}
package org.elasticsearch.index.translog;
import org.elasticsearch.cli.LoggingAwareMultiCommand;
import org.elasticsearch.cli.Terminal;
import org.elasticsearch.index.shard.RemoveCorruptedShardDataCommand;
/**
* Class encapsulating and dispatching commands from the {@code elasticsearch-translog} command line tool
*/
@Deprecated
public class TranslogToolCli extends LoggingAwareMultiCommand {
private TranslogToolCli() {
// that's only for 6.x branch for bwc with elasticsearch-translog
super("A CLI tool for various Elasticsearch translog actions");
subcommands.put("truncate", new RemoveCorruptedShardDataCommand(true));
}
public static void main(String[] args) throws Exception {
exit(new TranslogToolCli().main(args, Terminal.DEFAULT));
}
}


@@ -0,0 +1,33 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.rollup;
import org.elasticsearch.test.ESTestCase;
public class GetRollupJobRequestTests extends ESTestCase {
public void testRequiresJob() {
final NullPointerException e = expectThrows(NullPointerException.class, () -> new GetRollupJobRequest(null));
assertEquals("jobId is required", e.getMessage());
}
public void testDoNotUseAll() {
final IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> new GetRollupJobRequest("_all"));
assertEquals("use the default ctor to ask for all jobs", e.getMessage());
}
}


@@ -0,0 +1,120 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.rollup;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.client.rollup.GetRollupJobResponse.IndexerState;
import org.elasticsearch.client.rollup.GetRollupJobResponse.JobWrapper;
import org.elasticsearch.client.rollup.GetRollupJobResponse.RollupIndexerJobStats;
import org.elasticsearch.client.rollup.GetRollupJobResponse.RollupJobStatus;
import org.elasticsearch.client.rollup.job.config.RollupJobConfigTests;
import org.elasticsearch.test.ESTestCase;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import static org.elasticsearch.test.AbstractXContentTestCase.xContentTester;
public class GetRollupJobResponseTests extends ESTestCase {
public void testFromXContent() throws IOException {
xContentTester(
this::createParser,
this::createTestInstance,
this::toXContent,
GetRollupJobResponse::fromXContent)
.supportsUnknownFields(true)
.randomFieldsExcludeFilter(field ->
field.endsWith("status.current_position"))
.test();
}
private GetRollupJobResponse createTestInstance() {
int jobCount = between(1, 5);
List<JobWrapper> jobs = new ArrayList<>();
for (int j = 0; j < jobCount; j++) {
jobs.add(new JobWrapper(
RollupJobConfigTests.randomRollupJobConfig(randomAlphaOfLength(5)),
randomStats(),
randomStatus()));
}
return new GetRollupJobResponse(jobs);
}
private RollupIndexerJobStats randomStats() {
return new RollupIndexerJobStats(randomLong(), randomLong(), randomLong(), randomLong());
}
private RollupJobStatus randomStatus() {
Map<String, Object> currentPosition = new HashMap<>();
int positions = between(0, 10);
while (currentPosition.size() < positions) {
currentPosition.put(randomAlphaOfLength(2), randomAlphaOfLength(2));
}
return new RollupJobStatus(
randomFrom(IndexerState.values()),
currentPosition,
randomBoolean());
}
private void toXContent(GetRollupJobResponse response, XContentBuilder builder) throws IOException {
ToXContent.Params params = ToXContent.EMPTY_PARAMS;
builder.startObject();
builder.startArray(GetRollupJobResponse.JOBS.getPreferredName());
for (JobWrapper job : response.getJobs()) {
toXContent(job, builder, params);
}
builder.endArray();
builder.endObject();
}
private void toXContent(JobWrapper jobWrapper, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field(GetRollupJobResponse.CONFIG.getPreferredName());
jobWrapper.getJob().toXContent(builder, params);
builder.field(GetRollupJobResponse.STATUS.getPreferredName());
toXContent(jobWrapper.getStatus(), builder, params);
builder.field(GetRollupJobResponse.STATS.getPreferredName());
toXContent(jobWrapper.getStats(), builder, params);
builder.endObject();
}
public void toXContent(RollupJobStatus status, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field(GetRollupJobResponse.STATE.getPreferredName(), status.getState().value());
if (status.getCurrentPosition() != null) {
builder.field(GetRollupJobResponse.CURRENT_POSITION.getPreferredName(), status.getCurrentPosition());
}
builder.field(GetRollupJobResponse.UPGRADED_DOC_ID.getPreferredName(), status.getUpgradedDocumentId());
builder.endObject();
}
public void toXContent(RollupIndexerJobStats stats, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field(GetRollupJobResponse.NUM_PAGES.getPreferredName(), stats.getNumPages());
builder.field(GetRollupJobResponse.NUM_INPUT_DOCUMENTS.getPreferredName(), stats.getNumDocuments());
builder.field(GetRollupJobResponse.NUM_OUTPUT_DOCUMENTS.getPreferredName(), stats.getOutputDocuments());
builder.field(GetRollupJobResponse.NUM_INVOCATIONS.getPreferredName(), stats.getNumInvocations());
builder.endObject();
}
}


@@ -0,0 +1,97 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl;
import org.elasticsearch.client.security.support.expressiondsl.expressions.AllRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.expressions.AnyRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.expressions.ExceptRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.fields.FieldRoleMapperExpression;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.test.ESTestCase;
import java.io.IOException;
import java.util.Date;
import static org.hamcrest.Matchers.equalTo;
public class RoleMapperExpressionDslTests extends ESTestCase {
public void testRoleMapperExpressionToXContentType() throws IOException {
final RoleMapperExpression allExpression = AllRoleMapperExpression.builder()
.addExpression(AnyRoleMapperExpression.builder()
.addExpression(FieldRoleMapperExpression.ofDN("*,ou=admin,dc=example,dc=com"))
.addExpression(FieldRoleMapperExpression.ofUsername("es-admin", "es-system"))
.build())
.addExpression(FieldRoleMapperExpression.ofGroups("cn=people,dc=example,dc=com"))
.addExpression(new ExceptRoleMapperExpression(FieldRoleMapperExpression.ofMetadata("metadata.terminated_date", new Date(
1537145401027L))))
.build();
final XContentBuilder builder = XContentFactory.jsonBuilder();
allExpression.toXContent(builder, ToXContent.EMPTY_PARAMS);
final String output = Strings.toString(builder);
final String expected =
"{"+
"\"all\":["+
"{"+
"\"any\":["+
"{"+
"\"field\":{"+
"\"dn\":[\"*,ou=admin,dc=example,dc=com\"]"+
"}"+
"},"+
"{"+
"\"field\":{"+
"\"username\":["+
"\"es-admin\","+
"\"es-system\""+
"]"+
"}"+
"}"+
"]"+
"},"+
"{"+
"\"field\":{"+
"\"groups\":[\"cn=people,dc=example,dc=com\"]"+
"}"+
"},"+
"{"+
"\"except\":{"+
"\"field\":{"+
"\"metadata.terminated_date\":[\"2018-09-17T00:50:01.027Z\"]"+
"}"+
"}"+
"}"+
"]"+
"}";
assertThat(expected, equalTo(output));
}
public void testFieldRoleMapperExpressionThrowsExceptionForMissingMetadataPrefix() {
final IllegalArgumentException ile = expectThrows(IllegalArgumentException.class, () -> FieldRoleMapperExpression.ofMetadata(
"terminated_date", new Date(1537145401027L)));
assertThat(ile.getMessage(), equalTo("metadata key must have prefix 'metadata.'"));
}
}


@@ -0,0 +1,129 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.security.support.expressiondsl.parser;
import org.elasticsearch.client.security.support.expressiondsl.RoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.expressions.CompositeRoleMapperExpression;
import org.elasticsearch.client.security.support.expressiondsl.fields.FieldRoleMapperExpression;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.xcontent.DeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.test.ESTestCase;
import java.io.IOException;
import java.util.Collections;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.iterableWithSize;
public class RoleMapperExpressionParserTests extends ESTestCase {
public void testParseSimpleFieldExpression() throws Exception {
String json = "{ \"field\": { \"username\" : [\"*@shield.gov\"] } }";
FieldRoleMapperExpression field = checkExpressionType(parse(json), FieldRoleMapperExpression.class);
assertThat(field.getField(), equalTo("username"));
assertThat(field.getValues(), iterableWithSize(1));
assertThat(field.getValues().get(0), equalTo("*@shield.gov"));
assertThat(toJson(field), equalTo(json.replaceAll("\\s", "")));
}
public void testParseComplexExpression() throws Exception {
String json = "{ \"any\": [" +
" { \"field\": { \"username\" : \"*@shield.gov\" } }, " +
" { \"all\": [" +
" { \"field\": { \"username\" : \"/.*\\\\@avengers\\\\.(net|org)/\" } }, " +
" { \"field\": { \"groups\" : [ \"admin\", \"operators\" ] } }, " +
" { \"except\":" +
" { \"field\": { \"groups\" : \"disavowed\" } }" +
" }" +
" ] }" +
"] }";
final RoleMapperExpression expr = parse(json);
assertThat(expr, instanceOf(CompositeRoleMapperExpression.class));
CompositeRoleMapperExpression any = (CompositeRoleMapperExpression) expr;
assertThat(any.getElements(), iterableWithSize(2));
final FieldRoleMapperExpression fieldShield = checkExpressionType(any.getElements().get(0),
FieldRoleMapperExpression.class);
assertThat(fieldShield.getField(), equalTo("username"));
assertThat(fieldShield.getValues(), iterableWithSize(1));
assertThat(fieldShield.getValues().get(0), equalTo("*@shield.gov"));
final CompositeRoleMapperExpression all = checkExpressionType(any.getElements().get(1),
CompositeRoleMapperExpression.class);
assertThat(all.getElements(), iterableWithSize(3));
final FieldRoleMapperExpression fieldAvengers = checkExpressionType(all.getElements().get(0),
FieldRoleMapperExpression.class);
assertThat(fieldAvengers.getField(), equalTo("username"));
assertThat(fieldAvengers.getValues(), iterableWithSize(1));
assertThat(fieldAvengers.getValues().get(0), equalTo("/.*\\@avengers\\.(net|org)/"));
final FieldRoleMapperExpression fieldGroupsAdmin = checkExpressionType(all.getElements().get(1),
FieldRoleMapperExpression.class);
assertThat(fieldGroupsAdmin.getField(), equalTo("groups"));
assertThat(fieldGroupsAdmin.getValues(), iterableWithSize(2));
assertThat(fieldGroupsAdmin.getValues().get(0), equalTo("admin"));
assertThat(fieldGroupsAdmin.getValues().get(1), equalTo("operators"));
final CompositeRoleMapperExpression except = checkExpressionType(all.getElements().get(2),
CompositeRoleMapperExpression.class);
final FieldRoleMapperExpression fieldDisavowed = checkExpressionType(except.getElements().get(0),
FieldRoleMapperExpression.class);
assertThat(fieldDisavowed.getField(), equalTo("groups"));
assertThat(fieldDisavowed.getValues(), iterableWithSize(1));
assertThat(fieldDisavowed.getValues().get(0), equalTo("disavowed"));
}
private String toJson(final RoleMapperExpression expr) throws IOException {
final XContentBuilder builder = XContentFactory.jsonBuilder();
expr.toXContent(builder, ToXContent.EMPTY_PARAMS);
final String output = Strings.toString(builder);
return output;
}
private <T> T checkExpressionType(RoleMapperExpression expr, Class<T> type) {
assertThat(expr, instanceOf(type));
return type.cast(expr);
}
private RoleMapperExpression parse(String json) throws IOException {
return new RoleMapperExpressionParser().parse("rules", XContentType.JSON.xContent().createParser(new NamedXContentRegistry(
Collections.emptyList()), new DeprecationHandler() {
@Override
public void usedDeprecatedName(String usedName, String modernName) {
}
@Override
public void usedDeprecatedField(String usedName, String replacedWith) {
}
}, json));
}
}


@@ -26,7 +26,11 @@ import org.apache.http.HttpResponse;
import org.apache.http.RequestLine;
import org.apache.http.StatusLine;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* Holds an elasticsearch response. It wraps the {@link HttpResponse} returned and associates it with
@@ -96,6 +100,46 @@ public class Response {
return response.getEntity();
}
private static final Pattern WARNING_HEADER_PATTERN = Pattern.compile(
"299 " + // warn code
"Elasticsearch-\\d+\\.\\d+\\.\\d+(?:-(?:alpha|beta|rc)\\d+)?(?:-SNAPSHOT)?-(?:[a-f0-9]{7}|Unknown) " + // warn agent
"\"((?:\t| |!|[\\x23-\\x5B]|[\\x5D-\\x7E]|[\\x80-\\xFF]|\\\\|\\\\\")*)\" " + // quoted warning value, captured
// quoted RFC 1123 date format
"\"" + // opening quote
"(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun), " + // weekday
"\\d{2} " + // 2-digit day
"(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) " + // month
"\\d{4} " + // 4-digit year
"\\d{2}:\\d{2}:\\d{2} " + // (two-digit hour):(two-digit minute):(two-digit second)
"GMT" + // GMT
"\""); // closing quote
/**
* Returns a list of all warning headers returned in the response.
*/
public List<String> getWarnings() {
List<String> warnings = new ArrayList<>();
for (Header header : response.getHeaders("Warning")) {
String warning = header.getValue();
final Matcher matcher = WARNING_HEADER_PATTERN.matcher(warning);
if (matcher.matches()) {
warnings.add(matcher.group(1));
continue;
}
warnings.add(warning);
}
return warnings;
}
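For reference, the pattern above is built to match the whole RFC 7234-style warning header that Elasticsearch emits: warn code, warn agent, quoted warning value, quoted RFC 1123 date. The standalone sketch below (hypothetical class name and sample header text; plain `java.util.regex` only, not the actual `Response` class) shows the quoted warning value being captured, with a fallback to the raw header exactly as `getWarnings()` does:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WarningHeaderDemo {
    // Same shape as the pattern above: warn code, warn agent,
    // quoted warning value (captured), then a quoted RFC 1123 date.
    static final Pattern WARNING_HEADER_PATTERN = Pattern.compile(
        "299 " +
        "Elasticsearch-\\d+\\.\\d+\\.\\d+(?:-(?:alpha|beta|rc)\\d+)?(?:-SNAPSHOT)?-(?:[a-f0-9]{7}|Unknown) " +
        "\"((?:\t| |!|[\\x23-\\x5B]|[\\x5D-\\x7E]|[\\x80-\\xFF]|\\\\|\\\\\")*)\" " +
        "\"(?:Mon|Tue|Wed|Thu|Fri|Sat|Sun), " +
        "\\d{2} (?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) " +
        "\\d{4} \\d{2}:\\d{2}:\\d{2} GMT\"");

    // Returns the bare warning message when the header is well-formed,
    // otherwise falls back to the raw header, mirroring getWarnings().
    static String extractWarning(String header) {
        Matcher matcher = WARNING_HEADER_PATTERN.matcher(header);
        return matcher.matches() ? matcher.group(1) : header;
    }

    public static void main(String[] args) {
        String header = "299 Elasticsearch-6.5.0-SNAPSHOT-abc1234 "
            + "\"[types removal] this request uses types\" "
            + "\"Mon, 01 Oct 2018 12:00:00 GMT\"";
        // prints the bare message: [types removal] this request uses types
        System.out.println(extractWarning(header));
    }
}
```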
/**
* Returns true if there is at least one warning header returned in the
* response.
*/
public boolean hasWarnings() {
Header[] warnings = response.getHeaders("Warning");
return warnings != null && warnings.length > 0;
}
HttpResponse getHttpResponse() {
return response;
}


@@ -58,6 +58,10 @@ public final class ResponseException extends IOException {
response.getStatusLine().toString()
);
if (response.hasWarnings()) {
message += "\n" + "Warnings: " + response.getWarnings();
}
HttpEntity entity = response.getEntity();
if (entity != null) {
if (entity.isRepeatable() == false) {


@@ -110,15 +110,17 @@ public class RestClient implements Closeable {
private final FailureListener failureListener;
private final NodeSelector nodeSelector;
private volatile NodeTuple<List<Node>> nodeTuple;
private final boolean strictDeprecationMode;
RestClient(CloseableHttpAsyncClient client, long maxRetryTimeoutMillis, Header[] defaultHeaders,
List<Node> nodes, String pathPrefix, FailureListener failureListener, NodeSelector nodeSelector) {
RestClient(CloseableHttpAsyncClient client, long maxRetryTimeoutMillis, Header[] defaultHeaders, List<Node> nodes, String pathPrefix,
FailureListener failureListener, NodeSelector nodeSelector, boolean strictDeprecationMode) {
this.client = client;
this.maxRetryTimeoutMillis = maxRetryTimeoutMillis;
this.defaultHeaders = Collections.unmodifiableList(Arrays.asList(defaultHeaders));
this.failureListener = failureListener;
this.pathPrefix = pathPrefix;
this.nodeSelector = nodeSelector;
this.strictDeprecationMode = strictDeprecationMode;
setNodes(nodes);
}
@@ -296,7 +298,11 @@ public class RestClient implements Closeable {
Response response = new Response(request.getRequestLine(), node.getHost(), httpResponse);
if (isSuccessfulResponse(statusCode) || ignoreErrorCodes.contains(response.getStatusLine().getStatusCode())) {
onResponse(node);
listener.onSuccess(response);
if (strictDeprecationMode && response.hasWarnings()) {
listener.onDefinitiveFailure(new ResponseException(response));
} else {
listener.onSuccess(response);
}
} else {
ResponseException responseException = new ResponseException(response);
if (isRetryStatus(statusCode)) {


@@ -56,6 +56,7 @@ public final class RestClientBuilder {
private RequestConfigCallback requestConfigCallback;
private String pathPrefix;
private NodeSelector nodeSelector = NodeSelector.ANY;
private boolean strictDeprecationMode = false;
/**
* Creates a new builder instance and sets the hosts that the client will send requests to.
@@ -185,6 +186,15 @@ public final class RestClientBuilder {
return this;
}
/**
* Whether the REST client should return any response containing at least
* one warning header as a failure.
*/
public RestClientBuilder setStrictDeprecationMode(boolean strictDeprecationMode) {
this.strictDeprecationMode = strictDeprecationMode;
return this;
}
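The flag set here drives the branch added in RestClient: a 2xx response that carries at least one Warning header becomes a definitive failure. That decision reduces to a small predicate; the dependency-free sketch below (hypothetical class and method names, just to illustrate the truth table, not the real client API):

```java
public class StrictDeprecationDemo {
    // Mirrors the RestClient branch: a successful response is treated as a
    // failure only when strict mode is on AND the response carried warnings.
    static boolean treatAsFailure(boolean strictDeprecationMode, boolean hasWarnings) {
        return strictDeprecationMode && hasWarnings;
    }

    public static void main(String[] args) {
        // prints "true": strict mode rejects responses with warnings
        System.out.println(treatAsFailure(true, true));
        // prints "false": lenient mode passes them through unchanged
        System.out.println(treatAsFailure(false, true));
    }
}
```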
/**
* Creates a new {@link RestClient} based on the provided configuration.
*/
@@ -199,7 +209,7 @@ public final class RestClientBuilder {
}
});
RestClient restClient = new RestClient(httpClient, maxRetryTimeout, defaultHeaders, nodes,
pathPrefix, failureListener, nodeSelector);
pathPrefix, failureListener, nodeSelector, strictDeprecationMode);
httpClient.start();
return restClient;
}


@@ -115,7 +115,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
}
nodes = Collections.unmodifiableList(nodes);
failureListener = new HostsTrackingFailureListener();
return new RestClient(httpClient, 10000, new Header[0], nodes, null, failureListener, nodeSelector);
return new RestClient(httpClient, 10000, new Header[0], nodes, null, failureListener, nodeSelector, false);
}
/**


@@ -148,7 +148,7 @@ public class RestClientSingleHostTests extends RestClientTestCase {
node = new Node(new HttpHost("localhost", 9200));
failureListener = new HostsTrackingFailureListener();
restClient = new RestClient(httpClient, 10000, defaultHeaders,
singletonList(node), null, failureListener, NodeSelector.ANY);
singletonList(node), null, failureListener, NodeSelector.ANY, false);
}
/**


@@ -57,7 +57,7 @@ public class RestClientTests extends RestClientTestCase {
public void testCloseIsIdempotent() throws IOException {
List<Node> nodes = singletonList(new Node(new HttpHost("localhost", 9200)));
CloseableHttpAsyncClient closeableHttpAsyncClient = mock(CloseableHttpAsyncClient.class);
RestClient restClient = new RestClient(closeableHttpAsyncClient, 1_000, new Header[0], nodes, null, null, null);
RestClient restClient = new RestClient(closeableHttpAsyncClient, 1_000, new Header[0], nodes, null, null, null, false);
restClient.close();
verify(closeableHttpAsyncClient, times(1)).close();
restClient.close();
@@ -345,7 +345,7 @@ public class RestClientTests extends RestClientTestCase {
private static RestClient createRestClient() {
List<Node> nodes = Collections.singletonList(new Node(new HttpHost("localhost", 9200)));
return new RestClient(mock(CloseableHttpAsyncClient.class), randomLongBetween(1_000, 30_000),
new Header[] {}, nodes, null, null, null);
new Header[] {}, nodes, null, null, null, false);
}
public void testRoundRobin() throws IOException {


@@ -45,6 +45,7 @@ import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseListener;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestClientBuilder.HttpClientConfigCallback;
import javax.net.ssl.SSLContext;
import java.io.IOException;
@@ -93,8 +94,8 @@ public class RestClientDocumentation {
//tag::rest-client-init
RestClient restClient = RestClient.builder(
new HttpHost("localhost", 9200, "http"),
new HttpHost("localhost", 9201, "http")).build();
new HttpHost("localhost", 9200, "http"),
new HttpHost("localhost", 9201, "http")).build();
//end::rest-client-init
//tag::rest-client-close
@@ -103,26 +104,30 @@ public class RestClientDocumentation {
{
//tag::rest-client-init-default-headers
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
Header[] defaultHeaders = new Header[]{new BasicHeader("header", "value")};
builder.setDefaultHeaders(defaultHeaders); // <1>
//end::rest-client-init-default-headers
}
{
//tag::rest-client-init-max-retry-timeout
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
builder.setMaxRetryTimeoutMillis(10000); // <1>
//end::rest-client-init-max-retry-timeout
}
{
//tag::rest-client-init-node-selector
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
builder.setNodeSelector(NodeSelector.SKIP_DEDICATED_MASTERS); // <1>
//end::rest-client-init-node-selector
}
{
//tag::rest-client-init-allocation-aware-selector
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
builder.setNodeSelector(new NodeSelector() { // <1>
@Override
public void select(Iterable<Node> nodes) {
@@ -155,7 +160,8 @@ public class RestClientDocumentation {
}
{
//tag::rest-client-init-failure-listener
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
builder.setFailureListener(new RestClient.FailureListener() {
@Override
public void onFailure(Node node) {
@@ -166,24 +172,30 @@ public class RestClientDocumentation {
}
{
//tag::rest-client-init-request-config-callback
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
builder.setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
@Override
public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder requestConfigBuilder) {
return requestConfigBuilder.setSocketTimeout(10000); // <1>
}
});
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
builder.setRequestConfigCallback(
new RestClientBuilder.RequestConfigCallback() {
@Override
public RequestConfig.Builder customizeRequestConfig(
RequestConfig.Builder requestConfigBuilder) {
return requestConfigBuilder.setSocketTimeout(10000); // <1>
}
});
//end::rest-client-init-request-config-callback
}
{
//tag::rest-client-init-client-config-callback
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"));
builder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setProxy(new HttpHost("proxy", 9000, "http")); // <1>
}
});
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "http"));
builder.setHttpClientConfigCallback(new HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(
HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setProxy(
new HttpHost("proxy", 9000, "http")); // <1>
}
});
//end::rest-client-init-client-config-callback
}
@@ -281,58 +293,74 @@ public class RestClientDocumentation {
public void testCommonConfiguration() throws Exception {
{
//tag::rest-client-config-timeouts
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200))
.setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200))
.setRequestConfigCallback(
new RestClientBuilder.RequestConfigCallback() {
@Override
public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder requestConfigBuilder) {
return requestConfigBuilder.setConnectTimeout(5000)
.setSocketTimeout(60000);
public RequestConfig.Builder customizeRequestConfig(
RequestConfig.Builder requestConfigBuilder) {
return requestConfigBuilder
.setConnectTimeout(5000)
.setSocketTimeout(60000);
}
})
.setMaxRetryTimeoutMillis(60000);
.setMaxRetryTimeoutMillis(60000);
//end::rest-client-config-timeouts
}
{
//tag::rest-client-config-threads
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200))
.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setDefaultIOReactorConfig(
IOReactorConfig.custom().setIoThreadCount(1).build());
}
});
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200))
.setHttpClientConfigCallback(new HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(
HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setDefaultIOReactorConfig(
IOReactorConfig.custom()
.setIoThreadCount(1)
.build());
}
});
//end::rest-client-config-threads
}
{
//tag::rest-client-config-basic-auth
final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
final CredentialsProvider credentialsProvider =
new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials("user", "password"));
new UsernamePasswordCredentials("user", "password"));
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200))
.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
}
});
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200))
.setHttpClientConfigCallback(new HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(
HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder
.setDefaultCredentialsProvider(credentialsProvider);
}
});
//end::rest-client-config-basic-auth
}
{
//tag::rest-client-config-disable-preemptive-auth
final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
final CredentialsProvider credentialsProvider =
new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials("user", "password"));
new UsernamePasswordCredentials("user", "password"));
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200))
.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
httpClientBuilder.disableAuthCaching(); // <1>
return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
}
});
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200))
.setHttpClientConfigCallback(new HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(
HttpAsyncClientBuilder httpClientBuilder) {
httpClientBuilder.disableAuthCaching(); // <1>
return httpClientBuilder
.setDefaultCredentialsProvider(credentialsProvider);
}
});
//end::rest-client-config-disable-preemptive-auth
}
{
@@ -343,15 +371,18 @@ public class RestClientDocumentation {
try (InputStream is = Files.newInputStream(keyStorePath)) {
truststore.load(is, keyStorePass.toCharArray());
}
SSLContextBuilder sslBuilder = SSLContexts.custom().loadTrustMaterial(truststore, null);
SSLContextBuilder sslBuilder = SSLContexts.custom()
.loadTrustMaterial(truststore, null);
final SSLContext sslContext = sslBuilder.build();
RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "https"))
.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setSSLContext(sslContext);
}
});
RestClientBuilder builder = RestClient.builder(
new HttpHost("localhost", 9200, "https"))
.setHttpClientConfigCallback(new HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(
HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setSSLContext(sslContext);
}
});
//end::rest-client-config-encrypted-communication
}
}


@@ -56,8 +56,8 @@ public class SnifferDocumentation {
{
//tag::sniffer-init
RestClient restClient = RestClient.builder(
new HttpHost("localhost", 9200, "http"))
.build();
new HttpHost("localhost", 9200, "http"))
.build();
Sniffer sniffer = Sniffer.builder(restClient).build();
//end::sniffer-init
@@ -69,21 +69,23 @@ public class SnifferDocumentation {
{
//tag::sniffer-interval
RestClient restClient = RestClient.builder(
new HttpHost("localhost", 9200, "http"))
.build();
new HttpHost("localhost", 9200, "http"))
.build();
Sniffer sniffer = Sniffer.builder(restClient)
.setSniffIntervalMillis(60000).build();
.setSniffIntervalMillis(60000).build();
//end::sniffer-interval
}
{
//tag::sniff-on-failure
SniffOnFailureListener sniffOnFailureListener = new SniffOnFailureListener();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200))
.setFailureListener(sniffOnFailureListener) // <1>
.build();
SniffOnFailureListener sniffOnFailureListener =
new SniffOnFailureListener();
RestClient restClient = RestClient.builder(
new HttpHost("localhost", 9200))
.setFailureListener(sniffOnFailureListener) // <1>
.build();
Sniffer sniffer = Sniffer.builder(restClient)
.setSniffAfterFailureDelayMillis(30000) // <2>
.build();
.setSniffAfterFailureDelayMillis(30000) // <2>
.build();
sniffOnFailureListener.setSniffer(sniffer); // <3>
//end::sniff-on-failure
}
@@ -103,29 +105,29 @@ public class SnifferDocumentation {
{
//tag::sniff-request-timeout
RestClient restClient = RestClient.builder(
new HttpHost("localhost", 9200, "http"))
.build();
new HttpHost("localhost", 9200, "http"))
.build();
NodesSniffer nodesSniffer = new ElasticsearchNodesSniffer(
restClient,
TimeUnit.SECONDS.toMillis(5),
ElasticsearchNodesSniffer.Scheme.HTTP);
restClient,
TimeUnit.SECONDS.toMillis(5),
ElasticsearchNodesSniffer.Scheme.HTTP);
Sniffer sniffer = Sniffer.builder(restClient)
.setNodesSniffer(nodesSniffer).build();
.setNodesSniffer(nodesSniffer).build();
//end::sniff-request-timeout
}
{
//tag::custom-nodes-sniffer
RestClient restClient = RestClient.builder(
new HttpHost("localhost", 9200, "http"))
.build();
new HttpHost("localhost", 9200, "http"))
.build();
NodesSniffer nodesSniffer = new NodesSniffer() {
@Override
public List<Node> sniff() throws IOException {
return null; // <1>
}
};
@Override
public List<Node> sniff() throws IOException {
return null; // <1>
}
};
Sniffer sniffer = Sniffer.builder(restClient)
.setNodesSniffer(nodesSniffer).build();
.setNodesSniffer(nodesSniffer).build();
//end::custom-nodes-sniffer
}
}


@@ -20,7 +20,6 @@
package org.elasticsearch.test.rest;
import org.apache.http.util.EntityUtils;
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.elasticsearch.action.ActionFuture;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.client.Response;
@@ -31,57 +30,53 @@ import org.junit.After;
import org.junit.Before;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.instanceOf;
/**
* Tests that wait for refresh is fired if the index is closed.
*/
@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/33533")
public class WaitForRefreshAndCloseIT extends ESRestTestCase {
@Before
public void setupIndex() throws IOException {
try {
client().performRequest(new Request("DELETE", indexName()));
} catch (ResponseException e) {
// If we get an error, it should be because the index doesn't exist
assertEquals(404, e.getResponse().getStatusLine().getStatusCode());
}
Request request = new Request("PUT", indexName());
Request request = new Request("PUT", "/test");
request.setJsonEntity("{\"settings\":{\"refresh_interval\":-1}}");
client().performRequest(request);
}
@After
public void cleanupIndex() throws IOException {
client().performRequest(new Request("DELETE", indexName()));
}
private String indexName() {
return getTestName().toLowerCase(Locale.ROOT);
client().performRequest(new Request("DELETE", "/test"));
}
private String docPath() {
return indexName() + "/test/1";
return "test/_doc/1";
}
public void testIndexAndThenClose() throws Exception {
closeWhileListenerEngaged(start("PUT", "", "{\"test\":\"test\"}"));
Request request = new Request("PUT", docPath());
request.setJsonEntity("{\"test\":\"test\"}");
closeWhileListenerEngaged(start(request));
}
public void testUpdateAndThenClose() throws Exception {
Request request = new Request("PUT", docPath());
request.setJsonEntity("{\"test\":\"test\"}");
client().performRequest(request);
closeWhileListenerEngaged(start("POST", "/_update", "{\"doc\":{\"name\":\"test\"}}"));
Request createDoc = new Request("PUT", docPath());
createDoc.setJsonEntity("{\"test\":\"test\"}");
client().performRequest(createDoc);
Request updateDoc = new Request("POST", docPath() + "/_update");
updateDoc.setJsonEntity("{\"doc\":{\"name\":\"test\"}}");
closeWhileListenerEngaged(start(updateDoc));
}
public void testDeleteAndThenClose() throws Exception {
Request request = new Request("PUT", docPath());
request.setJsonEntity("{\"test\":\"test\"}");
client().performRequest(request);
closeWhileListenerEngaged(start("DELETE", "", null));
closeWhileListenerEngaged(start(new Request("DELETE", docPath())));
}
private void closeWhileListenerEngaged(ActionFuture<String> future) throws Exception {
@@ -89,40 +84,52 @@ public class WaitForRefreshAndCloseIT extends ESRestTestCase {
assertBusy(() -> {
Map<String, Object> stats;
try {
stats = entityAsMap(client().performRequest(new Request("GET", indexName() + "/_stats/refresh")));
stats = entityAsMap(client().performRequest(new Request("GET", "/test/_stats/refresh")));
} catch (IOException e) {
throw new RuntimeException(e);
}
@SuppressWarnings("unchecked")
Map<String, Object> indices = (Map<String, Object>) stats.get("indices");
@SuppressWarnings("unchecked")
Map<String, Object> theIndex = (Map<String, Object>) indices.get(indexName());
@SuppressWarnings("unchecked")
Map<String, Object> total = (Map<String, Object>) theIndex.get("total");
@SuppressWarnings("unchecked")
Map<String, Object> refresh = (Map<String, Object>) total.get("refresh");
int listeners = (int) refresh.get("listeners");
Map<?, ?> indices = (Map<?, ?>) stats.get("indices");
Map<?, ?> theIndex = (Map<?, ?>) indices.get("test");
Map<?, ?> total = (Map<?, ?>) theIndex.get("total");
Map<?, ?> refresh = (Map<?, ?>) total.get("refresh");
int listeners = (Integer) refresh.get("listeners");
assertEquals(1, listeners);
});
// Close the index. That should flush the listener.
client().performRequest(new Request("POST", indexName() + "/_close"));
client().performRequest(new Request("POST", "/test/_close"));
// The request shouldn't fail. It certainly shouldn't hang.
future.get();
/*
* The request may fail, but we really, really, really want to make
* sure that it doesn't time out.
*/
try {
future.get(1, TimeUnit.MINUTES);
} catch (ExecutionException ee) {
/*
* If it *does* fail it should fail with a FORBIDDEN error because
* it attempts to take an action on a closed index. Again, it'd be
* nice if all requests waiting for refresh came back even though
* the index is closed and most do, but sometimes they bump into
* the index being closed. At least they don't hang forever. That'd
* be a nightmare.
*/
assertThat(ee.getCause(), instanceOf(ResponseException.class));
ResponseException re = (ResponseException) ee.getCause();
assertEquals(403, re.getResponse().getStatusLine().getStatusCode());
assertThat(EntityUtils.toString(re.getResponse().getEntity()), containsString("FORBIDDEN/4/index closed"));
}
}
private ActionFuture<String> start(String method, String path, String body) {
private ActionFuture<String> start(Request request) {
PlainActionFuture<String> future = new PlainActionFuture<>();
Request request = new Request(method, docPath() + path);
request.addParameter("refresh", "wait_for");
request.addParameter("error_trace", "");
request.setJsonEntity(body);
client().performRequestAsync(request, new ResponseListener() {
@Override
public void onSuccess(Response response) {
try {
future.onResponse(EntityUtils.toString(response.getEntity(), StandardCharsets.UTF_8));
future.onResponse(EntityUtils.toString(response.getEntity()));
} catch (IOException e) {
future.onFailure(e);
}


@@ -173,7 +173,7 @@ if not "%SERVICE_USERNAME%" == "" (
)
)
"%EXECUTABLE%" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main ++StartParams --quiet --StopClass org.elasticsearch.bootstrap.Elasticsearch --StopMethod close --Classpath "%ES_CLASSPATH%" --JvmMs %JVM_MS% --JvmMx %JVM_MX% --JvmSs %JVM_SS% --JvmOptions %ES_JAVA_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile "%SERVICE_ID%.pid" --DisplayName "%SERVICE_DISPLAY_NAME%" --Description "%SERVICE_DESCRIPTION%" --Jvm "%%JAVA_HOME%%%JVM_DLL%" --StartMode jvm --StopMode jvm --StartPath "%ES_HOME%" %SERVICE_PARAMS%
"%EXECUTABLE%" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main ++StartParams --quiet --StopClass org.elasticsearch.bootstrap.Elasticsearch --StopMethod close --Classpath "%ES_CLASSPATH%" --JvmMs %JVM_MS% --JvmMx %JVM_MX% --JvmSs %JVM_SS% --JvmOptions %ES_JAVA_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile "%SERVICE_ID%.pid" --DisplayName "%SERVICE_DISPLAY_NAME%" --Description "%SERVICE_DESCRIPTION%" --Jvm "%%JAVA_HOME%%%JVM_DLL%" --StartMode jvm --StopMode jvm --StartPath "%ES_HOME%" %SERVICE_PARAMS% ++Environment HOSTNAME="%%COMPUTERNAME%%"
if not errorlevel 1 goto installed
echo Failed installing '%SERVICE_ID%' service


@@ -1,5 +0,0 @@
#!/bin/bash
ES_MAIN_CLASS=org.elasticsearch.index.translog.TranslogToolCli \
"`dirname "$0"`"/elasticsearch-cli \
"$@"


@@ -1,12 +0,0 @@
@echo off
setlocal enabledelayedexpansion
setlocal enableextensions
set ES_MAIN_CLASS=org.elasticsearch.index.translog.TranslogToolCli
call "%~dp0elasticsearch-cli.bat" ^
%%* ^
|| exit /b 1
endlocal
endlocal


@@ -37,6 +37,14 @@
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
# NOTE: G1GC is only supported on JDK version 10 or later.
# To use G1GC uncomment the lines below.
# 10-:-XX:-UseConcMarkSweepGC
# 10-:-XX:-UseCMSInitiatingOccupancyOnly
# 10-:-XX:+UseG1GC
# 10-:-XX:InitiatingHeapOccupancyPercent=75
## optimizations
# pre-touch memory pages used by the JVM during initialization


@@ -19,9 +19,9 @@ are tests even if they don't have `// CONSOLE` but usually `// TEST` is used
for its modifiers:
* `// TEST[s/foo/bar/]`: Replace `foo` with `bar` in the generated test. This
should be used sparingly because it makes the snippet "lie". Sometimes,
though, you can use it to make the snippet more clear more clear. Keep in
mind the that if there are multiple substitutions then they are applied in
the order that they are defined.
though, you can use it to make the snippet more clear. Keep in mind that
if there are multiple substitutions then they are applied in the order that
they are defined.
* `// TEST[catch:foo]`: Used to expect errors in the requests. Replace `foo`
with `request` to expect a 400 error, for example. If the snippet contains
multiple requests then only the last request will expect the error.


@@ -100,6 +100,7 @@ buildRestTests.docs = fileTree(projectDir) {
exclude 'reference/rollup/apis/delete-job.asciidoc'
exclude 'reference/rollup/apis/get-job.asciidoc'
exclude 'reference/rollup/apis/rollup-caps.asciidoc'
exclude 'reference/graph/explore.asciidoc'
}
listSnippets.docs = buildRestTests.docs


@@ -1,6 +1,8 @@
[[java-query-dsl-type-query]]
==== Type Query
deprecated[7.0.0, Types are being removed, prefer filtering on a field instead. For more information, please see {ref}/removal-of-types.html[Removal of mapping types].]
See {ref}/query-dsl-type-query.html[Type Query]
["source","java",subs="attributes,callouts,macros"]


@@ -1,16 +1,22 @@
[[java-rest-high-cluster-get-settings]]
--
:api: get-settings
:request: ClusterGetSettingsRequest
:response: ClusterGetSettingsResponse
--
[id="{upid}-{api}"]
=== Cluster Get Settings API
The Cluster Get Settings API allows to get the cluster wide settings.
[[java-rest-high-cluster-get-settings-request]]
[id="{upid}-{api}-request"]
==== Cluster Get Settings Request
A `ClusterGetSettingsRequest`:
A +{request}+:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-request]
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
==== Optional arguments
@@ -18,75 +24,40 @@ The following arguments can optionally be provided:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-request-includeDefaults]
include-tagged::{doc-tests-file}[{api}-request-includeDefaults]
--------------------------------------------------
<1> By default only those settings that were explicitly set are returned. Setting this to true also returns
the default settings.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-request-local]
include-tagged::{doc-tests-file}[{api}-request-local]
--------------------------------------------------
<1> By default the request goes to the master of the cluster to get the latest results. If local is specified it gets
the results from whichever node the request goes to.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-request-masterTimeout]
include-tagged::{doc-tests-file}[{api}-request-masterTimeout]
--------------------------------------------------
<1> Timeout to connect to the master node as a `TimeValue`
<2> Timeout to connect to the master node as a `String`
[[java-rest-high-cluster-get-settings-sync]]
==== Synchronous Execution
include::../execution.asciidoc[]
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-execute]
--------------------------------------------------
<1> Execute the request and get back the response in a `ClusterGetSettingsResponse` object.
[[java-rest-high-cluster-get-settings-async]]
==== Asynchronous Execution
The asynchronous execution of a cluster get settings requires both the
`ClusterGetSettingsRequest` instance and an `ActionListener` instance to be
passed to the asynchronous method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-execute-async]
--------------------------------------------------
<1> The `ClusterGetSettingsRequest` to execute and the `ActionListener`
to use when the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for `ClusterGetSettingsResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of a failure. The raised exception is provided as an argument
[[java-rest-high-cluster-get-settings-response]]
[id="{upid}-{api}-response"]
==== Cluster Get Settings Response
The returned `ClusterGetSettingsResponse` allows retrieving information about the
The returned +{response}+ allows retrieving information about the
executed operation as follows:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[get-settings-response]
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
<1> Get the persistent settings.
<2> Get the transient settings.
<3> Get the default settings (returns empty settings if `includeDefaults` was not set to `true`).
<4> Get the value as a `String` for a particular setting. The order of searching is first in `persistentSettings` then in
`transientSettings` and finally, if not found in either, in `defaultSettings`.
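The lookup order just described (persistent first, then transient, then defaults) can be sketched as a small self-contained helper; the class and method names below are illustrative only, not part of the client:

```java
import java.util.Map;

public class SettingLookup {
    // Sketch of the search order described above: persistent settings
    // are consulted first, then transient settings, and finally the
    // default settings if the key was found in neither.
    public static String getSetting(Map<String, String> persistent,
                                    Map<String, String> transientSettings,
                                    Map<String, String> defaults,
                                    String key) {
        if (persistent.containsKey(key)) {
            return persistent.get(key);
        }
        if (transientSettings.containsKey(key)) {
            return transientSettings.get(key);
        }
        return defaults.get(key);
    }
}
```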


@ -1,16 +1,22 @@
[[java-rest-high-cluster-health]]
--
:api: health
:request: ClusterHealthRequest
:response: ClusterHealthResponse
--
[id="{upid}-{api}"]
=== Cluster Health API
The Cluster Health API allows getting cluster health.
[[java-rest-high-cluster-health-request]]
[id="{upid}-{api}-request"]
==== Cluster Health Request
A `ClusterHealthRequest`:
A +{request}+:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request]
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
There are no required parameters. By default, the client will check all indices and will not wait
for any events.
@ -21,14 +27,14 @@ Indices which should be checked can be passed in the constructor:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-indices-ctr]
include-tagged::{doc-tests-file}[{api}-request-indices-ctr]
--------------------------------------------------
Or using the corresponding setter method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-indices-setter]
include-tagged::{doc-tests-file}[{api}-request-indices-setter]
--------------------------------------------------
==== Other parameters
@ -37,53 +43,53 @@ Other parameters can be passed only through setter methods:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-timeout]
include-tagged::{doc-tests-file}[{api}-request-timeout]
--------------------------------------------------
<1> Timeout for the request as a `TimeValue`. Defaults to 30 seconds
<2> As a `String`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-master-timeout]
include-tagged::{doc-tests-file}[{api}-request-master-timeout]
--------------------------------------------------
<1> Timeout to connect to the master node as a `TimeValue`. Defaults to the same as `timeout`
<2> As a `String`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wait-status]
include-tagged::{doc-tests-file}[{api}-request-wait-status]
--------------------------------------------------
<1> The status to wait for (e.g. `green`, `yellow`, or `red`). Accepts a `ClusterHealthStatus` value.
<2> Using predefined method
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wait-events]
include-tagged::{doc-tests-file}[{api}-request-wait-events]
--------------------------------------------------
<1> The priority of the events to wait for. Accepts a `Priority` value.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-level]
include-tagged::{doc-tests-file}[{api}-request-level]
--------------------------------------------------
<1> The level of detail of the returned health information. Accepts a `ClusterHealthRequest.Level` value.
<1> The level of detail of the returned health information. Accepts a +{request}.Level+ value.
Default value is `cluster`.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wait-relocation]
include-tagged::{doc-tests-file}[{api}-request-wait-relocation]
--------------------------------------------------
<1> Wait for 0 relocating shards. Defaults to `false`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wait-initializing]
include-tagged::{doc-tests-file}[{api}-request-wait-initializing]
--------------------------------------------------
<1> Wait for 0 initializing shards. Defaults to `false`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wait-nodes]
include-tagged::{doc-tests-file}[{api}-request-wait-nodes]
--------------------------------------------------
<1> Wait for `N` nodes in the cluster. Defaults to `0`
<2> Using `>=N`, `<=N`, `>N` and `<N` notation
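The `>=N`, `<=N`, `>N` and `<N` notation above compares the current node count against `N`, while a bare number means exactly `N` nodes. A minimal, self-contained sketch of that comparison (illustrative only, not the client's parser):

```java
public class NodeCountPredicate {
    // Evaluates the waitForNodes notation described above against a
    // current node count. A bare number means "exactly N" nodes.
    public static boolean matches(String expr, int nodes) {
        if (expr.startsWith(">=")) {
            return nodes >= Integer.parseInt(expr.substring(2));
        }
        if (expr.startsWith("<=")) {
            return nodes <= Integer.parseInt(expr.substring(2));
        }
        if (expr.startsWith(">")) {
            return nodes > Integer.parseInt(expr.substring(1));
        }
        if (expr.startsWith("<")) {
            return nodes < Integer.parseInt(expr.substring(1));
        }
        return nodes == Integer.parseInt(expr);
    }
}
```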
@ -91,7 +97,7 @@ include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wai
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wait-active]
include-tagged::{doc-tests-file}[{api}-request-wait-active]
--------------------------------------------------
<1> Wait for all shards to be active in the cluster
@ -99,77 +105,42 @@ include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-wai
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-request-local]
include-tagged::{doc-tests-file}[{api}-request-local]
--------------------------------------------------
<1> A non-master node can be used for this request. Defaults to `false`
[[java-rest-high-cluster-health-sync]]
==== Synchronous Execution
include::../execution.asciidoc[]
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-execute]
--------------------------------------------------
[[java-rest-high-cluster-health-async]]
==== Asynchronous Execution
The asynchronous execution of a cluster health request requires both the
`ClusterHealthRequest` instance and an `ActionListener` instance to be
passed to the asynchronous method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-execute-async]
--------------------------------------------------
<1> The `ClusterHealthRequest` to execute and the `ActionListener` to use
when the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for `ClusterHealthResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of a failure. The raised exception is provided as an argument
[[java-rest-high-cluster-health-response]]
[id="{upid}-{api}-response"]
==== Cluster Health Response
The returned `ClusterHealthResponse` contains the following information about the
The returned +{response}+ contains the following information about the
cluster:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-general]
include-tagged::{doc-tests-file}[{api}-response-general]
--------------------------------------------------
<1> Name of the cluster
<2> Cluster status (`green`, `yellow` or `red`)
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-request-status]
include-tagged::{doc-tests-file}[{api}-response-request-status]
--------------------------------------------------
<1> Whether the request timed out while processing
<2> Status of the request (`OK` or `REQUEST_TIMEOUT`). Other errors will be thrown as exceptions
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-nodes]
include-tagged::{doc-tests-file}[{api}-response-nodes]
--------------------------------------------------
<1> Number of nodes in the cluster
<2> Number of data nodes in the cluster
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-shards]
include-tagged::{doc-tests-file}[{api}-response-shards]
--------------------------------------------------
<1> Number of active shards
<2> Number of primary active shards
@ -181,7 +152,7 @@ include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-sh
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-task]
include-tagged::{doc-tests-file}[{api}-response-task]
--------------------------------------------------
<1> Maximum wait time of all tasks in the queue
<2> Number of currently pending tasks
@ -189,18 +160,18 @@ include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-ta
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-indices]
include-tagged::{doc-tests-file}[{api}-response-indices]
--------------------------------------------------
<1> Detailed information about indices in the cluster
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-index]
include-tagged::{doc-tests-file}[{api}-response-index]
--------------------------------------------------
<1> Detailed information about a specific index
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[health-response-shard-details]
include-tagged::{doc-tests-file}[{api}-response-shard-details]
--------------------------------------------------
<1> Detailed information about a specific shard


@ -1,16 +1,22 @@
[[java-rest-high-cluster-put-settings]]
--
:api: put-settings
:request: ClusterUpdateSettingsRequest
:response: ClusterUpdateSettingsResponse
--
[id="{upid}-{api}"]
=== Cluster Update Settings API
The Cluster Update Settings API allows updating cluster-wide settings.
[[java-rest-high-cluster-put-settings-request]]
[id="{upid}-{api}-request"]
==== Cluster Update Settings Request
A `ClusterUpdateSettingsRequest`:
A +{request}+:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-request]
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
==== Cluster Settings
@ -18,7 +24,7 @@ At least one setting to be updated must be provided:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-request-cluster-settings]
include-tagged::{doc-tests-file}[{api}-request-cluster-settings]
--------------------------------------------------
<1> Sets the transient settings to be applied
<2> Sets the persistent setting to be applied
@ -28,26 +34,26 @@ The settings to be applied can be provided in different ways:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-create-settings]
include-tagged::{doc-tests-file}[{api}-create-settings]
--------------------------------------------------
<1> Creates a transient setting as `Settings`
<2> Creates a persistent setting as `Settings`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-settings-builder]
include-tagged::{doc-tests-file}[{api}-settings-builder]
--------------------------------------------------
<1> Settings provided as `Settings.Builder`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-settings-source]
include-tagged::{doc-tests-file}[{api}-settings-source]
--------------------------------------------------
<1> Settings provided as `String`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-settings-map]
include-tagged::{doc-tests-file}[{api}-settings-map]
--------------------------------------------------
<1> Settings provided as a `Map`
@ -56,7 +62,7 @@ The following arguments can optionally be provided:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-request-timeout]
include-tagged::{doc-tests-file}[{api}-request-timeout]
--------------------------------------------------
<1> Timeout to wait for all the nodes to acknowledge the settings were applied
as a `TimeValue`
@ -65,58 +71,23 @@ as a `String`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-request-masterTimeout]
include-tagged::{doc-tests-file}[{api}-request-masterTimeout]
--------------------------------------------------
<1> Timeout to connect to the master node as a `TimeValue`
<2> Timeout to connect to the master node as a `String`
[[java-rest-high-cluster-put-settings-sync]]
==== Synchronous Execution
include::../execution.asciidoc[]
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-execute]
--------------------------------------------------
[[java-rest-high-cluster-put-settings-async]]
==== Asynchronous Execution
The asynchronous execution of a cluster update settings request requires both the
`ClusterUpdateSettingsRequest` instance and an `ActionListener` instance to be
passed to the asynchronous method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-execute-async]
--------------------------------------------------
<1> The `ClusterUpdateSettingsRequest` to execute and the `ActionListener`
to use when the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for `ClusterUpdateSettingsResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of a failure. The raised exception is provided as an argument
[[java-rest-high-cluster-put-settings-response]]
[id="{upid}-{api}-response"]
==== Cluster Update Settings Response
The returned `ClusterUpdateSettings` allows retrieving information about the
The returned +{response}+ allows retrieving information about the
executed operation as follows:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/ClusterClientDocumentationIT.java[put-settings-response]
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
<1> Indicates whether all of the nodes have acknowledged the request
<2> Indicates which transient settings have been applied
<3> Indicates which persistent settings have been applied
<3> Indicates which persistent settings have been applied


@ -1,14 +1,20 @@
[[java-rest-high-document-index]]
--
:api: index
:request: IndexRequest
:response: IndexResponse
--
[id="{upid}-{api}"]
=== Index API
[[java-rest-high-document-index-request]]
[id="{upid}-{api}-request"]
==== Index Request
An `IndexRequest` requires the following arguments:
An +{request}+ requires the following arguments:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-string]
include-tagged::{doc-tests-file}[{api}-request-string]
--------------------------------------------------
<1> Index
<2> Type
@ -21,21 +27,21 @@ The document source can be provided in different ways in addition to the
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-map]
include-tagged::{doc-tests-file}[{api}-request-map]
--------------------------------------------------
<1> Document source provided as a `Map` which gets automatically converted
to JSON format
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-xcontent]
include-tagged::{doc-tests-file}[{api}-request-xcontent]
--------------------------------------------------
<1> Document source provided as an `XContentBuilder` object, using one of the
Elasticsearch built-in helpers to generate JSON content
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-shortcut]
include-tagged::{doc-tests-file}[{api}-request-shortcut]
--------------------------------------------------
<1> Document source provided as `Object` key-value pairs, which get converted to
JSON format
@ -45,95 +51,60 @@ The following arguments can optionally be provided:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-routing]
include-tagged::{doc-tests-file}[{api}-request-routing]
--------------------------------------------------
<1> Routing value
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-timeout]
include-tagged::{doc-tests-file}[{api}-request-timeout]
--------------------------------------------------
<1> Timeout to wait for primary shard to become available as a `TimeValue`
<2> Timeout to wait for primary shard to become available as a `String`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-refresh]
include-tagged::{doc-tests-file}[{api}-request-refresh]
--------------------------------------------------
<1> Refresh policy as a `WriteRequest.RefreshPolicy` instance
<2> Refresh policy as a `String`
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-version]
include-tagged::{doc-tests-file}[{api}-request-version]
--------------------------------------------------
<1> Version
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-version-type]
include-tagged::{doc-tests-file}[{api}-request-version-type]
--------------------------------------------------
<1> Version type
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-op-type]
include-tagged::{doc-tests-file}[{api}-request-op-type]
--------------------------------------------------
<1> Operation type provided as a `DocWriteRequest.OpType` value
<2> Operation type provided as a `String`: can be `create` or `index` (default)
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-request-pipeline]
include-tagged::{doc-tests-file}[{api}-request-pipeline]
--------------------------------------------------
<1> The name of the ingest pipeline to be executed before indexing the document
[[java-rest-high-document-index-sync]]
==== Synchronous Execution
include::../execution.asciidoc[]
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-execute]
--------------------------------------------------
[[java-rest-high-document-index-async]]
==== Asynchronous Execution
The asynchronous execution of an index request requires both the `IndexRequest`
instance and an `ActionListener` instance to be passed to the asynchronous
method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-execute-async]
--------------------------------------------------
<1> The `IndexRequest` to execute and the `ActionListener` to use when
the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for `IndexResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of failure. The raised exception is provided as an argument
[[java-rest-high-document-index-response]]
[id="{upid}-{api}-response"]
==== Index Response
The returned `IndexResponse` allows retrieving information about the executed
The returned +{response}+ allows retrieving information about the executed
operation as follows:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-response]
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
<1> Handle (if needed) the case where the document was created for the first
time
@ -148,7 +119,7 @@ be thrown:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-conflict]
include-tagged::{doc-tests-file}[{api}-conflict]
--------------------------------------------------
<1> The raised exception indicates that a version conflict error was returned
@ -157,6 +128,6 @@ same index, type and id already existed:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[index-optype]
include-tagged::{doc-tests-file}[{api}-optype]
--------------------------------------------------
<1> The raised exception indicates that a version conflict error was returned


@ -0,0 +1,73 @@
[[java-rest-high-document-rethrottle]]
=== Rethrottle API
[[java-rest-high-document-rethrottle-request]]
==== Rethrottle Request
A `RethrottleRequest` can be used to change the current throttling on a running
reindex, update-by-query or delete-by-query task or to disable throttling of
the task entirely. It requires the task Id of the task to change.
In its simplest form, you can use it to disable throttling of a running
task using the following:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[rethrottle-disable-request]
--------------------------------------------------
<1> Create a `RethrottleRequest` that disables throttling for a specific task id
By providing a `requestsPerSecond` argument, the request will change the
existing task throttling to the specified value:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[rethrottle-request]
--------------------------------------------------
<1> Request to change the throttling of a task to 100 requests per second
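The effect of a `requestsPerSecond` value can be pictured as the pause inserted between consecutive requests. The helper below is a hedged sketch of that arithmetic, not the client's implementation; treating an infinite rate as "no throttling" mirrors the disable case above:

```java
public class ThrottleSketch {
    // Returns the delay in milliseconds between consecutive requests
    // for a given target rate. An infinite or non-positive rate is
    // treated as "no throttling" in this sketch.
    public static long delayBetweenRequestsMillis(float requestsPerSecond) {
        if (requestsPerSecond <= 0 || Float.isInfinite(requestsPerSecond)) {
            return 0L;
        }
        return (long) (1000.0f / requestsPerSecond);
    }
}
```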
The rethrottling request can be executed by using one of the three appropriate
methods depending on whether a reindex, update-by-query or delete-by-query task
should be rethrottled:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[rethrottle-request-execution]
--------------------------------------------------
<1> Execute reindex rethrottling request
<2> The same for update-by-query
<3> The same for delete-by-query
[[java-rest-high-document-rethrottle-async]]
==== Asynchronous Execution
The asynchronous execution of a rethrottle request requires both the `RethrottleRequest`
instance and an `ActionListener` instance to be passed to the asynchronous
method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[rethrottle-execute-async]
--------------------------------------------------
<1> Execute reindex rethrottling asynchronously
<2> The same for update-by-query
<3> The same for delete-by-query
The asynchronous method does not block and returns immediately.
Once it is completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed. A typical listener looks like this:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/CRUDDocumentationIT.java[rethrottle-request-async-listener]
--------------------------------------------------
<1> Code executed when the request is successfully completed
<2> Code executed when the request fails with an exception
[[java-rest-high-document-retrottle-response]]
==== Rethrottle Response
Rethrottling returns the task that has been rethrottled in the form of a
`ListTasksResponse`. The structure of this response object is described in detail
in <<java-rest-high-cluster-list-tasks-response,this section>>.


@ -0,0 +1,48 @@
////
This file is included by every high level rest client API documentation page
so we don't have to copy and paste the same asciidoc over and over again. We
*do* have to copy and paste the same Java tests over and over again. For now
this is intentional because it forces us to *write* and execute the tests
which, while a bit ceremonial, does force us to cover these calls in *some*
test.
////
[id="{upid}-{api}-sync"]
==== Synchronous Execution
When executing a +{request}+ in the following manner, the client waits
for the +{response}+ to be returned before continuing with code execution:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-execute]
--------------------------------------------------
[id="{upid}-{api}-async"]
==== Asynchronous Execution
Executing a +{request}+ can also be done in an asynchronous fashion so that
the client can return directly. Users need to specify how the response or
potential failures will be handled by passing the request and a listener to the
asynchronous {api} method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-execute-async]
--------------------------------------------------
<1> The +{request}+ to execute and the `ActionListener` to use when
the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for +{response}+ looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed.
<2> Called when the whole +{request}+ fails.
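The callback contract described above can be sketched without the client library; `Listener` below is a stand-in for the real `ActionListener` interface and the names are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Stand-in for the client's ActionListener interface.
    interface Listener<T> {
        void onResponse(T response);
        void onFailure(Exception e);
    }

    // Runs the work without blocking the caller and reports the outcome
    // through exactly one of the two listener methods, mirroring the
    // asynchronous execution described above.
    public static <T> void executeAsync(CompletableFuture<T> work, Listener<T> listener) {
        work.whenComplete((response, error) -> {
            if (error == null) {
                listener.onResponse(response);
            } else {
                listener.onFailure(new Exception(error));
            }
        });
    }
}
```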


@ -1,4 +1,6 @@
[[java-rest-high]]
:mainid: java-rest-high
[id="{mainid}"]
= Java High Level REST Client
[partintro]
@ -31,3 +33,4 @@ include::migration.asciidoc[]
include::../license.asciidoc[]
:doc-tests!:
:mainid!:


@ -10,7 +10,7 @@ The query builders are used to create the query to execute within a search reque
is a query builder for every type of query supported by the Query DSL. Each query builder
implements the `QueryBuilder` interface and allows to set the specific options for a given
type of query. Once created, the `QueryBuilder` object can be set as the query parameter of
`SearchSourceBuilder`. The <<java-rest-high-document-search-request-building-queries, Search Request>>
`SearchSourceBuilder`. The <<java-rest-high-search-request-building-queries, Search Request>>
page shows an example of how to build a full search request using `SearchSourceBuilder` and
`QueryBuilder` objects. The <<java-rest-high-query-builders, Building Search Queries>> page
gives a list of all available search queries with their corresponding `QueryBuilder` objects
@ -24,7 +24,7 @@ aggregation (or pipeline aggregation) supported by Elasticsearch. All builders e
`AggregationBuilder` class (or `PipelineAggregationBuilder` class). Once created, `AggregationBuilder`
objects can be set as the aggregation parameter of `SearchSourceBuilder`. There is an example
of how `AggregationBuilder` objects are used with `SearchSourceBuilder` objects to define the aggregations
to compute with a search query on the <<java-rest-high-document-search-request-building-aggs, Search Request>> page.
to compute with a search query on the <<java-rest-high-search-request-building-aggs, Search Request>> page.
The <<java-rest-high-aggregation-builders, Building Aggregations>> page gives a list of all available
aggregations with their corresponding `AggregationBuilder` objects and `AggregationBuilders` helper methods.

View File

@ -0,0 +1,71 @@
[[java-rest-high-x-pack-ml-start-datafeed]]
=== Start Datafeed API
The Start Datafeed API provides the ability to start a {ml} datafeed in the cluster.
It accepts a `StartDatafeedRequest` object and responds
with a `StartDatafeedResponse` object.
[[java-rest-high-x-pack-ml-start-datafeed-request]]
==== Start Datafeed Request
A `StartDatafeedRequest` object is created referencing a non-null `datafeedId`.
All other fields are optional for the request.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-start-datafeed-request]
--------------------------------------------------
<1> Constructing a new request referencing an existing `datafeedId`
==== Optional Arguments
The following arguments are optional.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-start-datafeed-request-options]
--------------------------------------------------
<1> Set when the datafeed should end; the value is exclusive.
May be epoch seconds, epoch milliseconds, or an ISO 8601 string.
"now" is a special value that indicates the current time.
If you do not specify an end time, the datafeed runs continuously.
<2> Set when the datafeed should start; the value is inclusive.
May be epoch seconds, epoch milliseconds, or an ISO 8601 string.
If you do not specify a start time and the datafeed is associated with a new job,
the analysis starts from the earliest time for which data is available.
<3> Set the timeout for the request
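The accepted time formats can be illustrated with a small sketch. `parseTimeParam` is a hypothetical helper written only for this example (the real client simply forwards the string to the server), and the seconds-versus-milliseconds cutoff below is an assumption made for illustration:

```java
import java.time.Instant;
import java.time.format.DateTimeParseException;

public class DatafeedTimeSketch {

    // Hypothetical helper showing how "now", an ISO 8601 string, and a
    // numeric epoch value could all resolve to the same point in time.
    static Instant parseTimeParam(String value, Instant now) {
        if ("now".equals(value)) {
            return now;
        }
        try {
            // ISO 8601, e.g. "2018-10-03T08:36:34Z"
            return Instant.parse(value);
        } catch (DateTimeParseException ignored) {
            long epoch = Long.parseLong(value);
            // Assumption for this sketch only: values above ~11 digits are
            // treated as epoch milliseconds, smaller ones as epoch seconds.
            return epoch > 99_999_999_999L
                ? Instant.ofEpochMilli(epoch)
                : Instant.ofEpochSecond(epoch);
        }
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2018-10-03T08:36:34Z");
        System.out.println(parseTimeParam("now", now));
        System.out.println(parseTimeParam("1538555794", now));
        System.out.println(parseTimeParam("2018-10-03T08:36:34Z", now));
    }
}
```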
[[java-rest-high-x-pack-ml-start-datafeed-execution]]
==== Execution
The request can be executed through the `MachineLearningClient` contained
in the `RestHighLevelClient` object, accessed via the `machineLearningClient()` method.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-start-datafeed-execute]
--------------------------------------------------
<1> Did the datafeed successfully start?
[[java-rest-high-x-pack-ml-start-datafeed-execution-async]]
==== Asynchronous Execution
The request can also be executed asynchronously:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-start-datafeed-execute-async]
--------------------------------------------------
<1> The `StartDatafeedRequest` to execute and the `ActionListener` to use when
the execution completes
The method does not block and returns immediately. The passed `ActionListener` is used
to notify the caller of completion. A typical `ActionListener` for `StartDatafeedResponse` may
look like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-start-datafeed-listener]
--------------------------------------------------
<1> `onResponse` is called back when the action is completed successfully
<2> `onFailure` is called back when some unexpected error occurs

View File

@ -0,0 +1,66 @@
[[java-rest-high-x-pack-ml-stop-datafeed]]
=== Stop Datafeed API
The Stop Datafeed API provides the ability to stop a {ml} datafeed in the cluster.
It accepts a `StopDatafeedRequest` object and responds
with a `StopDatafeedResponse` object.
[[java-rest-high-x-pack-ml-stop-datafeed-request]]
==== Stop Datafeed Request
A `StopDatafeedRequest` object is created referencing any number of non-null `datafeedId` entries.
Wildcards and `_all` are also accepted.
All other fields are optional for the request.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-stop-datafeed-request]
--------------------------------------------------
<1> Constructing a new request referencing existing `datafeedId` entries.
==== Optional Arguments
The following arguments are optional.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-stop-datafeed-request-options]
--------------------------------------------------
<1> Whether to ignore if a wildcard expression matches no datafeeds. (This includes the `_all` string.)
<2> If true, the datafeed is stopped forcefully.
<3> Controls the amount of time to wait until a datafeed stops. The default value is 20 seconds.
[[java-rest-high-x-pack-ml-stop-datafeed-execution]]
==== Execution
The request can be executed through the `MachineLearningClient` contained
in the `RestHighLevelClient` object, accessed via the `machineLearningClient()` method.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-stop-datafeed-execute]
--------------------------------------------------
<1> Did the datafeed successfully stop?
[[java-rest-high-x-pack-ml-stop-datafeed-execution-async]]
==== Asynchronous Execution
The request can also be executed asynchronously:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-stop-datafeed-execute-async]
--------------------------------------------------
<1> The `StopDatafeedRequest` to execute and the `ActionListener` to use when
the execution completes
The method does not block and returns immediately. The passed `ActionListener` is used
to notify the caller of completion. A typical `ActionListener` for `StopDatafeedResponse` may
look like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/MlClientDocumentationIT.java[x-pack-ml-stop-datafeed-listener]
--------------------------------------------------
<1> `onResponse` is called back when the action is completed successfully
<2> `onFailure` is called back when some unexpected error occurs

View File

@ -0,0 +1,71 @@
[[java-rest-high-x-pack-rollup-get-job]]
=== Get Rollup Job API
The Get Rollup Job API can be used to get one or all rollup jobs from the
cluster. It accepts a `GetRollupJobRequest` object as a request and returns
a `GetRollupJobResponse`.
[[java-rest-high-x-pack-rollup-get-rollup-job-request]]
==== Get Rollup Job Request
A `GetRollupJobRequest` can be built without any parameters to get all of the
rollup jobs or with a job name to get a single job:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-request]
--------------------------------------------------
<1> Gets all jobs.
<2> Gets `job_1`.
[[java-rest-high-x-pack-rollup-get-rollup-job-execution]]
==== Execution
The Get Rollup Job API can be executed through a `RollupClient`
instance. Such an instance can be retrieved from a `RestHighLevelClient`
using the `rollup()` method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-execute]
--------------------------------------------------
[[java-rest-high-x-pack-rollup-get-rollup-job-response]]
==== Response
The returned `GetRollupJobResponse` includes a `JobWrapper` per returned job
which contains the configuration of the job, the job's current status, and
statistics about the job's past execution.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-response]
--------------------------------------------------
<1> We only asked for a single job
[[java-rest-high-x-pack-rollup-get-rollup-job-async]]
==== Asynchronous Execution
This request can be executed asynchronously:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-execute-async]
--------------------------------------------------
<1> The `GetRollupJobRequest` to execute and the `ActionListener` to use when
the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed, the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for `GetRollupJobResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/RollupDocumentationIT.java[x-pack-rollup-get-rollup-job-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument
<2> Called in case of failure. The raised exception is provided as an argument

View File

@ -1,10 +1,16 @@
[[java-rest-high-search]]
--
:api: search
:request: SearchRequest
:response: SearchResponse
--
[id="{upid}-{api}"]
=== Search API
[[java-rest-high-document-search-request]]
[id="{upid}-{api}-request"]
==== Search Request
The `SearchRequest` is used for any operation that has to do with searching
The +{request}+ is used for any operation that has to do with searching
documents, aggregations, suggestions and also offers ways of requesting
highlighting on the resulting documents.
@ -12,7 +18,7 @@ In its most basic form, we can add a query to the request:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-basic]
include-tagged::{doc-tests-file}[{api}-request-basic]
--------------------------------------------------
<1> Creates the `SearchRequest`. Without arguments this runs against all indices.
@ -20,14 +26,14 @@ include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-basic]
<3> Add a `match_all` query to the `SearchSourceBuilder`.
<4> Add the `SearchSourceBuilder` to the `SearchRequest`.
[[java-rest-high-search-request-optional]]
[id="{upid}-{api}-request-optional"]
===== Optional arguments
Let's first look at some of the optional arguments of a `SearchRequest`:
Let's first look at some of the optional arguments of a +{request}+:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-indices-types]
include-tagged::{doc-tests-file}[{api}-request-indices-types]
--------------------------------------------------
<1> Restricts the request to an index
<2> Limits the request to a type
@ -36,20 +42,20 @@ There are a couple of other interesting optional parameters:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-routing]
include-tagged::{doc-tests-file}[{api}-request-routing]
--------------------------------------------------
<1> Set a routing parameter
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-indicesOptions]
include-tagged::{doc-tests-file}[{api}-request-indicesOptions]
--------------------------------------------------
<1> Setting `IndicesOptions` controls how unavailable indices are resolved and
how wildcard expressions are expanded
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-preference]
include-tagged::{doc-tests-file}[{api}-request-preference]
--------------------------------------------------
<1> Use the preference parameter, e.g. to execute the search on local
shards when possible. The default is to randomize across shards.
@ -65,7 +71,7 @@ Here are a few examples of some common options:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-source-basics]
include-tagged::{doc-tests-file}[{api}-source-basics]
--------------------------------------------------
<1> Create a `SearchSourceBuilder` with default options.
<2> Set the query. Can be any type of `QueryBuilder`
@ -77,14 +83,14 @@ Defaults to 10.
take.
After this, the `SearchSourceBuilder` only needs to be added to the
`SearchRequest`:
+{request}+:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-source-setter]
include-tagged::{doc-tests-file}[{api}-source-setter]
--------------------------------------------------
[[java-rest-high-document-search-request-building-queries]]
[id="{upid}-{api}-request-building-queries"]
===== Building queries
Search queries are created using `QueryBuilder` objects. A `QueryBuilder` exists
@ -94,7 +100,7 @@ A `QueryBuilder` can be created using its constructor:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-query-builder-ctor]
include-tagged::{doc-tests-file}[{api}-query-builder-ctor]
--------------------------------------------------
<1> Create a full text {ref}/query-dsl-match-query.html[Match Query] that matches
the text "kimchy" over the field "user".
@ -104,7 +110,7 @@ of the search query it creates:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-query-builder-options]
include-tagged::{doc-tests-file}[{api}-query-builder-options]
--------------------------------------------------
<1> Enable fuzzy matching on the match query
<2> Set the prefix length option on the match query
@ -117,7 +123,7 @@ This class provides helper methods that can be used to create `QueryBuilder` obj
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-query-builders]
include-tagged::{doc-tests-file}[{api}-query-builders]
--------------------------------------------------
Whichever method is used to create it, the `QueryBuilder` object must be added
@ -125,10 +131,10 @@ to the `SearchSourceBuilder` as follows:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-query-setter]
include-tagged::{doc-tests-file}[{api}-query-setter]
--------------------------------------------------
The <<java-rest-high-query-builders, Building Queries>> page gives a list of all available search queries with
The <<{upid}-query-builders, Building Queries>> page gives a list of all available search queries with
their corresponding `QueryBuilder` objects and `QueryBuilders` helper methods.
@ -138,7 +144,7 @@ The `SearchSourceBuilder` allows to add one or more `SortBuilder` instances. The
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-source-sorting]
include-tagged::{doc-tests-file}[{api}-source-sorting]
--------------------------------------------------
<1> Sort descending by `_score` (the default)
<2> Also sort ascending by `_id` field
@ -149,17 +155,17 @@ By default, search requests return the contents of the document `_source` but li
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-source-filtering-off]
include-tagged::{doc-tests-file}[{api}-source-filtering-off]
--------------------------------------------------
The method also accepts an array of one or more wildcard patterns to control which fields get included or excluded in a more fine-grained way:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-source-filtering-includes]
include-tagged::{doc-tests-file}[{api}-source-filtering-includes]
--------------------------------------------------
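As an illustration of how such wildcard patterns select field names, here is a small sketch; `matches` is a hypothetical helper written for this example, as the actual source filtering is performed server-side by Elasticsearch:

```java
import java.util.regex.Pattern;

public class SourceFilterSketch {

    // Hypothetical helper: tests whether a field name matches a simple
    // wildcard pattern, where "*" matches any run of characters.
    static boolean matches(String pattern, String field) {
        // Quote the pattern literally, then re-open the quoting around each
        // "*" so it becomes the regular expression ".*".
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return field.matches(regex);
    }

    public static void main(String[] args) {
        System.out.println(matches("user.*", "user.id"));   // the "user.id" field is included
        System.out.println(matches("user.*", "username"));  // "username" is not
    }
}
```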
[[java-rest-high-request-highlighting]]
[id="{upid}-{api}-request-highlighting"]
===== Requesting Highlighting
Highlighting search results can be achieved by setting a `HighlightBuilder` on the
@ -168,7 +174,7 @@ fields by adding one or more `HighlightBuilder.Field` instances to a `HighlightB
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-highlighting]
include-tagged::{doc-tests-file}[{api}-request-highlighting]
--------------------------------------------------
<1> Creates a new `HighlightBuilder`
<2> Create a field highlighter for the `title` field
@ -179,9 +185,9 @@ There are many options which are explained in detail in the Rest API documentati
API parameters (e.g. `pre_tags`) are usually changed by
setters with a similar name (e.g. `#preTags(String ...)`).
Highlighted text fragments can <<java-rest-high-retrieve-highlighting,later be retrieved>> from the `SearchResponse`.
Highlighted text fragments can <<{upid}-{api}-response-highlighting,later be retrieved>> from the +{response}+.
[[java-rest-high-document-search-request-building-aggs]]
[id="{upid}-{api}-request-building-aggs"]
===== Requesting Aggregations
Aggregations can be added to the search by first creating the appropriate
@ -191,13 +197,13 @@ sub-aggregation on the average age of employees in the company:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations]
include-tagged::{doc-tests-file}[{api}-request-aggregations]
--------------------------------------------------
The <<java-rest-high-aggregation-builders, Building Aggregations>> page gives a list of all available aggregations with
The <<{upid}-aggregation-builders, Building Aggregations>> page gives a list of all available aggregations with
their corresponding `AggregationBuilder` objects and `AggregationBuilders` helper methods.
We will later see how to <<java-rest-high-retrieve-aggs,access aggregations>> in the `SearchResponse`.
We will later see how to <<{upid}-{api}-response-aggs,access aggregations>> in the +{response}+.
===== Requesting Suggestions
@ -207,14 +213,14 @@ need to be added to the top level `SuggestBuilder`, which itself can be set on t
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-suggestion]
include-tagged::{doc-tests-file}[{api}-request-suggestion]
--------------------------------------------------
<1> Creates a new `TermSuggestionBuilder` for the `user` field and
the text `kmichy`
<2> Adds the suggestion builder and names it `suggest_user`
We will later see how to <<java-rest-high-retrieve-suggestions,retrieve suggestions>> from the
`SearchResponse`.
We will later see how to <<{upid}-{api}-response-suggestions,retrieve suggestions>> from the
+{response}+.
===== Profiling Queries and Aggregations
@ -223,56 +229,18 @@ a specific search request. in order to use it, the profile flag must be set to t
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-profiling]
include-tagged::{doc-tests-file}[{api}-request-profiling]
--------------------------------------------------
Once the `SearchRequest` is executed, the corresponding `SearchResponse` will
<<java-rest-high-retrieve-profile-results,contain the profiling results>>.
Once the +{request}+ is executed, the corresponding +{response}+ will
<<{upid}-{api}-response-profile,contain the profiling results>>.
[[java-rest-high-document-search-sync]]
==== Synchronous Execution
include::../execution.asciidoc[]
When executing a `SearchRequest` in the following manner, the client waits
for the `SearchResponse` to be returned before continuing with code execution:
[id="{upid}-{api}-response"]
==== {response}
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-execute]
--------------------------------------------------
[[java-rest-high-document-search-async]]
==== Asynchronous Execution
Executing a `SearchRequest` can also be done in an asynchronous fashion so that
the client can return directly. Users need to specify how the response or
potential failures will be handled by passing the request and a listener to the
asynchronous search method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-execute-async]
--------------------------------------------------
<1> The `SearchRequest` to execute and the `ActionListener` to use when
the execution completes
The asynchronous method does not block and returns immediately. Once it is
completed, the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for `SearchResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed.
<2> Called when the whole `SearchRequest` fails.
[[java-rest-high-search-response]]
==== SearchResponse
The `SearchResponse` that is returned by executing the search provides details
The +{response}+ that is returned by executing the search provides details
about the search execution itself as well as access to the documents returned.
First, there is useful information about the request execution itself, like the
HTTP status code, execution time or whether the request terminated early or timed
@ -280,7 +248,7 @@ out:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-response-1]
include-tagged::{doc-tests-file}[{api}-response-1]
--------------------------------------------------
Second, the response also provides information about the execution on the
@ -291,10 +259,10 @@ failures can also be handled by iterating over an array off
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-response-2]
include-tagged::{doc-tests-file}[{api}-response-2]
--------------------------------------------------
[[java-rest-high-retrieve-searchHits]]
[id="{upid}-{api}-response-search-hits"]
===== Retrieving SearchHits
To get access to the returned documents, we need to first get the `SearchHits`
@ -302,7 +270,7 @@ contained in the response:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-hits-get]
include-tagged::{doc-tests-file}[{api}-hits-get]
--------------------------------------------------
The `SearchHits` provides global information about all hits, like total number
@ -310,7 +278,7 @@ of hits or the maximum score:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-hits-info]
include-tagged::{doc-tests-file}[{api}-hits-info]
--------------------------------------------------
Nested inside the `SearchHits` are the individual search results that can
@ -319,7 +287,7 @@ be iterated over:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-hits-singleHit]
include-tagged::{doc-tests-file}[{api}-hits-singleHit]
--------------------------------------------------
The `SearchHit` provides access to basic information like index, type, docId and
@ -327,7 +295,7 @@ score of each search hit:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-hits-singleHit-properties]
include-tagged::{doc-tests-file}[{api}-hits-singleHit-properties]
--------------------------------------------------
Furthermore, it lets you get back the document source, either as a simple
@ -338,34 +306,34 @@ cases need to be cast accordingly:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-hits-singleHit-source]
include-tagged::{doc-tests-file}[{api}-hits-singleHit-source]
--------------------------------------------------
[[java-rest-high-retrieve-highlighting]]
[id="{upid}-{api}-response-highlighting"]
===== Retrieving Highlighting
If <<java-rest-high-request-highlighting,requested>>, highlighted text fragments can be retrieved from each `SearchHit` in the result. The hit object offers
If <<{upid}-{api}-request-highlighting,requested>>, highlighted text fragments can be retrieved from each `SearchHit` in the result. The hit object offers
access to a map of field names to `HighlightField` instances, each of which contains one
or many highlighted text fragments:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-highlighting-get]
include-tagged::{doc-tests-file}[{api}-request-highlighting-get]
--------------------------------------------------
<1> Get the highlighting for the `title` field
<2> Get one or many fragments containing the highlighted field content
[[java-rest-high-retrieve-aggs]]
[id="{upid}-{api}-response-aggs"]
===== Retrieving Aggregations
Aggregations can be retrieved from the `SearchResponse` by first getting the
Aggregations can be retrieved from the +{response}+ by first getting the
root of the aggregation tree, the `Aggregations` object, and then getting the
aggregation by name.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations-get]
include-tagged::{doc-tests-file}[{api}-request-aggregations-get]
--------------------------------------------------
<1> Get the `by_company` terms aggregation
<2> Get the bucket that is keyed with `Elastic`
@ -377,7 +345,7 @@ otherwise a `ClassCastException` will be thrown:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations-get-wrongCast]
include-tagged::{doc-tests-file}[search-request-aggregations-get-wrongCast]
--------------------------------------------------
<1> This will throw an exception because "by_company" is a `terms` aggregation
but we try to retrieve it as a `range` aggregation
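One defensive alternative can be sketched with hypothetical stand-in types (`Aggregation`, `Terms`, and `Range` below are local markers, not the client's classes): check the runtime type before casting, so a mismatch yields `null` instead of a `ClassCastException`.

```java
import java.util.HashMap;
import java.util.Map;

public class AggCastSketch {

    // Local marker types standing in for the client's aggregation classes.
    static class Aggregation {}
    static class Terms extends Aggregation {}
    static class Range extends Aggregation {}

    // Retrieve an aggregation under an expected type, returning null when the
    // stored aggregation is of a different type rather than throwing.
    static <T extends Aggregation> T get(Map<String, Aggregation> aggs,
                                         String name, Class<T> expected) {
        Aggregation agg = aggs.get(name);
        return expected.isInstance(agg) ? expected.cast(agg) : null;
    }

    public static void main(String[] args) {
        Map<String, Aggregation> aggs = new HashMap<>();
        aggs.put("by_company", new Terms());
        // Matching type: the aggregation is returned.
        System.out.println(get(aggs, "by_company", Terms.class) != null);
        // Mismatched type: null instead of a ClassCastException.
        System.out.println(get(aggs, "by_company", Range.class) != null);
    }
}
```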
@ -388,14 +356,14 @@ needs to happen explicitly:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations-asMap]
include-tagged::{doc-tests-file}[{api}-request-aggregations-asMap]
--------------------------------------------------
There are also getters that return all top-level aggregations as a list:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations-asList]
include-tagged::{doc-tests-file}[{api}-request-aggregations-asList]
--------------------------------------------------
And last but not least you can iterate over all aggregations and then e.g.
@ -403,17 +371,17 @@ decide how to further process them based on their type:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-aggregations-iterator]
include-tagged::{doc-tests-file}[{api}-request-aggregations-iterator]
--------------------------------------------------
[[java-rest-high-retrieve-suggestions]]
[id="{upid}-{api}-response-suggestions"]
===== Retrieving Suggestions
To get back the suggestions from a `SearchResponse`, use the `Suggest` object as an entry point and then retrieve the nested suggestion objects:
To get back the suggestions from a +{response}+, use the `Suggest` object as an entry point and then retrieve the nested suggestion objects:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-suggestion-get]
include-tagged::{doc-tests-file}[{api}-request-suggestion-get]
--------------------------------------------------
<1> Use the `Suggest` class to access suggestions
<2> Suggestions can be retrieved by name. You need to assign them to the correct
@ -421,21 +389,21 @@ type of Suggestion class (here `TermSuggestion`), otherwise a `ClassCastExceptio
<3> Iterate over the suggestion entries
<4> Iterate over the options in one entry
[[java-rest-high-retrieve-profile-results]]
[id="{upid}-{api}-response-profile"]
===== Retrieving Profiling Results
Profiling results are retrieved from a `SearchResponse` using the `getProfileResults()` method. This
Profiling results are retrieved from a +{response}+ using the `getProfileResults()` method. This
method returns a `Map` containing a `ProfileShardResult` object for every shard involved in the
`SearchRequest` execution. `ProfileShardResult` objects are stored in the `Map` using a key that uniquely
+{request}+ execution. `ProfileShardResult` objects are stored in the `Map` using a key that uniquely
identifies the shard the profile result corresponds to.
Here is some sample code that shows how to iterate over all the profiling results of every shard:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-profiling-get]
include-tagged::{doc-tests-file}[{api}-request-profiling-get]
--------------------------------------------------
<1> Retrieve the `Map` of `ProfileShardResult` from the `SearchResponse`
<1> Retrieve the `Map` of `ProfileShardResult` from the +{response}+
<2> Profiling results can be retrieved by a shard's key if the key is known; otherwise it might be simpler
to iterate over all the profiling results
<3> Retrieve the key that identifies which shard the `ProfileShardResult` belongs to
@ -446,7 +414,7 @@ executed against the underlying Lucene index:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-profiling-queries]
include-tagged::{doc-tests-file}[{api}-request-profiling-queries]
--------------------------------------------------
<1> Retrieve the list of `QueryProfileShardResult`
<2> Iterate over each `QueryProfileShardResult`
@ -456,7 +424,7 @@ Each `QueryProfileShardResult` gives access to the detailed query tree execution
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-profiling-queries-results]
include-tagged::{doc-tests-file}[{api}-request-profiling-queries-results]
--------------------------------------------------
<1> Iterate over the profile results
<2> Retrieve the name of the Lucene query
@ -470,7 +438,7 @@ The `QueryProfileShardResult` also gives access to the profiling information for
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-profiling-queries-collectors]
include-tagged::{doc-tests-file}[{api}-request-profiling-queries-collectors]
--------------------------------------------------
<1> Retrieve the profiling result of the Lucene collector
<2> Retrieve the name of the Lucene collector
@ -485,7 +453,7 @@ to the detailed aggregations tree execution:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SearchDocumentationIT.java[search-request-profiling-aggs]
include-tagged::{doc-tests-file}[{api}-request-profiling-aggs]
--------------------------------------------------
<1> Retrieve the `AggregationProfileShardResult`
<2> Iterate over the aggregation profile results


@ -0,0 +1,46 @@
[[java-rest-high-security-change-password]]
=== Change Password API
[[java-rest-high-security-change-password-execution]]
==== Execution
A user's password can be changed using the `security().changePassword()`
method:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SecurityDocumentationIT.java[change-password-execute]
--------------------------------------------------
[[java-rest-high-change-password-response]]
==== Response
The returned `EmptyResponse` does not contain any fields; receiving this
response indicates a successful request.
[[java-rest-high-x-pack-security-change-password-async]]
==== Asynchronous Execution
This request can be executed asynchronously:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SecurityDocumentationIT.java[change-password-execute-async]
--------------------------------------------------
<1> The `ChangePasswordRequest` to execute and the `ActionListener` to use when
the execution completes.
The asynchronous method does not block and returns immediately. Once the request
has completed the `ActionListener` is called back using the `onResponse` method
if the execution successfully completed or using the `onFailure` method if
it failed.
A typical listener for an `EmptyResponse` looks like:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests}/SecurityDocumentationIT.java[change-password-execute-listener]
--------------------------------------------------
<1> Called when the execution is successfully completed. The response is
provided as an argument.
<2> Called in case of failure. The raised exception is provided as an argument.


@ -2,22 +2,26 @@
== Document APIs
:upid: {mainid}-document
:doc-tests-file: {doc-tests}/CRUDDocumentationIT.java
The Java High Level REST Client supports the following Document APIs:
[[single-doc]]
Single document APIs::
* <<java-rest-high-document-index>>
* <<java-rest-high-document-get>>
* <<java-rest-high-document-delete>>
* <<java-rest-high-document-update>>
* <<{upid}-index>>
* <<{upid}-get>>
* <<{upid}-delete>>
* <<{upid}-update>>
[[multi-doc]]
Multi-document APIs::
* <<java-rest-high-document-bulk>>
* <<java-rest-high-document-multi-get>>
* <<java-rest-high-document-reindex>>
* <<java-rest-high-document-update-by-query>>
* <<java-rest-high-document-delete-by-query>>
* <<{upid}-bulk>>
* <<{upid}-multi-get>>
* <<{upid}-reindex>>
* <<{upid}-update-by-query>>
* <<{upid}-delete-by-query>>
* <<{upid}-rethrottle>>
include::document/index.asciidoc[]
include::document/get.asciidoc[]
@ -29,20 +33,24 @@ include::document/multi-get.asciidoc[]
include::document/reindex.asciidoc[]
include::document/update-by-query.asciidoc[]
include::document/delete-by-query.asciidoc[]
include::document/rethrottle.asciidoc[]
== Search APIs
:upid: {mainid}
:doc-tests-file: {doc-tests}/SearchDocumentationIT.java
The Java High Level REST Client supports the following Search APIs:
* <<java-rest-high-search>>
* <<java-rest-high-search-scroll>>
* <<java-rest-high-clear-scroll>>
* <<java-rest-high-search-template>>
* <<java-rest-high-multi-search-template>>
* <<java-rest-high-multi-search>>
* <<java-rest-high-field-caps>>
* <<java-rest-high-rank-eval>>
* <<java-rest-high-explain>>
* <<{upid}-search>>
* <<{upid}-search-scroll>>
* <<{upid}-clear-scroll>>
* <<{upid}-search-template>>
* <<{upid}-multi-search-template>>
* <<{upid}-multi-search>>
* <<{upid}-field-caps>>
* <<{upid}-rank-eval>>
* <<{upid}-explain>>
include::search/search.asciidoc[]
include::search/scroll.asciidoc[]
@ -135,6 +143,8 @@ The Java High Level REST Client supports the following Cluster APIs:
* <<java-rest-high-cluster-get-settings>>
* <<java-rest-high-cluster-health>>
:upid: {mainid}-cluster
:doc-tests-file: {doc-tests}/ClusterClientDocumentationIT.java
include::cluster/put_settings.asciidoc[]
include::cluster/get_settings.asciidoc[]
include::cluster/health.asciidoc[]
@ -223,6 +233,8 @@ The Java High Level REST Client supports the following Machine Learning APIs:
* <<java-rest-high-x-pack-ml-put-datafeed>>
* <<java-rest-high-x-pack-ml-get-datafeed>>
* <<java-rest-high-x-pack-ml-delete-datafeed>>
* <<java-rest-high-x-pack-ml-start-datafeed>>
* <<java-rest-high-x-pack-ml-stop-datafeed>>
* <<java-rest-high-x-pack-ml-forecast-job>>
* <<java-rest-high-x-pack-ml-delete-forecast>>
* <<java-rest-high-x-pack-ml-get-buckets>>
@ -245,6 +257,8 @@ include::ml/flush-job.asciidoc[]
include::ml/put-datafeed.asciidoc[]
include::ml/get-datafeed.asciidoc[]
include::ml/delete-datafeed.asciidoc[]
include::ml/start-datafeed.asciidoc[]
include::ml/stop-datafeed.asciidoc[]
include::ml/get-job-stats.asciidoc[]
include::ml/forecast-job.asciidoc[]
include::ml/delete-forecast.asciidoc[]
@ -271,8 +285,10 @@ include::migration/get-assistance.asciidoc[]
The Java High Level REST Client supports the following Rollup APIs:
* <<java-rest-high-x-pack-rollup-put-job>>
* <<java-rest-high-x-pack-rollup-get-job>>
include::rollup/put_job.asciidoc[]
include::rollup/get_job.asciidoc[]
== Security APIs
@ -281,10 +297,12 @@ The Java High Level REST Client supports the following Security APIs:
* <<java-rest-high-security-put-user>>
* <<java-rest-high-security-enable-user>>
* <<java-rest-high-security-disable-user>>
* <<java-rest-high-security-change-password>>
include::security/put-user.asciidoc[]
include::security/enable-user.asciidoc[]
include::security/disable-user.asciidoc[]
include::security/change-password.asciidoc[]
== Watcher APIs
@ -303,3 +321,15 @@ The Java High Level REST Client supports the following Graph APIs:
* <<java-rest-high-x-pack-graph-explore>>
include::graph/explore.asciidoc[]
////
Clear the attributes that we use to document the APIs included above so they
don't leak into the rest of the documentation.
////
--
:api!:
:request!:
:response!:
:doc-tests-file!:
:upid!:
--


@ -1,3 +1,4 @@
[[java-rest-low-config]]
== Common configuration
As explained in <<java-rest-low-usage-initialization>>, the `RestClientBuilder`


@ -56,8 +56,8 @@ releases 2.0 and later do not support rivers.
* https://github.com/jprante/elasticsearch-jdbc[JDBC importer]:
The Java Database Connection (JDBC) importer allows to fetch data from JDBC sources for indexing into Elasticsearch (by Jörg Prante)
* https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer/tree/branch2.0[Kafka Standalone Consumer(Indexer)]:
Kafka Standalone Consumer [Indexer] will read messages from Kafka in batches, processes(as implemented) and bulk-indexes them into Elasticsearch. Flexible and scalable. More documentation in above GitHub repo's Wiki.(Please use branch 2.0)!
* https://github.com/BigDataDevs/kafka-elasticsearch-consumer[Kafka Standalone Consumer (Indexer)]:
The Kafka Standalone Consumer (Indexer) reads messages from Kafka in batches, processes them (as implemented), and bulk-indexes them into Elasticsearch. Flexible and scalable. More documentation is available in the above GitHub repo's wiki.
* https://github.com/ozlerhakan/mongolastic[Mongolastic]:
A tool that clones data from Elasticsearch to MongoDB and vice versa


@ -615,7 +615,7 @@ GET /_search
"aggs" : {
"genres" : {
"terms" : {
"field" : "gender",
"field" : "genre",
"script" : {
"source" : "'Genre: ' +_value",
"lang" : "painless"


@ -59,7 +59,7 @@ PUT my_index
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"type": "custom",
"type": "custom", <1>
"tokenizer": "standard",
"char_filter": [
"html_strip"
@ -82,6 +82,11 @@ POST my_index/_analyze
--------------------------------
// CONSOLE
<1> Setting `type` to `custom` tells Elasticsearch that we are defining a custom analyzer.
Compare this to how <<configuring-analyzers,built-in analyzers can be configured>>:
`type` will be set to the name of the built-in analyzer, like
<<analysis-standard-analyzer,`standard`>> or <<analysis-simple-analyzer,`simple`>>.
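For comparison, here is a sketch of configuring a built-in analyzer in the same way (the index name and the `stopwords` value are illustrative):

[source,js]
--------------------------------
PUT my_index2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "std_english": {
          "type": "standard", <1>
          "stopwords": "_english_"
        }
      }
    }
  }
}
--------------------------------
// CONSOLE
<1> Here `type` names the built-in `standard` analyzer rather than `custom`, and the remaining parameters configure that analyzer.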
/////////////////////
[source,js]


@ -1,5 +1,5 @@
[[api-conventions]]
= API Conventions
= API conventions
[partintro]
--


@ -16,6 +16,7 @@ GET _tasks?nodes=nodeId1,nodeId2 <2>
GET _tasks?nodes=nodeId1,nodeId2&actions=cluster:* <3>
--------------------------------------------------
// CONSOLE
// TEST[skip:No tasks to retrieve]
<1> Retrieves all tasks currently running on all nodes in the cluster.
<2> Retrieves all tasks running on nodes `nodeId1` and `nodeId2`. See <<cluster-nodes>> for more info about how to select individual nodes.
@ -57,31 +58,29 @@ The result will look similar to the following:
}
}
--------------------------------------------------
// NOTCONSOLE
// We can't test tasks output
// TESTRESPONSE
It is also possible to retrieve information for a particular task:
It is also possible to retrieve information for a particular task. The following
example retrieves information about task `oTUltX4IQMOUUVeiohTt8A:124`:
[source,js]
--------------------------------------------------
GET _tasks/task_id <1>
GET _tasks/oTUltX4IQMOUUVeiohTt8A:124
--------------------------------------------------
// CONSOLE
// TEST[s/task_id/node_id:1/]
// TEST[catch:missing]
<1> This will return a 404 if the task isn't found.
If the task isn't found, the API returns a 404.
Or to retrieve all children of a particular task:
To retrieve all children of a particular task:
[source,js]
--------------------------------------------------
GET _tasks?parent_task_id=parent_task_id <1>
GET _tasks?parent_task_id=oTUltX4IQMOUUVeiohTt8A:123
--------------------------------------------------
// CONSOLE
// TEST[s/=parent_task_id/=node_id:1/]
<1> This won't return a 404 if the parent isn't found.
If the parent isn't found, the API does not return a 404.
You can also use the `detailed` request parameter to get more information about
the running tasks. This is useful for telling one task from another but is more
@ -93,8 +92,9 @@ request parameter:
GET _tasks?actions=*search&detailed
--------------------------------------------------
// CONSOLE
// TEST[skip:No tasks to retrieve]
might look like:
The results might look like:
[source,js]
--------------------------------------------------
@ -121,8 +121,7 @@ might look like:
}
}
--------------------------------------------------
// NOTCONSOLE
// We can't test tasks output
// TESTRESPONSE
The new `description` field contains human readable text that identifies the
particular request that the task is performing such as identifying the search
@ -167,14 +166,14 @@ GET _cat/tasks?detailed
[[task-cancellation]]
=== Task Cancellation
If a long-running task supports cancellation, it can be cancelled by the following command:
If a long-running task supports cancellation, it can be cancelled with the cancel
tasks API. The following example cancels task `oTUltX4IQMOUUVeiohTt8A:12345`:
[source,js]
--------------------------------------------------
POST _tasks/node_id:task_id/_cancel
POST _tasks/oTUltX4IQMOUUVeiohTt8A:12345/_cancel
--------------------------------------------------
// CONSOLE
// TEST[s/task_id/1/]
The task cancellation command supports the same task selection parameters as the list tasks command, so multiple tasks
can be cancelled at the same time. For example, the following command will cancel all reindex tasks running on the
@ -217,7 +216,7 @@the client that started them:
--------------------------------------------------
curl -i -H "X-Opaque-Id: 123456" "http://localhost:9200/_tasks?group_by=parents"
--------------------------------------------------
// NOTCONSOLE
//NOTCONSOLE
The result will look similar to the following:
@ -260,8 +259,7 @@ content-length: 831
}
}
--------------------------------------------------
// NOTCONSOLE
//NOTCONSOLE
<1> id as a part of the response header
<2> id for the task that was initiated by the REST request
<3> the child task of the task initiated by the REST request


@ -108,9 +108,11 @@ The order of precedence for cluster settings is:
1. transient cluster settings
2. persistent cluster settings
3. settings in the `elasticsearch.yml` configuration file.
It's best to use the `elasticsearch.yml` file only
for local configurations, and set all cluster-wide settings with the
`settings` API.
It's best to set all cluster-wide settings with the `settings` API and use the
`elasticsearch.yml` file only for local configurations. This way you can be sure that
the setting is the same on all nodes. If, on the other hand, you accidentally define
different settings on different nodes via the configuration file, such discrepancies
are very difficult to notice.
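For example, a cluster-wide setting can be applied to every node at once with the cluster settings API (a sketch; the recovery bandwidth value shown is illustrative):

[source,js]
--------------------------------
PUT /_cluster/settings
{
  "persistent" : {
    "indices.recovery.max_bytes_per_sec" : "50mb"
  }
}
--------------------------------
// CONSOLE

Because the setting is stored by the cluster rather than in each node's `elasticsearch.yml`, every node observes the same value.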
You can find the list of settings that you can dynamically update in <<modules,Modules>>.

Some files were not shown because too many files have changed in this diff Show More