Merge branch 'master' into ccr
* master:
  Mute ML upgrade test (#30458)
  Stop forking javac (#30462)
  Client: Deprecate many argument performRequest (#30315)
  Docs: Use task_id in examples of tasks (#30436)
  Security: Rename IndexLifecycleManager to SecurityIndexManager (#30442)
  [Docs] Fix typo in cardinality-aggregation.asciidoc (#30434)
  Avoid NPE in `more_like_this` when field has zero tokens (#30365)
  Build: Switch to building javadoc with html5 (#30440)
  Add a quick tour of the project to CONTRIBUTING (#30187)
  Reindex: Use request flavored methods (#30317)
  Silence SplitIndexIT.testSplitIndexPrimaryTerm test failure. (#30432)
  Auto-expand replicas when adding or removing nodes (#30423)
  Docs: fix changelog merge
  Fix line length violation in cache tests
  Add stricter geohash parsing (#30376)
  Add failing test for core cache deadlock
  [DOCS] convert forcemerge snippet
  Update forcemerge.asciidoc (#30113)
  Added zentity to the list of API extension plugins (#29143)
  Fix the search request default operation behavior doc (#29302) (#29405)
commit 5d99157236
@@ -209,6 +209,95 @@ Before submitting your changes, run the test suite to make sure that nothing is
 ./gradlew check
 ```
+
+### Project layout
+
+This repository is split into many top level directories. The most important
+ones are:
+
+#### `docs`
+Documentation for the project.
+
+#### `distribution`
+Builds our tar and zip archives and our rpm and deb packages.
+
+#### `libs`
+Libraries used to build other parts of the project. These are meant to be
+internal rather than general purpose. We have no plans to
+[semver](https://semver.org/) their APIs or accept feature requests for them.
+We publish them to maven central because they are dependencies of our plugin
+test framework, high level rest client, and jdbc driver but they really aren't
+general purpose enough to *belong* in maven central. We're still working out
+what to do here.
+
+#### `modules`
+Features that are shipped with Elasticsearch by default but are not built into
+the server. We typically separate features from the server because they require
+permissions that we don't believe *all* of Elasticsearch should have or because
+they depend on libraries that we don't believe *all* of Elasticsearch should
+depend on.
+
+For example, reindex requires the `connect` permission so it can perform
+reindex-from-remote, but we don't believe that *all* of Elasticsearch should
+have the `connect` permission. For another example, Painless is implemented
+using antlr4 and asm, and we don't believe that *all* of Elasticsearch should
+have access to them.
+
+#### `plugins`
+Officially supported plugins to Elasticsearch. We decide that a feature should
+be a plugin rather than shipped as a module when we feel that it is only
+important to a subset of users, especially if it requires extra dependencies.
+
+The canonical example of this is the ICU analysis plugin. It is important for
+folks who want the fairly language neutral ICU analyzer, but the library needed
+to implement the analyzer is 11MB, so we don't ship it with Elasticsearch by
+default.
+
+Another example is the `discovery-gce` plugin. It is *vital* to folks running
+in [GCP](https://cloud.google.com/) but useless otherwise, and it depends on a
+dozen extra jars.
+
+#### `qa`
+Honestly this is kind of in flux and we're not 100% sure where we'll end up.
+Right now the directory contains
+* Tests that require multiple modules or plugins to work
+* Tests that form a cluster made up of multiple versions of Elasticsearch like
+full cluster restart, rolling restarts, and mixed version tests
+* Tests that test the Elasticsearch clients in "interesting" places like the
+`wildfly` project.
+* Tests that test Elasticsearch in funny configurations like with ingest
+disabled
+* Tests that need to do strange things like install plugins that throw
+uncaught `Throwable`s or add a shutdown hook
+But we're not convinced that all of these things *belong* in the qa directory.
+We're fairly sure that tests that require multiple modules or plugins to work
+should just pick a "home" plugin. We're fairly sure that the multi-version
+tests *do* belong in qa. Beyond that, we're not sure. If you want to add a new
+qa project, open a PR and be ready to discuss options.
+
+#### `server`
+The server component of Elasticsearch that contains all of the modules and
+plugins. Right now things like the high level rest client depend on the server,
+but we'd like to fix that in the future.
+
+#### `test`
+Our test framework and test fixtures. We use the test framework for testing the
+server, the modules and plugins, and pretty much everything else. We publish
+the test framework so folks who develop Elasticsearch plugins can use it to
+test the plugins. The test fixtures are external processes that we start before
+running specific tests that rely on them.
+
+For example, we have an hdfs test that uses mini-hdfs to test our
+repository-hdfs plugin.
+
+#### `x-pack`
+Commercially licensed code that integrates with the rest of Elasticsearch. The
+`docs` subdirectory functions just like the top level `docs` subdirectory and
+the `qa` subdirectory functions just like the top level `qa` subdirectory. The
+`plugin` subdirectory contains the x-pack module which runs inside the
+Elasticsearch process. The `transport-client` subdirectory contains extensions
+to Elasticsearch's standard transport client to work properly with x-pack.
+
+
 Contributing as part of a class
 -------------------------------
 In general Elasticsearch is happy to accept contributions that were created as

@@ -497,10 +497,15 @@ class BuildPlugin implements Plugin<Project> {
         project.afterEvaluate {
             project.tasks.withType(JavaCompile) {
                 final JavaVersion targetCompatibilityVersion = JavaVersion.toVersion(it.targetCompatibility)
-                // we fork because compiling lots of different classes in a shared jvm can eventually trigger GC overhead limitations
-                options.fork = true
-                options.forkOptions.javaHome = new File(project.compilerJavaHome)
-                options.forkOptions.memoryMaximumSize = "512m"
+                final compilerJavaHomeFile = new File(project.compilerJavaHome)
+                // we only fork if the Gradle JDK is not the same as the compiler JDK
+                if (compilerJavaHomeFile.canonicalPath == Jvm.current().javaHome.canonicalPath) {
+                    options.fork = false
+                } else {
+                    options.fork = true
+                    options.forkOptions.javaHome = compilerJavaHomeFile
+                    options.forkOptions.memoryMaximumSize = "512m"
+                }
                 if (targetCompatibilityVersion == JavaVersion.VERSION_1_8) {
                     // compile with compact 3 profile by default
                     // NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
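The change above forks `javac` only when the configured compiler JDK differs from the JVM running Gradle. A minimal standalone sketch of that comparison, assuming a hypothetical `compilerJavaHome` path (the real build reads it from project properties):

```java
import java.io.File;
import java.io.IOException;

// Hedged sketch of the "fork only when the JDKs differ" decision above.
// The compiler home below is a made-up path, not the build's real config.
public class ForkDecision {
    public static void main(String[] args) throws IOException {
        File compilerJavaHome = new File("/usr/lib/jvm/jdk-10");          // assumption
        File gradleJavaHome = new File(System.getProperty("java.home"));  // JVM running the build
        // Canonical paths resolve symlinks, so two spellings of one JDK compare equal.
        boolean fork = !compilerJavaHome.getCanonicalPath()
                .equals(gradleJavaHome.getCanonicalPath());
        System.out.println("fork javac: " + fork);
    }
}
```

Skipping the fork when both homes resolve to the same JDK avoids spawning an extra JVM per compile task, which is what "Stop forking javac (#30462)" is after.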
@@ -549,6 +554,11 @@ class BuildPlugin implements Plugin<Project> {
             javadoc.classpath = javadoc.getClasspath().filter { f ->
                 return classes.contains(f) == false
             }
+            /*
+             * Generate docs using html5 to suppress a warning from `javadoc`
+             * that the default will change to html5 in the future.
+             */
+            javadoc.options.addBooleanOption('html5', true)
         }
         configureJavadocJar(project)
     }
@@ -210,7 +210,9 @@ public class RestClient implements Closeable {
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Header... headers) throws IOException {
         Request request = new Request(method, endpoint);
         request.setHeaders(headers);
@@ -229,7 +231,9 @@ public class RestClient implements Closeable {
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Map<String, String> params, Header... headers) throws IOException {
         Request request = new Request(method, endpoint);
         addParameters(request, params);
@@ -252,7 +256,9 @@ public class RestClient implements Closeable {
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Map<String, String> params,
                                    HttpEntity entity, Header... headers) throws IOException {
         Request request = new Request(method, endpoint);
@@ -289,7 +295,9 @@ public class RestClient implements Closeable {
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Map<String, String> params,
                                    HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
                                    Header... headers) throws IOException {
@@ -310,7 +318,9 @@ public class RestClient implements Closeable {
      * @param endpoint the path of the request (without host and port)
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, ResponseListener responseListener, Header... headers) {
         Request request;
         try {
@@ -333,7 +343,9 @@ public class RestClient implements Closeable {
      * @param params the query_string parameters
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, Map<String, String> params,
                                     ResponseListener responseListener, Header... headers) {
         Request request;
@@ -361,7 +373,9 @@ public class RestClient implements Closeable {
      * @param entity the body of the request, null if not applicable
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, Map<String, String> params,
                                     HttpEntity entity, ResponseListener responseListener, Header... headers) {
         Request request;
@@ -394,7 +408,9 @@ public class RestClient implements Closeable {
      * connection on the client side.
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, Map<String, String> params,
                                     HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
                                     ResponseListener responseListener, Header... headers) {
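Migrating a caller off one of the deprecated overloads is mechanical. A hedged sketch, with a made-up host, endpoint, and parameter:

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Hedged sketch of moving from a multi-argument overload to the Request
// flavored method; host and endpoint below are illustrative only.
public class MigrateToRequest {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Before: client.performRequest("GET", "/_cluster/health", singletonMap("wait_for_status", "yellow"));
            Request request = new Request("GET", "/_cluster/health");
            request.addParameter("wait_for_status", "yellow");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```

Collapsing method, endpoint, parameters, entity, and headers into one `Request` object avoids the combinatorial explosion of overloads that this deprecation retires.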
@@ -104,6 +104,8 @@ ones that the user is authorized to access in case field level security is enabled:
 [float]
 === Bug Fixes
 
+Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])
+
 Fixed prerelease version of elasticsearch in the `deb` package to sort before GA versions
 ({pull}29000[#29000])
 
@@ -137,8 +139,11 @@ coming[6.4.0]
 //[float]
 //=== Breaking Java Changes
 
-//[float]
-//=== Deprecations
+[float]
+=== Deprecations
+
+Deprecated multi-argument versions of the request methods in the RestClient.
+Prefer the "Request" object flavored methods. ({pull}30315[#30315])
 
 [float]
 === New Features
@@ -155,8 +160,8 @@ analysis module. ({pull}30397[#30397])
 
 {ref-64}/breaking_64_api_changes.html#copy-source-settings-on-resize[Allow copying source settings on index resize operations] ({pull}30255[#30255])
 
-Added new "Request" object flavored request methods. Prefer these instead of the
-multi-argument versions. ({pull}29623[#29623])
+Added new "Request" object flavored request methods in the RestClient. Prefer
+these instead of the multi-argument versions. ({pull}29623[#29623])
 
 The cluster state listener to decide if watcher should be
 stopped/started/paused now runs far less code in an executor but is more
@@ -169,6 +174,8 @@ Added put index template API to the high level rest client ({pull}30400[#30400])
 [float]
 === Bug Fixes
 
+Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])
+
 Do not ignore request analysis/similarity settings on index resize operations when the source index already contains such settings ({pull}30216[#30216])
 
 Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])
@@ -177,10 +184,18 @@ Machine Learning::
 
 * Account for gaps in data counts after job is reopened ({pull}30294[#30294])
 
+Add validation that geohashes are not empty and don't contain unsupported characters ({pull}30376[#30376])
+
 Rollup::
 * Validate timezone in range queries to ensure they match the selected job when
 searching ({pull}30338[#30338])
 
+
+Allocation::
+
+Auto-expand replicas when adding or removing nodes to prevent shard copies from
+being dropped and resynced when a data node rejoins the cluster ({pull}30423[#30423])
+
 //[float]
 //=== Regressions
 
@@ -20,6 +20,9 @@ A number of plugins have been contributed by our community:
 * https://github.com/YannBrrd/elasticsearch-entity-resolution[Entity Resolution Plugin]:
   Uses http://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud)
 
+* https://github.com/zentity-io/zentity[Entity Resolution Plugin] (https://zentity.io[zentity]):
+  Real-time entity resolution with pure Elasticsearch (by Dave Moore)
+
 * https://github.com/NLPchina/elasticsearch-sql/[SQL language Plugin]:
   Allows Elasticsearch to be queried with SQL (by nlpcn)
 
@@ -63,7 +63,7 @@ POST /sales/_search?size=0
 defines a unique count below which counts are expected to be close to
 accurate. Above this value, counts might become a bit more fuzzy. The maximum
 supported value is 40000, thresholds above this number will have the same
-effect as a threshold of 40000. The default values is +3000+.
+effect as a threshold of 40000. The default value is +3000+.
 
 ==== Counts are approximate
 
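A hedged Java sketch of setting the same threshold through the aggregation builders (the aggregation and field names are made up, and the builder's package reflects the 6.x layout):

```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregationBuilder;

// Hedged sketch; "type_count" and "type" are hypothetical names.
public class CardinalityExample {
    public static CardinalityAggregationBuilder typeCount() {
        return AggregationBuilders.cardinality("type_count")
            .field("type")
            .precisionThreshold(3000); // thresholds above 40000 behave like 40000
    }
}
```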
@@ -64,9 +64,10 @@ It is also possible to retrieve information for a particular task:
 
 [source,js]
 --------------------------------------------------
-GET _tasks/task_id:1 <1>
+GET _tasks/task_id <1>
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 // TEST[catch:missing]
 
 <1> This will return a 404 if the task isn't found.
@@ -75,9 +76,10 @@ Or to retrieve all children of a particular task:
 
 [source,js]
 --------------------------------------------------
-GET _tasks?parent_task_id=parentTaskId:1 <1>
+GET _tasks?parent_task_id=parent_task_id <1>
 --------------------------------------------------
 // CONSOLE
+// TEST[s/=parent_task_id/=node_id:1/]
 
 <1> This won't return a 404 if the parent isn't found.
 
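The same lookup from Java through the low-level client, as a hedged sketch; the task id is a made-up `node_id:task_number` value:

```java
import java.io.IOException;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Hedged sketch; "oTUltX4IQMOUUVeiohTt8A:124" is a hypothetical task id.
public class GetTask {
    static Response getTask(RestClient client) throws IOException {
        Request request = new Request("GET", "/_tasks/oTUltX4IQMOUUVeiohTt8A:124");
        // Throws a ResponseException carrying the 404 if the task isn't found.
        return client.performRequest(request);
    }
}
```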
@@ -357,9 +357,10 @@ With the task id you can look up the task directly:
 
 [source,js]
 --------------------------------------------------
-GET /_tasks/taskId:1
+GET /_tasks/task_id
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 // TEST[catch:missing]
 
 The advantage of this API is that it integrates with `wait_for_completion=false`
@@ -378,8 +379,9 @@ Any Delete By Query can be canceled using the <<tasks,Task Cancel API>>:
 
 [source,js]
 --------------------------------------------------
-POST _tasks/task_id:1/_cancel
+POST _tasks/task_id/_cancel
 --------------------------------------------------
+// TEST[s/task_id/node_id:1/]
 // CONSOLE
 
 The `task_id` can be found using the tasks API above.
@@ -397,8 +399,9 @@ using the `_rethrottle` API:
 
 [source,js]
 --------------------------------------------------
-POST _delete_by_query/task_id:1/_rethrottle?requests_per_second=-1
+POST _delete_by_query/task_id/_rethrottle?requests_per_second=-1
 --------------------------------------------------
+// TEST[s/task_id/node_id:1/]
 // CONSOLE
 
 The `task_id` can be found using the tasks API above.
 
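Driving `_rethrottle` from Java is a one-liner with the Request API. A hedged sketch, with a hypothetical task id and a caller-supplied low-level client:

```java
import java.io.IOException;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// Hedged sketch; the task id is hypothetical.
public class RethrottleDeleteByQuery {
    static void removeThrottle(RestClient client) throws IOException {
        Request rethrottle = new Request("POST", "/_delete_by_query/oTUltX4IQMOUUVeiohTt8A:124/_rethrottle");
        rethrottle.addParameter("requests_per_second", "-1"); // -1 disables throttling
        client.performRequest(rethrottle);
    }
}
```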
@@ -740,9 +740,10 @@ With the task id you can look up the task directly:
 
 [source,js]
 --------------------------------------------------
-GET /_tasks/taskId:1
+GET /_tasks/task_id
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 // TEST[catch:missing]
 
 The advantage of this API is that it integrates with `wait_for_completion=false`
@@ -761,9 +762,10 @@ Any Reindex can be canceled using the <<tasks,Task Cancel API>>:
 
 [source,js]
 --------------------------------------------------
-POST _tasks/task_id:1/_cancel
+POST _tasks/task_id/_cancel
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 
 The `task_id` can be found using the Tasks API.
 
@@ -780,9 +782,10 @@ the `_rethrottle` API:
 
 [source,js]
 --------------------------------------------------
-POST _reindex/task_id:1/_rethrottle?requests_per_second=-1
+POST _reindex/task_id/_rethrottle?requests_per_second=-1
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 
 The `task_id` can be found using the Tasks API above.
 
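Cancellation follows the same shape; a hedged sketch with a hypothetical task id:

```java
import java.io.IOException;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// Hedged sketch; the task id is hypothetical.
public class CancelReindex {
    static void cancel(RestClient client) throws IOException {
        Request cancel = new Request("POST", "/_tasks/oTUltX4IQMOUUVeiohTt8A:124/_cancel");
        client.performRequest(cancel);
    }
}
```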
@@ -415,9 +415,10 @@ With the task id you can look up the task directly:
 
 [source,js]
 --------------------------------------------------
-GET /_tasks/taskId:1
+GET /_tasks/task_id
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 // TEST[catch:missing]
 
 The advantage of this API is that it integrates with `wait_for_completion=false`
@@ -436,9 +437,10 @@ Any Update By Query can be canceled using the <<tasks,Task Cancel API>>:
 
 [source,js]
 --------------------------------------------------
-POST _tasks/task_id:1/_cancel
+POST _tasks/task_id/_cancel
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 
 The `task_id` can be found using the tasks API above.
 
@@ -455,9 +457,10 @@ using the `_rethrottle` API:
 
 [source,js]
 --------------------------------------------------
-POST _update_by_query/task_id:1/_rethrottle?requests_per_second=-1
+POST _update_by_query/task_id/_rethrottle?requests_per_second=-1
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 
 The `task_id` can be found using the tasks API above.
 
@@ -38,6 +38,13 @@ deletes. Defaults to `false`. Note that this won't override the
 `flush`:: Should a flush be performed after the forced merge. Defaults to
 `true`.
 
+[source,js]
+--------------------------------------------------
+POST /kimchy/_forcemerge?only_expunge_deletes=false&max_num_segments=100&flush=true
+--------------------------------------------------
+// CONSOLE
+// TEST[s/^/PUT kimchy\n/]
+
 [float]
 [[forcemerge-multi-index]]
 === Multi Index
 
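The same call from Java, as a hedged sketch; `kimchy` is the example index from the snippet above:

```java
import java.io.IOException;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

// Hedged sketch of the force-merge request with the same query parameters.
public class ForceMergeExample {
    static void forceMerge(RestClient client) throws IOException {
        Request request = new Request("POST", "/kimchy/_forcemerge");
        request.addParameter("max_num_segments", "100");
        request.addParameter("only_expunge_deletes", "false");
        request.addParameter("flush", "true");
        client.performRequest(request);
    }
}
```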
@@ -2,7 +2,8 @@
 === Preference
 
 Controls a `preference` of which shard copies on which to execute the
-search. By default, the operation is randomized among the available shard copies.
+search. By default, the operation is randomized among the available shard
+copies, unless allocation awareness is used.
 
 The `preference` is a query string parameter which can be set to:
 
|
|||
|
||||
package org.elasticsearch.index.reindex.remote;
|
||||
|
||||
import org.apache.http.HttpEntity;
|
||||
import org.apache.http.entity.ByteArrayEntity;
|
||||
import org.apache.http.entity.ContentType;
|
||||
import org.apache.http.entity.StringEntity;
|
||||
|
@ -27,6 +26,7 @@ import org.apache.lucene.util.BytesRef;
|
|||
import org.elasticsearch.ElasticsearchException;
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.action.search.SearchRequest;
|
||||
import org.elasticsearch.client.Request;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.unit.TimeValue;
|
||||
|
@ -40,33 +40,27 @@ import org.elasticsearch.search.sort.FieldSortBuilder;
|
|||
import org.elasticsearch.search.sort.SortBuilder;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.HashMap;
|
||||
import java.util.Map;
|
||||
|
||||
import static java.util.Collections.singletonMap;
|
||||
import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;
|
||||
|
||||
/**
|
||||
* Builds requests for remote version of Elasticsearch. Note that unlike most of the
|
||||
* rest of Elasticsearch this file needs to be compatible with very old versions of
|
||||
* Elasticsearch. Thus is often uses identifiers for versions like {@code 2000099}
|
||||
* Elasticsearch. Thus it often uses identifiers for versions like {@code 2000099}
|
||||
* for {@code 2.0.0-alpha1}. Do not drop support for features from this file just
|
||||
* because the version constants have been removed.
|
||||
*/
|
||||
final class RemoteRequestBuilders {
|
||||
private RemoteRequestBuilders() {}
|
||||
|
||||
static String initialSearchPath(SearchRequest searchRequest) {
|
||||
static Request initialSearch(SearchRequest searchRequest, BytesReference query, Version remoteVersion) {
|
||||
// It is nasty to build paths with StringBuilder but we'll be careful....
|
||||
StringBuilder path = new StringBuilder("/");
|
||||
addIndexesOrTypes(path, "Index", searchRequest.indices());
|
||||
addIndexesOrTypes(path, "Type", searchRequest.types());
|
||||
path.append("_search");
|
||||
return path.toString();
|
||||
}
|
||||
Request request = new Request("POST", path.toString());
|
||||
|
||||
static Map<String, String> initialSearchParams(SearchRequest searchRequest, Version remoteVersion) {
|
||||
Map<String, String> params = new HashMap<>();
|
||||
if (searchRequest.scroll() != null) {
|
||||
TimeValue keepAlive = searchRequest.scroll().keepAlive();
|
||||
if (remoteVersion.before(Version.V_5_0_0)) {
|
||||
|
@ -75,16 +69,16 @@ final class RemoteRequestBuilders {
|
|||
* timeout seems safer than less. */
|
||||
keepAlive = timeValueMillis((long) Math.ceil(keepAlive.millisFrac()));
|
||||
}
|
||||
params.put("scroll", keepAlive.getStringRep());
|
||||
request.addParameter("scroll", keepAlive.getStringRep());
|
||||
}
|
||||
params.put("size", Integer.toString(searchRequest.source().size()));
|
||||
request.addParameter("size", Integer.toString(searchRequest.source().size()));
|
||||
if (searchRequest.source().version() == null || searchRequest.source().version() == true) {
|
||||
/*
|
||||
* Passing `null` here just add the `version` request parameter
|
||||
* without any value. This way of requesting the version works
|
||||
* for all supported versions of Elasticsearch.
|
||||
*/
|
||||
params.put("version", null);
|
||||
request.addParameter("version", null);
|
||||
}
|
||||
if (searchRequest.source().sorts() != null) {
|
||||
boolean useScan = false;
|
||||
|
@ -101,13 +95,13 @@ final class RemoteRequestBuilders {
|
|||
}
|
||||
}
|
||||
if (useScan) {
|
||||
params.put("search_type", "scan");
|
||||
request.addParameter("search_type", "scan");
|
||||
} else {
|
||||
StringBuilder sorts = new StringBuilder(sortToUri(searchRequest.source().sorts().get(0)));
|
||||
for (int i = 1; i < searchRequest.source().sorts().size(); i++) {
|
||||
sorts.append(',').append(sortToUri(searchRequest.source().sorts().get(i)));
|
||||
}
|
||||
params.put("sort", sorts.toString());
|
||||
request.addParameter("sort", sorts.toString());
|
||||
}
|
||||
}
|
||||
if (remoteVersion.before(Version.fromId(2000099))) {
|
||||
|
@ -126,12 +120,9 @@ final class RemoteRequestBuilders {
|
|||
fields.append(',').append(searchRequest.source().storedFields().fieldNames().get(i));
|
||||
}
|
||||
String storedFieldsParamName = remoteVersion.before(Version.V_5_0_0_alpha4) ? "fields" : "stored_fields";
|
||||
params.put(storedFieldsParamName, fields.toString());
|
||||
}
|
||||
return params;
|
||||
request.addParameter(storedFieldsParamName, fields.toString());
|
||||
}
|
||||
|
||||
static HttpEntity initialSearchEntity(SearchRequest searchRequest, BytesReference query, Version remoteVersion) {
|
||||
// EMPTY is safe here because we're not calling namedObject
|
||||
try (XContentBuilder entity = JsonXContent.contentBuilder();
|
||||
XContentParser queryParser = XContentHelper
|
||||
|
@ -139,7 +130,8 @@ final class RemoteRequestBuilders {
|
|||
entity.startObject();
|
||||
|
||||
entity.field("query"); {
|
||||
/* We're intentionally a bit paranoid here - copying the query as xcontent rather than writing a raw field. We don't want
|
||||
/* We're intentionally a bit paranoid here - copying the query
|
||||
* as xcontent rather than writing a raw field. We don't want
|
||||
* poorly written queries to escape. Ever. */
|
||||
entity.copyCurrentStructure(queryParser);
|
||||
XContentParser.Token shouldBeEof = queryParser.nextToken();
|
||||
|
@ -160,10 +152,11 @@ final class RemoteRequestBuilders {
|
|||
|
||||
entity.endObject();
|
||||
BytesRef bytes = BytesReference.bytes(entity).toBytesRef();
|
||||
return new ByteArrayEntity(bytes.bytes, bytes.offset, bytes.length, ContentType.APPLICATION_JSON);
|
||||
request.setEntity(new ByteArrayEntity(bytes.bytes, bytes.offset, bytes.length, ContentType.APPLICATION_JSON));
|
||||
} catch (IOException e) {
|
||||
throw new ElasticsearchException("unexpected error building entity", e);
|
||||
}
|
||||
return request;
|
||||
}
|
||||
|
||||
private static void addIndexesOrTypes(StringBuilder path, String name, String[] indicesOrTypes) {
|
||||
|
@ -193,45 +186,50 @@ final class RemoteRequestBuilders {
|
|||
throw new IllegalArgumentException("Unsupported sort [" + sort + "]");
|
||||
}
|
||||
|
||||
static String scrollPath() {
|
||||
return "/_search/scroll";
|
||||
}
|
||||
static Request scroll(String scroll, TimeValue keepAlive, Version remoteVersion) {
|
||||
Request request = new Request("POST", "/_search/scroll");
|
||||
|
||||
static Map<String, String> scrollParams(TimeValue keepAlive, Version remoteVersion) {
|
||||
if (remoteVersion.before(Version.V_5_0_0)) {
|
||||
/* Versions of Elasticsearch before 5.0 couldn't parse nanos or micros
|
||||
* so we toss out that resolution, rounding up so we shouldn't end up
|
||||
* with 0s. */
|
||||
keepAlive = timeValueMillis((long) Math.ceil(keepAlive.millisFrac()));
|
||||
}
|
||||
return singletonMap("scroll", keepAlive.getStringRep());
|
||||
}
|
||||
request.addParameter("scroll", keepAlive.getStringRep());
|
||||
|
||||
static HttpEntity scrollEntity(String scroll, Version remoteVersion) {
|
||||
if (remoteVersion.before(Version.fromId(2000099))) {
|
||||
// Versions before 2.0.0 extract the plain scroll_id from the body
|
||||
return new StringEntity(scroll, ContentType.TEXT_PLAIN);
|
||||
request.setEntity(new StringEntity(scroll, ContentType.TEXT_PLAIN));
|
||||
return request;
|
||||
}
|
||||
|
||||
try (XContentBuilder entity = JsonXContent.contentBuilder()) {
|
||||
return new StringEntity(Strings.toString(entity.startObject()
|
||||
entity.startObject()
|
||||
.field("scroll_id", scroll)
|
||||
.endObject()), ContentType.APPLICATION_JSON);
|
||||
.endObject();
|
||||
request.setEntity(new StringEntity(Strings.toString(entity), ContentType.APPLICATION_JSON));
|
||||
} catch (IOException e) {
|
||||
throw new ElasticsearchException("failed to build scroll entity", e);
|
||||
}
|
||||
return request;
|
||||
}
|
||||
|
||||
static HttpEntity clearScrollEntity(String scroll, Version remoteVersion) {
|
||||
static Request clearScroll(String scroll, Version remoteVersion) {
|
||||
Request request = new Request("DELETE", "/_search/scroll");
|
||||
|
||||
if (remoteVersion.before(Version.fromId(2000099))) {
|
||||
// Versions before 2.0.0 extract the plain scroll_id from the body
|
||||
return new StringEntity(scroll, ContentType.TEXT_PLAIN);
|
||||
request.setEntity(new StringEntity(scroll, ContentType.TEXT_PLAIN));
|
||||
return request;
|
||||
}
|
||||
try (XContentBuilder entity = JsonXContent.contentBuilder()) {
|
||||
return new StringEntity(Strings.toString(entity.startObject()
|
||||
entity.startObject()
|
||||
.array("scroll_id", scroll)
|
||||
.endObject()), ContentType.APPLICATION_JSON);
|
||||
.endObject();
|
||||
request.setEntity(new StringEntity(Strings.toString(entity), ContentType.APPLICATION_JSON));
|
||||
} catch (IOException e) {
|
||||
throw new ElasticsearchException("failed to build clear scroll entity", e);
|
||||
}
|
||||
return request;
|
||||
}
|
||||
}
|
||||
|
|
|
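The refactor above collapses the old `path` / `params` / `entity` triple into a single `Request` value, so every builder now returns one object that carries the method, endpoint, query parameters, and body. A hedged sketch of what a caller now assembles; the index name, scroll value, and query are illustrative only:

```java
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.elasticsearch.client.Request;

// Hedged sketch: one Request carries endpoint, parameters, and body together.
// "src", the 5m scroll, and the match_all body are hypothetical values.
public class BuildSearchRequest {
    public static Request initialSearch() {
        Request request = new Request("POST", "/src/_search");
        request.addParameter("scroll", "5m");
        request.addParameter("size", "100");
        request.setEntity(new StringEntity("{\"query\":{\"match_all\":{}}}",
            ContentType.APPLICATION_JSON));
        return request;
    }
}
```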
@@ -30,22 +30,22 @@ import org.elasticsearch.ElasticsearchException;
 import org.elasticsearch.ElasticsearchStatusException;
 import org.elasticsearch.Version;
 import org.elasticsearch.action.bulk.BackoffPolicy;
-import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
-import org.elasticsearch.common.xcontent.XContentParseException;
-import org.elasticsearch.index.reindex.ScrollableHitSource;
 import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.client.Request;
 import org.elasticsearch.client.ResponseException;
 import org.elasticsearch.client.ResponseListener;
 import org.elasticsearch.client.RestClient;
 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.common.ParsingException;
 import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.util.concurrent.AbstractRunnable;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
+import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
 import org.elasticsearch.common.xcontent.NamedXContentRegistry;
 import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentParseException;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.rest.RestStatus;
 import org.elasticsearch.threadpool.ThreadPool;
@@ -53,20 +53,11 @@ import org.elasticsearch.threadpool.ThreadPool;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.Iterator;
-import java.util.Map;
 import java.util.function.BiFunction;
 import java.util.function.Consumer;
 
-import static java.util.Collections.emptyMap;
 import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;
 import static org.elasticsearch.common.unit.TimeValue.timeValueNanos;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.clearScrollEntity;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchEntity;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchParams;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchPath;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollEntity;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollParams;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollPath;
 import static org.elasticsearch.index.reindex.remote.RemoteResponseParsers.MAIN_ACTION_PARSER;
 import static org.elasticsearch.index.reindex.remote.RemoteResponseParsers.RESPONSE_PARSER;
 
@@ -88,13 +79,13 @@ public class RemoteScrollableHitSource extends ScrollableHitSource {
     protected void doStart(Consumer<? super Response> onResponse) {
         lookupRemoteVersion(version -> {
             remoteVersion = version;
-            execute("POST", initialSearchPath(searchRequest), initialSearchParams(searchRequest, version),
-                initialSearchEntity(searchRequest, query, remoteVersion), RESPONSE_PARSER, r -> onStartResponse(onResponse, r));
+            execute(RemoteRequestBuilders.initialSearch(searchRequest, query, remoteVersion),
+                RESPONSE_PARSER, r -> onStartResponse(onResponse, r));
         });
     }
 
     void lookupRemoteVersion(Consumer<Version> onVersion) {
-        execute("GET", "", emptyMap(), null, MAIN_ACTION_PARSER, onVersion);
+        execute(new Request("GET", ""), MAIN_ACTION_PARSER, onVersion);
     }
 
     private void onStartResponse(Consumer<? super Response> onResponse, Response response) {
@@ -108,15 +99,13 @@ public class RemoteScrollableHitSource extends ScrollableHitSource {
 
     @Override
     protected void doStartNextScroll(String scrollId, TimeValue extraKeepAlive, Consumer<? super Response> onResponse) {
-        Map<String, String> scrollParams = scrollParams(
-            timeValueNanos(searchRequest.scroll().keepAlive().nanos() + extraKeepAlive.nanos()),
-            remoteVersion);
-        execute("POST", scrollPath(), scrollParams, scrollEntity(scrollId, remoteVersion), RESPONSE_PARSER, onResponse);
+        TimeValue keepAlive = timeValueNanos(searchRequest.scroll().keepAlive().nanos() + extraKeepAlive.nanos());
+        execute(RemoteRequestBuilders.scroll(scrollId, keepAlive, remoteVersion), RESPONSE_PARSER, onResponse);
     }
 
     @Override
     protected void clearScroll(String scrollId, Runnable onCompletion) {
-        client.performRequestAsync("DELETE", scrollPath(), emptyMap(), clearScrollEntity(scrollId, remoteVersion), new ResponseListener() {
+        client.performRequestAsync(RemoteRequestBuilders.clearScroll(scrollId, remoteVersion), new ResponseListener() {
             @Override
             public void onSuccess(org.elasticsearch.client.Response response) {
                 logger.debug("Successfully cleared [{}]", scrollId);
@@ -162,7 +151,7 @@ public class RemoteScrollableHitSource extends ScrollableHitSource {
         });
     }
 
-    private <T> void execute(String method, String uri, Map<String, String> params, HttpEntity entity,
+    private <T> void execute(Request request,
             BiFunction<XContentParser, XContentType, T> parser, Consumer<? super T> listener) {
         // Preserve the thread context so headers survive after the call
         java.util.function.Supplier<ThreadContext.StoredContext> contextSupplier = threadPool.getThreadContext().newRestorableContext(true);
@@ -171,7 +160,7 @@ public class RemoteScrollableHitSource extends ScrollableHitSource {
 
         @Override
         protected void doRun() throws Exception {
-            client.performRequestAsync(method, uri, params, entity, new ResponseListener() {
+            client.performRequestAsync(request, new ResponseListener() {
                 @Override
                 public void onSuccess(org.elasticsearch.client.Response response) {
                     // Restore the thread context to get the precious headers
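The `execute` method preserves the thread context across the async hop so request headers captured at submission time are visible again in the callback. A hedged sketch of that capture/restore pattern, assuming a `ThreadContext` is in scope:

```java
import java.util.function.Supplier;
import org.elasticsearch.common.util.concurrent.ThreadContext;

// Hedged sketch of the pattern used by execute(...) above; not the actual
// implementation, just the shape of it.
public class ContextExample {
    static Runnable preserving(ThreadContext threadContext, Runnable callback) {
        // Capture the current context now, on the submitting thread.
        Supplier<ThreadContext.StoredContext> restorer = threadContext.newRestorableContext(true);
        return () -> {
            // Restore it later, on whatever thread runs the callback.
            try (ThreadContext.StoredContext ignored = restorer.get()) {
                callback.run();
            }
        };
    }
}
```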
@@ -23,7 +23,9 @@ import org.apache.http.HttpEntity;
 import org.apache.http.entity.ContentType;
 import org.elasticsearch.Version;
 import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.client.Request;
 import org.elasticsearch.common.bytes.BytesArray;
+import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.Streams;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.search.builder.SearchSourceBuilder;
@@ -35,14 +37,12 @@ import java.nio.charset.StandardCharsets;
 import java.util.Map;
 
 import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.clearScrollEntity;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchEntity;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchParams;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearchPath;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollEntity;
-import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scrollParams;
+import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.clearScroll;
+import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.initialSearch;
+import static org.elasticsearch.index.reindex.remote.RemoteRequestBuilders.scroll;
 import static org.hamcrest.Matchers.containsString;
 import static org.hamcrest.Matchers.either;
+import static org.hamcrest.Matchers.empty;
 import static org.hamcrest.Matchers.endsWith;
 import static org.hamcrest.Matchers.hasEntry;
 import static org.hamcrest.Matchers.hasKey;
@@ -57,15 +57,17 @@ import static org.hamcrest.Matchers.not;
  */
 public class RemoteRequestBuildersTests extends ESTestCase {
     public void testIntialSearchPath() {
+        Version remoteVersion = Version.fromId(between(0, Version.CURRENT.id));
+        BytesReference query = new BytesArray("{}");
         SearchRequest searchRequest = new SearchRequest().source(new SearchSourceBuilder());
 
-        assertEquals("/_search", initialSearchPath(searchRequest));
+        assertEquals("/_search", initialSearch(searchRequest, query, remoteVersion).getEndpoint());
         searchRequest.indices("a");
         searchRequest.types("b");
-        assertEquals("/a/b/_search", initialSearchPath(searchRequest));
+        assertEquals("/a/b/_search", initialSearch(searchRequest, query, remoteVersion).getEndpoint());
         searchRequest.indices("a", "b");
         searchRequest.types("c", "d");
-        assertEquals("/a,b/c,d/_search", initialSearchPath(searchRequest));
+        assertEquals("/a,b/c,d/_search", initialSearch(searchRequest, query, remoteVersion).getEndpoint());
 
         searchRequest.indices("cat,");
         expectBadStartRequest(searchRequest, "Index", ",", "cat,");
@@ -96,63 +98,70 @@ public class RemoteRequestBuildersTests extends ESTestCase {
     }
 
     private void expectBadStartRequest(SearchRequest searchRequest, String type, String bad, String failed) {
-        IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> initialSearchPath(searchRequest));
+        Version remoteVersion = Version.fromId(between(0, Version.CURRENT.id));
+        BytesReference query = new BytesArray("{}");
+        IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> initialSearch(searchRequest, query, remoteVersion));
         assertEquals(type + " containing [" + bad + "] not supported but got [" + failed + "]", e.getMessage());
     }
 
     public void testInitialSearchParamsSort() {
+        BytesReference query = new BytesArray("{}");
         SearchRequest searchRequest = new SearchRequest().source(new SearchSourceBuilder());
 
         // Test sort:_doc for versions that support it.
         Version remoteVersion = Version.fromId(between(2010099, Version.CURRENT.id));
         searchRequest.source().sort("_doc");
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("sort", "_doc:asc"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(), hasEntry("sort", "_doc:asc"));
 
         // Test search_type scan for versions that don't support sort:_doc.
         remoteVersion = Version.fromId(between(0, 2010099 - 1));
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("search_type", "scan"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(), hasEntry("search_type", "scan"));
 
         // Test sorting by some field. Version doesn't matter.
         remoteVersion = Version.fromId(between(0, Version.CURRENT.id));
         searchRequest.source().sorts().clear();
         searchRequest.source().sort("foo");
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("sort", "foo:asc"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(), hasEntry("sort", "foo:asc"));
     }
 
     public void testInitialSearchParamsFields() {
+        BytesReference query = new BytesArray("{}");
         SearchRequest searchRequest = new SearchRequest().source(new SearchSourceBuilder());
 
         // Test request without any fields
         Version remoteVersion = Version.fromId(between(2000099, Version.CURRENT.id));
-        assertThat(initialSearchParams(searchRequest, remoteVersion),
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(),
            not(either(hasKey("stored_fields")).or(hasKey("fields"))));
 
         // Test stored_fields for versions that support it
         searchRequest = new SearchRequest().source(new SearchSourceBuilder());
         searchRequest.source().storedField("_source").storedField("_id");
         remoteVersion = Version.fromId(between(Version.V_5_0_0_alpha4_ID, Version.CURRENT.id));
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("stored_fields", "_source,_id"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(), hasEntry("stored_fields", "_source,_id"));
 
         // Test fields for versions that support it
         searchRequest = new SearchRequest().source(new SearchSourceBuilder());
         searchRequest.source().storedField("_source").storedField("_id");
         remoteVersion = Version.fromId(between(2000099, Version.V_5_0_0_alpha4_ID - 1));
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("fields", "_source,_id"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(), hasEntry("fields", "_source,_id"));
 
         // Test extra fields for versions that need it
         searchRequest = new SearchRequest().source(new SearchSourceBuilder());
         searchRequest.source().storedField("_source").storedField("_id");
         remoteVersion = Version.fromId(between(0, 2000099 - 1));
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("fields", "_source,_id,_parent,_routing,_ttl"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(),
+            hasEntry("fields", "_source,_id,_parent,_routing,_ttl"));
 
         // But only versions before 1.0 force _source to be in the list
         searchRequest = new SearchRequest().source(new SearchSourceBuilder());
         searchRequest.source().storedField("_id");
         remoteVersion = Version.fromId(between(1000099, 2000099 - 1));
-        assertThat(initialSearchParams(searchRequest, remoteVersion), hasEntry("fields", "_id,_parent,_routing,_ttl"));
+        assertThat(initialSearch(searchRequest, query, remoteVersion).getParameters(),
+            hasEntry("fields", "_id,_parent,_routing,_ttl"));
     }
 
     public void testInitialSearchParamsMisc() {
+        BytesReference query = new BytesArray("{}");
         SearchRequest searchRequest = new SearchRequest().source(new SearchSourceBuilder());
         Version remoteVersion = Version.fromId(between(0, Version.CURRENT.id));
 
@@ -169,7 +178,7 @@ public class RemoteRequestBuildersTests extends ESTestCase {
             searchRequest.source().version(fetchVersion);
         }
 
-        Map<String, String> params = initialSearchParams(searchRequest, remoteVersion);
+        Map<String, String> params = initialSearch(searchRequest, query, remoteVersion).getParameters();
 
         if (scroll == null) {
             assertThat(params, not(hasKey("scroll")));
@@ -199,7 +208,7 @@ public class RemoteRequestBuildersTests extends ESTestCase {
         SearchRequest searchRequest = new SearchRequest();
         searchRequest.source(new SearchSourceBuilder());
         String query = "{\"match_all\":{}}";
-        HttpEntity entity = initialSearchEntity(searchRequest, new BytesArray(query), remoteVersion);
+        HttpEntity entity = initialSearch(searchRequest, new BytesArray(query), remoteVersion).getEntity();
         assertEquals(ContentType.APPLICATION_JSON.toString(), entity.getContentType().getValue());
         if (remoteVersion.onOrAfter(Version.fromId(1000099))) {
             assertEquals("{\"query\":" + query + ",\"_source\":true}",
@@ -211,48 +220,51 @@ public class RemoteRequestBuildersTests extends ESTestCase {
 
         // Source filtering is included if set up
         searchRequest.source().fetchSource(new String[] {"in1", "in2"}, new String[] {"out"});
-        entity = initialSearchEntity(searchRequest, new BytesArray(query), remoteVersion);
+        entity = initialSearch(searchRequest, new BytesArray(query), remoteVersion).getEntity();
         assertEquals(ContentType.APPLICATION_JSON.toString(), entity.getContentType().getValue());
         assertEquals("{\"query\":" + query + ",\"_source\":{\"includes\":[\"in1\",\"in2\"],\"excludes\":[\"out\"]}}",
             Streams.copyToString(new InputStreamReader(entity.getContent(), StandardCharsets.UTF_8)));
 
         // Invalid XContent fails
         RuntimeException e = expectThrows(RuntimeException.class,
-            () -> initialSearchEntity(searchRequest, new BytesArray("{}, \"trailing\": {}"), remoteVersion));
+            () -> initialSearch(searchRequest, new BytesArray("{}, \"trailing\": {}"), remoteVersion));
         assertThat(e.getCause().getMessage(), containsString("Unexpected character (',' (code 44))"));
-        e = expectThrows(RuntimeException.class, () -> initialSearchEntity(searchRequest, new BytesArray("{"), remoteVersion));
+        e = expectThrows(RuntimeException.class, () -> initialSearch(searchRequest, new BytesArray("{"), remoteVersion));
         assertThat(e.getCause().getMessage(), containsString("Unexpected end-of-input"));
     }
 
     public void testScrollParams() {
+        String scroll = randomAlphaOfLength(30);
         Version remoteVersion = Version.fromId(between(0, Version.CURRENT.id));
-        TimeValue scroll = TimeValue.parseTimeValue(randomPositiveTimeValue(), "test");
-        assertScroll(remoteVersion, scrollParams(scroll, remoteVersion), scroll);
+        TimeValue keepAlive = TimeValue.parseTimeValue(randomPositiveTimeValue(), "test");
+        assertScroll(remoteVersion, scroll(scroll, keepAlive, remoteVersion).getParameters(), keepAlive);
     }
 
     public void testScrollEntity() throws IOException {
         String scroll = randomAlphaOfLength(30);
-        HttpEntity entity = scrollEntity(scroll, Version.V_5_0_0);
+        HttpEntity entity = scroll(scroll, timeValueMillis(between(1, 1000)), Version.V_5_0_0).getEntity();
         assertEquals(ContentType.APPLICATION_JSON.toString(), entity.getContentType().getValue());
         assertThat(Streams.copyToString(new InputStreamReader(entity.getContent(), StandardCharsets.UTF_8)),
             containsString("\"" + scroll + "\""));
 
         // Test with version < 2.0.0
-        entity = scrollEntity(scroll, Version.fromId(1070499));
+        entity = scroll(scroll, timeValueMillis(between(1, 1000)), Version.fromId(1070499)).getEntity();
         assertEquals(ContentType.TEXT_PLAIN.toString(), entity.getContentType().getValue());
         assertEquals(scroll, Streams.copyToString(new InputStreamReader(entity.getContent(), StandardCharsets.UTF_8)));
     }
 
-    public void testClearScrollEntity() throws IOException {
+    public void testClearScroll() throws IOException {
         String scroll = randomAlphaOfLength(30);
-        HttpEntity entity = clearScrollEntity(scroll, Version.V_5_0_0);
-        assertEquals(ContentType.APPLICATION_JSON.toString(), entity.getContentType().getValue());
-        assertThat(Streams.copyToString(new InputStreamReader(entity.getContent(), StandardCharsets.UTF_8)),
+        Request request = clearScroll(scroll, Version.V_5_0_0);
+        assertEquals(ContentType.APPLICATION_JSON.toString(), request.getEntity().getContentType().getValue());
+        assertThat(Streams.copyToString(new InputStreamReader(request.getEntity().getContent(), StandardCharsets.UTF_8)),
            containsString("\"" + scroll + "\""));
+        assertThat(request.getParameters().keySet(), empty());
 
         // Test with version < 2.0.0
-        entity = clearScrollEntity(scroll, Version.fromId(1070499));
-        assertEquals(ContentType.TEXT_PLAIN.toString(), entity.getContentType().getValue());
-        assertEquals(scroll, Streams.copyToString(new InputStreamReader(entity.getContent(), StandardCharsets.UTF_8)));
+        request = clearScroll(scroll, Version.fromId(1070499));
+        assertEquals(ContentType.TEXT_PLAIN.toString(), request.getEntity().getContentType().getValue());
+        assertEquals(scroll, Streams.copyToString(new InputStreamReader(request.getEntity().getContent(), StandardCharsets.UTF_8)));
+        assertThat(request.getParameters().keySet(), empty());
    }
 }
@@ -70,6 +70,10 @@ final class TermVectorsWriter {
             Terms topLevelTerms = topLevelFields.terms(field);
 
             // if no terms found, take the retrieved term vector fields for stats
+            if (fieldTermVector == null) {
+                fieldTermVector = EMPTY_TERMS;
+            }
+
             if (topLevelTerms == null) {
                 topLevelTerms = EMPTY_TERMS;
             }
@ -18,23 +18,36 @@
|
|||
*/
|
||||
package org.elasticsearch.cluster.metadata;
|
||||
|
||||
import org.elasticsearch.cluster.node.DiscoveryNodes;
|
||||
import org.elasticsearch.common.Booleans;
|
||||
import org.elasticsearch.common.settings.Setting;
|
||||
import org.elasticsearch.common.settings.Setting.Property;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
|
||||
/**
|
||||
* This class acts as a functional wrapper around the {@code index.auto_expand_replicas} setting.
|
||||
* This setting or rather it's value is expanded into a min and max value which requires special handling
|
||||
 * based on the number of datanodes in the cluster. This class handles all the parsing and streamlines the access to these values.
 */
final class AutoExpandReplicas {
public final class AutoExpandReplicas {
// the value we recognize in the "max" position to mean all the nodes
private static final String ALL_NODES_VALUE = "all";
public static final Setting<AutoExpandReplicas> SETTING = new Setting<>(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, "false", (value) -> {

private static final AutoExpandReplicas FALSE_INSTANCE = new AutoExpandReplicas(0, 0, false);

public static final Setting<AutoExpandReplicas> SETTING = new Setting<>(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, "false",
    AutoExpandReplicas::parse, Property.Dynamic, Property.IndexScope);

private static AutoExpandReplicas parse(String value) {
final int min;
final int max;
if (Booleans.isFalse(value)) {
return new AutoExpandReplicas(0, 0, false);
return FALSE_INSTANCE;
}
final int dash = value.indexOf('-');
if (-1 == dash) {
@@ -57,7 +70,7 @@ final class AutoExpandReplicas {
}
}
return new AutoExpandReplicas(min, max, true);
}, Property.Dynamic, Property.IndexScope);
}

private final int minReplicas;
private final int maxReplicas;
@@ -80,6 +93,24 @@ final class AutoExpandReplicas {
return Math.min(maxReplicas, numDataNodes-1);
}

Optional<Integer> getDesiredNumberOfReplicas(int numDataNodes) {
if (enabled) {
final int min = getMinReplicas();
final int max = getMaxReplicas(numDataNodes);
int numberOfReplicas = numDataNodes - 1;
if (numberOfReplicas < min) {
numberOfReplicas = min;
} else if (numberOfReplicas > max) {
numberOfReplicas = max;
}

if (numberOfReplicas >= min && numberOfReplicas <= max) {
return Optional.of(numberOfReplicas);
}
}
return Optional.empty();
}

@Override
public String toString() {
return enabled ? minReplicas + "-" + maxReplicas : "false";
@@ -88,6 +119,31 @@ final class AutoExpandReplicas {
boolean isEnabled() {
return enabled;
}

/**
 * Checks if there are replicas with the auto-expand feature that need to be adapted.
 * Returns a map of updates, which maps the indices to be updated to the desired number of replicas.
 * The map has the desired number of replicas as key and the indices to update as value, as this allows the result
 * of this method to be directly applied to RoutingTable.Builder#updateNumberOfReplicas.
 */
public static Map<Integer, List<String>> getAutoExpandReplicaChanges(MetaData metaData, DiscoveryNodes discoveryNodes) {
// used for translating "all" to a number
final int dataNodeCount = discoveryNodes.getDataNodes().size();

Map<Integer, List<String>> nrReplicasChanged = new HashMap<>();

for (final IndexMetaData indexMetaData : metaData) {
if (indexMetaData.getState() != IndexMetaData.State.CLOSE) {
AutoExpandReplicas autoExpandReplicas = SETTING.get(indexMetaData.getSettings());
autoExpandReplicas.getDesiredNumberOfReplicas(dataNodeCount).ifPresent(numberOfReplicas -> {
if (numberOfReplicas != indexMetaData.getNumberOfReplicas()) {
nrReplicasChanged.computeIfAbsent(numberOfReplicas, ArrayList::new).add(indexMetaData.getIndex().getName());
}
});
}
}
return nrReplicasChanged;
}
}

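The min-max parsing and clamping behaviour introduced above can be summarised in a small standalone sketch. This is illustrative only, not the Elasticsearch class itself: the method name `desiredReplicas` is invented here, and the real parser additionally rejects malformed values with a descriptive `IllegalArgumentException`.

```java
import java.util.Optional;

// Standalone sketch of how an auto-expand value such as "0-1", "2-5", "0-all"
// or "false" is resolved against the current data-node count, mirroring the
// clamping logic of getDesiredNumberOfReplicas above.
public class AutoExpandSketch {

    static Optional<Integer> desiredReplicas(String setting, int dataNodes) {
        if ("false".equals(setting)) {
            return Optional.empty(); // auto-expand disabled
        }
        final int dash = setting.indexOf('-');
        final int min = Integer.parseInt(setting.substring(0, dash));
        final String maxPart = setting.substring(dash + 1);
        // "all" means: as many replicas as there are other data nodes
        final int maxSetting = "all".equals(maxPart) ? Integer.MAX_VALUE : Integer.parseInt(maxPart);
        final int max = Math.min(maxSetting, dataNodes - 1);

        int replicas = dataNodes - 1; // one copy per other data node
        if (replicas < min) {
            replicas = min;
        } else if (replicas > max) {
            replicas = max;
        }
        // min can exceed max when the cluster is too small; then there is no valid value
        return (replicas >= min && replicas <= max) ? Optional.of(replicas) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(desiredReplicas("0-all", 5)); // Optional[4]
        System.out.println(desiredReplicas("0-1", 5));   // Optional[1]
        System.out.println(desiredReplicas("2-5", 2));   // Optional.empty (min 2 > max 1)
        System.out.println(desiredReplicas("false", 5)); // Optional.empty
    }
}
```

This also shows why `getDesiredNumberOfReplicas` returns `Optional.empty()` rather than a clamped value when the bounds cannot be satisfied: callers then leave the index's replica count untouched.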
@@ -25,9 +25,7 @@ import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsClusterStateUpdateRequest;
import org.elasticsearch.action.admin.indices.upgrade.post.UpgradeSettingsClusterStateUpdateRequest;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;
import org.elasticsearch.cluster.block.ClusterBlock;
import org.elasticsearch.cluster.block.ClusterBlocks;
@@ -42,16 +40,12 @@ import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.Index;
import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.threadpool.ThreadPool;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
@@ -61,7 +55,7 @@ import static org.elasticsearch.action.support.ContextPreservingActionListener.w
/**
 * Service responsible for submitting update index settings requests
 */
public class MetaDataUpdateSettingsService extends AbstractComponent implements ClusterStateListener {
public class MetaDataUpdateSettingsService extends AbstractComponent {

private final ClusterService clusterService;

@@ -77,87 +71,11 @@ public class MetaDataUpdateSettingsService extends AbstractComponent implements
super(settings);
this.clusterService = clusterService;
this.threadPool = threadPool;
this.clusterService.addListener(this);
this.allocationService = allocationService;
this.indexScopedSettings = indexScopedSettings;
this.indicesService = indicesService;
}

@Override
public void clusterChanged(ClusterChangedEvent event) {
// update an index with number of replicas based on data nodes if possible
if (!event.state().nodes().isLocalNodeElectedMaster()) {
return;
}
// we will want to know this for translating "all" to a number
final int dataNodeCount = event.state().nodes().getDataNodes().size();

Map<Integer, List<Index>> nrReplicasChanged = new HashMap<>();
// we need to do this each time in case it was changed by update settings
for (final IndexMetaData indexMetaData : event.state().metaData()) {
AutoExpandReplicas autoExpandReplicas = IndexMetaData.INDEX_AUTO_EXPAND_REPLICAS_SETTING.get(indexMetaData.getSettings());
if (autoExpandReplicas.isEnabled()) {
/*
 * we have to expand the number of replicas for this index to at least min and at most max nodes here
 * so we are bumping it up if we have to or reduce it depending on min/max and the number of datanodes.
 * If we change the number of replicas we just let the shard allocator do it's thing once we updated it
 * since it goes through the index metadata to figure out if something needs to be done anyway. Do do that
 * we issue a cluster settings update command below and kicks off a reroute.
 */
final int min = autoExpandReplicas.getMinReplicas();
final int max = autoExpandReplicas.getMaxReplicas(dataNodeCount);
int numberOfReplicas = dataNodeCount - 1;
if (numberOfReplicas < min) {
numberOfReplicas = min;
} else if (numberOfReplicas > max) {
numberOfReplicas = max;
}
// same value, nothing to do there
if (numberOfReplicas == indexMetaData.getNumberOfReplicas()) {
continue;
}

if (numberOfReplicas >= min && numberOfReplicas <= max) {

if (!nrReplicasChanged.containsKey(numberOfReplicas)) {
nrReplicasChanged.put(numberOfReplicas, new ArrayList<>());
}

nrReplicasChanged.get(numberOfReplicas).add(indexMetaData.getIndex());
}
}
}

if (nrReplicasChanged.size() > 0) {
// update settings and kick of a reroute (implicit) for them to take effect
for (final Integer fNumberOfReplicas : nrReplicasChanged.keySet()) {
Settings settings = Settings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, fNumberOfReplicas).build();
final List<Index> indices = nrReplicasChanged.get(fNumberOfReplicas);

UpdateSettingsClusterStateUpdateRequest updateRequest = new UpdateSettingsClusterStateUpdateRequest()
.indices(indices.toArray(new Index[indices.size()])).settings(settings)
.ackTimeout(TimeValue.timeValueMillis(0)) //no need to wait for ack here
.masterNodeTimeout(TimeValue.timeValueMinutes(10));

updateSettings(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {
@Override
public void onResponse(ClusterStateUpdateResponse response) {
for (Index index : indices) {
logger.info("{} auto expanded replicas to [{}]", index, fNumberOfReplicas);
}
}

@Override
public void onFailure(Exception t) {
for (Index index : indices) {
logger.warn("{} fail to auto expand replicas to [{}]", index, fNumberOfReplicas);
}
}
});
}
}
}

public void updateSettings(final UpdateSettingsClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {
final Settings normalizedSettings = Settings.builder().put(request.settings()).normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();
Settings.Builder settingsForClosedIndices = Settings.builder();

@@ -25,6 +25,7 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.RestoreInProgress;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
import org.elasticsearch.cluster.health.ClusterStateHealth;
import org.elasticsearch.cluster.metadata.AutoExpandReplicas;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.routing.RoutingNode;
@@ -46,6 +47,7 @@ import java.util.Collections;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

@@ -206,11 +208,12 @@ public class AllocationService extends AbstractComponent {
 * unassigned an shards that are associated with nodes that are no longer part of the cluster, potentially promoting replicas
 * if needed.
 */
public ClusterState deassociateDeadNodes(final ClusterState clusterState, boolean reroute, String reason) {
RoutingNodes routingNodes = getMutableRoutingNodes(clusterState);
public ClusterState deassociateDeadNodes(ClusterState clusterState, boolean reroute, String reason) {
ClusterState fixedClusterState = adaptAutoExpandReplicas(clusterState);
RoutingNodes routingNodes = getMutableRoutingNodes(fixedClusterState);
// shuffle the unassigned nodes, just so we won't have things like poison failed shards
routingNodes.unassigned().shuffle();
RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState,
RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, fixedClusterState,
clusterInfoService.getClusterInfo(), currentNanoTime());

// first, clear from the shards any node id they used to belong to that is now dead
@@ -220,12 +223,40 @@ public class AllocationService extends AbstractComponent {
reroute(allocation);
}

if (allocation.routingNodesChanged() == false) {
if (fixedClusterState == clusterState && allocation.routingNodesChanged() == false) {
return clusterState;
}
return buildResultAndLogHealthChange(clusterState, allocation, reason);
}

/**
 * Checks if there are replicas with the auto-expand feature that need to be adapted.
 * Returns an updated cluster state if changes were necessary, or the identical cluster state if no changes were required.
 */
private ClusterState adaptAutoExpandReplicas(ClusterState clusterState) {
final Map<Integer, List<String>> autoExpandReplicaChanges =
AutoExpandReplicas.getAutoExpandReplicaChanges(clusterState.metaData(), clusterState.nodes());
if (autoExpandReplicaChanges.isEmpty()) {
return clusterState;
} else {
final RoutingTable.Builder routingTableBuilder = RoutingTable.builder(clusterState.routingTable());
final MetaData.Builder metaDataBuilder = MetaData.builder(clusterState.metaData());
for (Map.Entry<Integer, List<String>> entry : autoExpandReplicaChanges.entrySet()) {
final int numberOfReplicas = entry.getKey();
final String[] indices = entry.getValue().toArray(new String[entry.getValue().size()]);
// we do *not* update the in sync allocation ids as they will be removed upon the first index
// operation which makes these copies stale
routingTableBuilder.updateNumberOfReplicas(numberOfReplicas, indices);
metaDataBuilder.updateNumberOfReplicas(numberOfReplicas, indices);
logger.info("updating number_of_replicas to [{}] for indices {}", numberOfReplicas, indices);
}
final ClusterState fixedState = ClusterState.builder(clusterState).routingTable(routingTableBuilder.build())
.metaData(metaDataBuilder).build();
assert AutoExpandReplicas.getAutoExpandReplicaChanges(fixedState.metaData(), fixedState.nodes()).isEmpty();
return fixedState;
}
}

/**
 * Removes delay markers from unassigned shards based on current time stamp.
 */
@@ -301,6 +332,7 @@ public class AllocationService extends AbstractComponent {
if (retryFailed) {
resetFailedAllocationCounter(allocation);
}

reroute(allocation);
return new CommandsResult(explanations, buildResultAndLogHealthChange(clusterState, allocation, "reroute commands"));
}
@@ -320,15 +352,17 @@ public class AllocationService extends AbstractComponent {
 * <p>
 * If the same instance of ClusterState is returned, then no change has been made.
 */
protected ClusterState reroute(final ClusterState clusterState, String reason, boolean debug) {
RoutingNodes routingNodes = getMutableRoutingNodes(clusterState);
protected ClusterState reroute(ClusterState clusterState, String reason, boolean debug) {
ClusterState fixedClusterState = adaptAutoExpandReplicas(clusterState);

RoutingNodes routingNodes = getMutableRoutingNodes(fixedClusterState);
// shuffle the unassigned nodes, just so we won't have things like poison failed shards
routingNodes.unassigned().shuffle();
RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, clusterState,
RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, fixedClusterState,
clusterInfoService.getClusterInfo(), currentNanoTime());
allocation.debugDecision(debug);
reroute(allocation);
if (allocation.routingNodesChanged() == false) {
if (fixedClusterState == clusterState && allocation.routingNodesChanged() == false) {
return clusterState;
}
return buildResultAndLogHealthChange(clusterState, allocation, reason);
@@ -353,6 +387,8 @@ public class AllocationService extends AbstractComponent {

private void reroute(RoutingAllocation allocation) {
assert hasDeadNodes(allocation) == false : "dead nodes should be explicitly cleaned up. See deassociateDeadNodes";
assert AutoExpandReplicas.getAutoExpandReplicaChanges(allocation.metaData(), allocation.nodes()).isEmpty() :
"auto-expand replicas out of sync with number of nodes in the cluster";

// now allocate all the unassigned to available nodes
if (allocation.routingNodes().unassigned().size() > 0) {

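One detail worth noting in both `reroute` and `deassociateDeadNodes` above is the copy-on-write contract: `adaptAutoExpandReplicas` returns the same `ClusterState` instance when nothing needs fixing, so `fixedClusterState == clusterState` is a cheap reference-equality check for "no change". A rough standalone sketch of this pattern, with simplified stand-in types rather than the Elasticsearch classes:

```java
import java.util.List;
import java.util.stream.Collectors;

// Copy-on-write fix-up: return the identical instance when no change is
// needed, so callers can use == to decide whether to publish a new state.
public class CopyOnWriteSketch {

    static final class State {
        final List<String> indices;
        State(List<String> indices) { this.indices = indices; }
    }

    static State fixUp(State in) {
        if (in.indices.stream().noneMatch(String::isEmpty)) {
            return in; // nothing to adapt: same reference signals "no change"
        }
        return new State(in.indices.stream().filter(s -> !s.isEmpty()).collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        State s = new State(List.of("idx-a", "idx-b"));
        State fixed = fixUp(s);
        System.out.println(fixed == s); // true -> caller can return the old state as-is
    }
}
```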
@@ -171,11 +171,17 @@ public class GeoHashUtils {
 * Encode to a morton long value from a given geohash string
 */
public static final long mortonEncode(final String hash) {
if (hash.isEmpty()) {
throw new IllegalArgumentException("empty geohash");
}
int level = 11;
long b;
long l = 0L;
for(char c : hash.toCharArray()) {
b = (long)(BASE_32_STRING.indexOf(c));
if (b < 0) {
throw new IllegalArgumentException("unsupported symbol [" + c + "] in geohash [" + hash + "]");
}
l |= (b<<((level--*5) + MORTON_OFFSET));
if (level < 0) {
// We cannot handle more than 12 levels

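The stricter behaviour is just up-front validation of each base-32 symbol plus an explicit empty-string check. Here is a standalone sketch of the validation alone (the morton interleaving itself is omitted), assuming the standard geohash base-32 alphabet:

```java
// Mirrors the checks added to mortonEncode above: reject the empty string and
// any character outside the geohash base-32 alphabet (no 'a', 'i', 'l', 'o').
public class GeohashValidationSketch {
    private static final String BASE_32 = "0123456789bcdefghjkmnpqrstuvwxyz";

    static void validate(String hash) {
        if (hash.isEmpty()) {
            throw new IllegalArgumentException("empty geohash");
        }
        for (char c : hash.toCharArray()) {
            if (BASE_32.indexOf(c) < 0) {
                throw new IllegalArgumentException("unsupported symbol [" + c + "] in geohash [" + hash + "]");
            }
        }
    }

    public static void main(String[] args) {
        validate("u173zwh"); // fine: all symbols are valid base-32
        validate("55.5");    // throws: '.' is not a geohash symbol
    }
}
```

Without the check, `indexOf` returns -1 for such symbols and the negative value would flow straight into the bit-interleaving above, silently producing arbitrary coordinates instead of an error.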
@@ -28,7 +28,6 @@ import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.Strings;

import java.io.IOException;
import java.util.Arrays;
@@ -126,7 +125,12 @@ public final class GeoPoint implements ToXContentFragment {
}

public GeoPoint resetFromGeoHash(String geohash) {
final long hash = mortonEncode(geohash);
final long hash;
try {
hash = mortonEncode(geohash);
} catch (IllegalArgumentException ex) {
throw new ElasticsearchParseException(ex.getMessage(), ex);
}
return this.reset(GeoHashUtils.decodeLatitude(hash), GeoHashUtils.decodeLongitude(hash));
}

@@ -58,9 +58,7 @@ import static org.elasticsearch.gateway.GatewayService.STATE_NOT_RECOVERED_BLOCK
public class NodeJoinController extends AbstractComponent {

private final MasterService masterService;
private final AllocationService allocationService;
private final ElectMasterService electMaster;
private final JoinTaskExecutor joinTaskExecutor = new JoinTaskExecutor();
private final JoinTaskExecutor joinTaskExecutor;

// this is set while trying to become a master
// mutation should be done under lock
@@ -71,8 +69,7 @@ public class NodeJoinController extends AbstractComponent {
Settings settings) {
super(settings);
this.masterService = masterService;
this.allocationService = allocationService;
this.electMaster = electMaster;
joinTaskExecutor = new JoinTaskExecutor(allocationService, electMaster, logger);
}

/**
@@ -404,7 +401,20 @@ public class NodeJoinController extends AbstractComponent {
}
};

class JoinTaskExecutor implements ClusterStateTaskExecutor<DiscoveryNode> {
// visible for testing
public static class JoinTaskExecutor implements ClusterStateTaskExecutor<DiscoveryNode> {

private final AllocationService allocationService;

private final ElectMasterService electMasterService;

private final Logger logger;

public JoinTaskExecutor(AllocationService allocationService, ElectMasterService electMasterService, Logger logger) {
this.allocationService = allocationService;
this.electMasterService = electMasterService;
this.logger = logger;
}

@Override
public ClusterTasksResult<DiscoveryNode> execute(ClusterState currentState, List<DiscoveryNode> joiningNodes) throws Exception {
@@ -512,7 +522,7 @@ public class NodeJoinController extends AbstractComponent {

@Override
public void clusterStatePublished(ClusterChangedEvent event) {
NodeJoinController.this.electMaster.logMinimumMasterNodesWarningIfNecessary(event.previousState(), event.state());
electMasterService.logMinimumMasterNodesWarningIfNecessary(event.previousState(), event.state());
}
}
}

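The executor change above follows a standard make-it-testable refactoring: an inner class that implicitly reached into its enclosing `NodeJoinController` becomes a static class whose collaborators arrive through the constructor. A minimal sketch of the pattern (types are simplified stand-ins):

```java
import java.util.List;

public class InjectionSketch {

    interface Allocator {
        String allocate(List<String> joiningNodes);
    }

    // visible for testing: all collaborators are constructor-injected,
    // so a test can instantiate the executor without the enclosing service
    static final class JoinExecutor {
        private final Allocator allocator;

        JoinExecutor(Allocator allocator) {
            this.allocator = allocator;
        }

        String execute(List<String> joiningNodes) {
            return allocator.allocate(joiningNodes);
        }
    }

    public static void main(String[] args) {
        // a unit test can now pass a stub allocator directly
        JoinExecutor executor = new JoinExecutor(nodes -> "allocated " + nodes.size() + " nodes");
        System.out.println(executor.execute(List.of("node-1", "node-2")));
    }
}
```

The same treatment is applied to `NodeRemovalClusterStateTaskExecutor` in the next file, which is what lets the test harness further down construct both executors directly.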
@@ -558,19 +558,19 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover
}

// visible for testing
static class NodeRemovalClusterStateTaskExecutor implements ClusterStateTaskExecutor<NodeRemovalClusterStateTaskExecutor.Task>, ClusterStateTaskListener {
public static class NodeRemovalClusterStateTaskExecutor implements ClusterStateTaskExecutor<NodeRemovalClusterStateTaskExecutor.Task>, ClusterStateTaskListener {

private final AllocationService allocationService;
private final ElectMasterService electMasterService;
private final Consumer<String> rejoin;
private final Logger logger;

static class Task {
public static class Task {

private final DiscoveryNode node;
private final String reason;

Task(final DiscoveryNode node, final String reason) {
public Task(final DiscoveryNode node, final String reason) {
this.node = node;
this.reason = reason;
}
@@ -589,7 +589,7 @@ public class ZenDiscovery extends AbstractLifecycleComponent implements Discover
}
}

NodeRemovalClusterStateTaskExecutor(
public NodeRemovalClusterStateTaskExecutor(
final AllocationService allocationService,
final ElectMasterService electMasterService,
final Consumer<String> rejoin,

@@ -299,14 +299,7 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
if (token == XContentParser.Token.START_ARRAY) {
// its an array of array of lon/lat [ [1.2, 1.3], [1.4, 1.5] ]
while (token != XContentParser.Token.END_ARRAY) {
try {
parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));
} catch (ElasticsearchParseException e) {
if (ignoreMalformed.value() == false) {
throw e;
}
context.addIgnoredField(fieldType.name());
}
parseGeoPointIgnoringMalformed(context, sparse);
token = context.parser().nextToken();
}
} else {
@@ -326,27 +319,33 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
} else {
while (token != XContentParser.Token.END_ARRAY) {
if (token == XContentParser.Token.VALUE_STRING) {
parse(context, sparse.resetFromString(context.parser().text(), ignoreZValue.value()));
parseGeoPointStringIgnoringMalformed(context, sparse);
} else {
try {
parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));
} catch (ElasticsearchParseException e) {
if (ignoreMalformed.value() == false) {
throw e;
}
}
parseGeoPointIgnoringMalformed(context, sparse);
}
token = context.parser().nextToken();
}
}
}
} else if (token == XContentParser.Token.VALUE_STRING) {
parse(context, sparse.resetFromString(context.parser().text(), ignoreZValue.value()));
parseGeoPointStringIgnoringMalformed(context, sparse);
} else if (token == XContentParser.Token.VALUE_NULL) {
if (fieldType.nullValue() != null) {
parse(context, (GeoPoint) fieldType.nullValue());
}
} else {
parseGeoPointIgnoringMalformed(context, sparse);
}
}

context.path().remove();
return null;
}

/**
 * Parses geopoint represented as an object or an array, ignores malformed geopoints if needed
 */
private void parseGeoPointIgnoringMalformed(ParseContext context, GeoPoint sparse) throws IOException {
try {
parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse));
} catch (ElasticsearchParseException e) {
@@ -356,10 +355,19 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
context.addIgnoredField(fieldType.name());
}
}
}

context.path().remove();
return null;
/**
 * Parses geopoint represented as a string and ignores malformed geopoints if needed
 */
private void parseGeoPointStringIgnoringMalformed(ParseContext context, GeoPoint sparse) throws IOException {
try {
parse(context, sparse.resetFromString(context.parser().text(), ignoreZValue.value()));
} catch (ElasticsearchParseException e) {
if (ignoreMalformed.value() == false) {
throw e;
}
context.addIgnoredField(fieldType.name());
}
}

@Override

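The two helpers extracted above share one shape: attempt a parse, and on `ElasticsearchParseException` either rethrow (strict) or record the field as ignored (lenient). A generic sketch of that shape, using stand-in types rather than the mapper machinery:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the ignore_malformed pattern: parse, and on failure either
// propagate (strict) or note the field as ignored and continue (lenient).
public class IgnoreMalformedSketch {

    static final List<String> ignoredFields = new ArrayList<>();

    static void parseIgnoringMalformed(String field, String raw, boolean ignoreMalformed,
                                       Consumer<Double> onValue) {
        try {
            onValue.accept(Double.parseDouble(raw)); // stand-in for the real geo parse
        } catch (NumberFormatException e) {
            if (ignoreMalformed == false) {
                throw e; // strict mode: the document fails to index
            }
            ignoredFields.add(field); // lenient mode: record and move on
        }
    }

    public static void main(String[] args) {
        parseIgnoringMalformed("location", "1.5", true, v -> System.out.println("parsed " + v));
        parseIgnoringMalformed("location", "oops", true, v -> {});
        System.out.println(ignoredFields); // [location]
    }
}
```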
@@ -283,6 +283,7 @@ public class SplitIndexIT extends ESIntegTestCase {
assertEquals(numDocs, ids.size());
}

@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/30432")
public void testSplitIndexPrimaryTerm() throws Exception {
final List<Integer> factors = Arrays.asList(1, 2, 4, 8);
final List<Integer> numberOfShardsFactors = randomSubsetOf(scaledRandomIntBetween(1, factors.size()), factors);

@@ -19,6 +19,7 @@

package org.elasticsearch.common.cache;

import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.test.ESTestCase;
import org.junit.Before;

@@ -343,6 +344,38 @@ public class CacheTests extends ESTestCase {
assertEquals(numberOfEntries, cache.stats().getEvictions());
}

@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/30428")
public void testComputeIfAbsentDeadlock() throws BrokenBarrierException, InterruptedException {
final int numberOfThreads = randomIntBetween(2, 32);
final Cache<Integer, String> cache =
CacheBuilder.<Integer, String>builder().setExpireAfterAccess(TimeValue.timeValueNanos(1)).build();

final CyclicBarrier barrier = new CyclicBarrier(1 + numberOfThreads);
for (int i = 0; i < numberOfThreads; i++) {
final Thread thread = new Thread(() -> {
try {
barrier.await();
for (int j = 0; j < numberOfEntries; j++) {
try {
cache.computeIfAbsent(0, k -> Integer.toString(k));
} catch (final ExecutionException e) {
throw new AssertionError(e);
}
}
barrier.await();
} catch (final BrokenBarrierException | InterruptedException e) {
throw new AssertionError(e);
}
});
thread.start();
}

// wait for all threads to be ready
barrier.await();
// wait for all threads to finish
barrier.await();
}

// randomly promote some entries, step the clock forward, then check that the promoted entries remain and the
// non-promoted entries were removed
public void testPromotion() {

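The barrier arithmetic in that test is worth spelling out: the `CyclicBarrier` is sized for all workers plus the coordinating test thread, the first `await` releases everyone at once, and a second `await` on the same (automatically reset) barrier doubles as a completion latch. A self-contained sketch of just that coordination:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierPatternSketch {
    public static void main(String[] args) throws Exception {
        final int workers = 4;
        // +1 slot for the coordinating (main) thread
        final CyclicBarrier barrier = new CyclicBarrier(1 + workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    barrier.await(); // wait until every thread is ready
                    System.out.println("worker " + id + " running");
                    barrier.await(); // barrier has reset; this signals completion
                } catch (InterruptedException | BrokenBarrierException e) {
                    throw new AssertionError(e);
                }
            }).start();
        }
        barrier.await(); // release all workers simultaneously
        barrier.await(); // block until every worker has finished
        System.out.println("all workers done");
    }
}
```

Maximising the simultaneous start is exactly what a deadlock-reproduction test wants, since the bug only manifests when many threads hit `computeIfAbsent` for the same key concurrently.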
@@ -19,6 +19,7 @@
package org.elasticsearch.common.geo;

import org.apache.lucene.geo.Rectangle;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.test.ESTestCase;

/**
@@ -95,7 +96,17 @@ public class GeoHashTests extends ESTestCase {
Rectangle expectedBbox = GeoHashUtils.bbox(geohash);
Rectangle actualBbox = GeoHashUtils.bbox(extendedGeohash);
assertEquals("Additional data points above 12 should be ignored [" + extendedGeohash + "]" , expectedBbox, actualBbox);
}
}

public void testInvalidGeohashes() {
IllegalArgumentException ex;

ex = expectThrows(IllegalArgumentException.class, () -> GeoHashUtils.mortonEncode("55.5"));
assertEquals("unsupported symbol [.] in geohash [55.5]", ex.getMessage());

ex = expectThrows(IllegalArgumentException.class, () -> GeoHashUtils.mortonEncode(""));
assertEquals("empty geohash", ex.getMessage());
}

}
}
}

@@ -49,6 +49,7 @@ import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.notNullValue;
import static org.hamcrest.Matchers.nullValue;

public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {

@@ -398,4 +399,50 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {
assertThat(defaultValue, not(equalTo(doc.rootDoc().getField("location").binaryValue())));
}

public void testInvalidGeohashIgnored() throws Exception {
String mapping = Strings.toString(XContentFactory.jsonBuilder().startObject().startObject("type")
.startObject("properties")
.startObject("location")
.field("type", "geo_point")
.field("ignore_malformed", "true")
.endObject()
.endObject().endObject().endObject());

DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser()
.parse("type", new CompressedXContent(mapping));

ParsedDocument doc = defaultMapper.parse(SourceToParse.source("test", "type", "1", BytesReference
.bytes(XContentFactory.jsonBuilder()
.startObject()
.field("location", "1234.333")
.endObject()),
XContentType.JSON));

assertThat(doc.rootDoc().getField("location"), nullValue());
}

public void testInvalidGeohashNotIgnored() throws Exception {
String mapping = Strings.toString(XContentFactory.jsonBuilder().startObject().startObject("type")
.startObject("properties")
.startObject("location")
.field("type", "geo_point")
.endObject()
.endObject().endObject().endObject());

DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser()
.parse("type", new CompressedXContent(mapping));

MapperParsingException ex = expectThrows(MapperParsingException.class,
() -> defaultMapper.parse(SourceToParse.source("test", "type", "1", BytesReference
.bytes(XContentFactory.jsonBuilder()
.startObject()
.field("location", "1234.333")
.endObject()),
XContentType.JSON)));

assertThat(ex.getMessage(), equalTo("failed to parse"));
assertThat(ex.getRootCause().getMessage(), equalTo("unsupported symbol [.] in geohash [1234.333]"));
}

}

@@ -175,6 +175,19 @@ public class GeoPointParsingTests extends ESTestCase {
assertThat(e.getMessage(), is("field must be either [lat], [lon] or [geohash]"));
}

public void testInvalidGeoHash() throws IOException {
XContentBuilder content = JsonXContent.contentBuilder();
content.startObject();
content.field("geohash", "!!!!");
content.endObject();

XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(content));
parser.nextToken();

Exception e = expectThrows(ElasticsearchParseException.class, () -> GeoUtils.parseGeoPoint(parser));
assertThat(e.getMessage(), is("unsupported symbol [!] in geohash [!!!!]"));
}

private XContentParser objectLatLon(double lat, double lon) throws IOException {
XContentBuilder content = JsonXContent.contentBuilder();
content.startObject();

@@ -72,6 +72,9 @@ import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.IndexScopedSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.discovery.zen.ElectMasterService;
import org.elasticsearch.discovery.zen.NodeJoinController;
import org.elasticsearch.discovery.zen.ZenDiscovery;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.TestEnvironment;
import org.elasticsearch.index.IndexService;
@@ -117,6 +120,9 @@ public class ClusterStateChanges extends AbstractComponent {
private final TransportClusterRerouteAction transportClusterRerouteAction;
private final TransportCreateIndexAction transportCreateIndexAction;

private final ZenDiscovery.NodeRemovalClusterStateTaskExecutor nodeRemovalExecutor;
private final NodeJoinController.JoinTaskExecutor joinTaskExecutor;

public ClusterStateChanges(NamedXContentRegistry xContentRegistry, ThreadPool threadPool) {
super(Settings.builder().put(PATH_HOME_SETTING.getKey(), "dummy").build());

@@ -191,6 +197,11 @@ public class ClusterStateChanges extends AbstractComponent {
transportService, clusterService, threadPool, allocationService, actionFilters, indexNameExpressionResolver);
transportCreateIndexAction = new TransportCreateIndexAction(settings,
transportService, clusterService, threadPool, createIndexService, actionFilters, indexNameExpressionResolver);

ElectMasterService electMasterService = new ElectMasterService(settings);
nodeRemovalExecutor = new ZenDiscovery.NodeRemovalClusterStateTaskExecutor(allocationService, electMasterService,
s -> { throw new AssertionError("rejoin not implemented"); }, logger);
joinTaskExecutor = new NodeJoinController.JoinTaskExecutor(allocationService, electMasterService, logger);
}

public ClusterState createIndex(ClusterState state, CreateIndexRequest request) {
@@ -217,8 +228,13 @@ public class ClusterStateChanges extends AbstractComponent {
return execute(transportClusterRerouteAction, request, state);
}

public ClusterState deassociateDeadNodes(ClusterState clusterState, boolean reroute, String reason) {
return allocationService.deassociateDeadNodes(clusterState, reroute, reason);
public ClusterState addNodes(ClusterState clusterState, List<DiscoveryNode> nodes) {
return runTasks(joinTaskExecutor, clusterState, nodes);
}

public ClusterState removeNodes(ClusterState clusterState, List<DiscoveryNode> nodes) {
return runTasks(nodeRemovalExecutor, clusterState, nodes.stream()
.map(n -> new ZenDiscovery.NodeRemovalClusterStateTaskExecutor.Task(n, "dummy reason")).collect(Collectors.toList()));
}

public ClusterState applyFailedShards(ClusterState clusterState, List<FailedShard> failedShards) {

@@ -70,6 +70,7 @@ import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS;
import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;
import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;
import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;
@@ -258,8 +259,14 @@ public class IndicesClusterStateServiceRandomUpdatesTests extends AbstractIndice
}
String name = "index_" + randomAlphaOfLength(15).toLowerCase(Locale.ROOT);
Settings.Builder settingsBuilder = Settings.builder()
.put(SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 3))
.put(SETTING_NUMBER_OF_REPLICAS, randomInt(2));
.put(SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 3));
if (randomBoolean()) {
int min = randomInt(2);
int max = min + randomInt(3);
settingsBuilder.put(SETTING_AUTO_EXPAND_REPLICAS, randomBoolean() ? min + "-" + max : min + "-all");
} else {
settingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, randomInt(2));
}
CreateIndexRequest request = new CreateIndexRequest(name, settingsBuilder.build()).waitForActiveShards(ActiveShardCount.NONE);
state = cluster.createIndex(state, request);
assertTrue(state.metaData().hasIndex(name));
@@ -345,9 +352,7 @@ public class IndicesClusterStateServiceRandomUpdatesTests extends AbstractIndice
if (randomBoolean()) {
// add node
if (state.nodes().getSize() < 10) {
DiscoveryNodes newNodes = DiscoveryNodes.builder(state.nodes()).add(createNode()).build();
state = ClusterState.builder(state).nodes(newNodes).build();
state = cluster.reroute(state, new ClusterRerouteRequest()); // always reroute after node leave
state = cluster.addNodes(state, Collections.singletonList(createNode()));
updateNodes(state, clusterStateServiceMap, indicesServiceSupplier);
}
} else {
@@ -355,16 +360,12 @@ public class IndicesClusterStateServiceRandomUpdatesTests extends AbstractIndice
if (state.nodes().getDataNodes().size() > 3) {
DiscoveryNode discoveryNode = randomFrom(state.nodes().getNodes().values().toArray(DiscoveryNode.class));
if (discoveryNode.equals(state.nodes().getMasterNode()) == false) {
DiscoveryNodes newNodes = DiscoveryNodes.builder(state.nodes()).remove(discoveryNode.getId()).build();
state = ClusterState.builder(state).nodes(newNodes).build();
state = cluster.deassociateDeadNodes(state, true, "removed and added a node");
state = cluster.removeNodes(state, Collections.singletonList(discoveryNode));
updateNodes(state, clusterStateServiceMap, indicesServiceSupplier);
}
if (randomBoolean()) {
// and add it back
DiscoveryNodes newNodes = DiscoveryNodes.builder(state.nodes()).add(discoveryNode).build();
state = ClusterState.builder(state).nodes(newNodes).build();
state = cluster.reroute(state, new ClusterRerouteRequest());
state = cluster.addNodes(state, Collections.singletonList(discoveryNode));
updateNodes(state, clusterStateServiceMap, indicesServiceSupplier);
}
}

@@ -91,6 +91,36 @@ public class MoreLikeThisIT extends ESIntegTestCase {
assertHitCount(response, 1L);
}

//Issue #30148
public void testMoreLikeThisForZeroTokensInOneOfTheAnalyzedFields() throws Exception {
CreateIndexRequestBuilder createIndexRequestBuilder = prepareCreate("test")
.addMapping("type", jsonBuilder()
.startObject().startObject("type")
.startObject("properties")
.startObject("myField").field("type", "text").endObject()
.startObject("empty").field("type", "text").endObject()
.endObject()
.endObject().endObject());

assertAcked(createIndexRequestBuilder);

ensureGreen();

client().index(indexRequest("test").type("type").id("1").source(jsonBuilder().startObject()
.field("myField", "and_foo").field("empty", "").endObject())).actionGet();
client().index(indexRequest("test").type("type").id("2").source(jsonBuilder().startObject()
.field("myField", "and_foo").field("empty", "").endObject())).actionGet();

client().admin().indices().refresh(refreshRequest()).actionGet();

SearchResponse searchResponse = client().prepareSearch().setQuery(
moreLikeThisQuery(new String[]{"myField", "empty"}, null, new Item[]{new Item("test", "type", "1")})
.minTermFreq(1).minDocFreq(1)
).get();

assertHitCount(searchResponse, 1L);
}

public void testSimpleMoreLikeOnLongField() throws Exception {
logger.info("Creating index test");
assertAcked(prepareCreate("test")

@@ -197,7 +197,7 @@ import org.elasticsearch.xpack.security.rest.action.user.RestGetUsersAction;
import org.elasticsearch.xpack.security.rest.action.user.RestHasPrivilegesAction;
import org.elasticsearch.xpack.security.rest.action.user.RestPutUserAction;
import org.elasticsearch.xpack.security.rest.action.user.RestSetEnabledAction;
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor;
import org.elasticsearch.xpack.security.transport.filter.IPFilter;
import org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport;
@@ -233,7 +233,7 @@ import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_FORMAT_SETT
import static org.elasticsearch.xpack.core.XPackSettings.HTTP_SSL_ENABLED;
import static org.elasticsearch.xpack.core.security.SecurityLifecycleServiceField.SECURITY_TEMPLATE_NAME;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_INDEX_NAME;
import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.INTERNAL_INDEX_FORMAT;
import static org.elasticsearch.xpack.security.support.SecurityIndexManager.INTERNAL_INDEX_FORMAT;

public class Security extends Plugin implements ActionPlugin, IngestPlugin, NetworkPlugin, ClusterPlugin,
DiscoveryPlugin, MapperPlugin, ExtensiblePlugin {
@@ -424,8 +424,8 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
components.add(realms);
components.add(reservedRealm);

securityLifecycleService.addSecurityIndexHealthChangeListener(nativeRoleMappingStore::onSecurityIndexHealthChange);
securityLifecycleService.addSecurityIndexOutOfDateListener(nativeRoleMappingStore::onSecurityIndexOutOfDateChange);
securityLifecycleService.securityIndex().addIndexHealthChangeListener(nativeRoleMappingStore::onSecurityIndexHealthChange);
securityLifecycleService.securityIndex().addIndexOutOfDateListener(nativeRoleMappingStore::onSecurityIndexOutOfDateChange);

AuthenticationFailureHandler failureHandler = null;
String extensionName = null;
@@ -458,8 +458,8 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
}
final CompositeRolesStore allRolesStore = new CompositeRolesStore(settings, fileRolesStore, nativeRolesStore,
reservedRolesStore, rolesProviders, threadPool.getThreadContext(), getLicenseState());
securityLifecycleService.addSecurityIndexHealthChangeListener(allRolesStore::onSecurityIndexHealthChange);
securityLifecycleService.addSecurityIndexOutOfDateListener(allRolesStore::onSecurityIndexOutOfDateChange);
securityLifecycleService.securityIndex().addIndexHealthChangeListener(allRolesStore::onSecurityIndexHealthChange);
securityLifecycleService.securityIndex().addIndexOutOfDateListener(allRolesStore::onSecurityIndexOutOfDateChange);
// to keep things simple, just invalidate all cached entries on license change. this happens so rarely that the impact should be
// minimal
getLicenseState().addListener(allRolesStore::invalidateAll);
@@ -886,7 +886,7 @@ public class Security extends Plugin implements ActionPlugin, IngestPlugin, Netw
templates.remove(SECURITY_TEMPLATE_NAME);
final XContent xContent = XContentFactory.xContent(XContentType.JSON);
final byte[] auditTemplate = TemplateUtils.loadTemplate("/" + IndexAuditTrail.INDEX_TEMPLATE_NAME + ".json",
Version.CURRENT.toString(), IndexLifecycleManager.TEMPLATE_VERSION_PATTERN).getBytes(StandardCharsets.UTF_8);
Version.CURRENT.toString(), SecurityIndexManager.TEMPLATE_VERSION_PATTERN).getBytes(StandardCharsets.UTF_8);

try (XContentParser parser = xContent
.createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, auditTemplate)) {

@@ -22,7 +22,7 @@ import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.gateway.GatewayService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xpack.security.audit.index.IndexAuditTrail;
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
import org.elasticsearch.xpack.security.support.SecurityIndexManager;

import java.util.Arrays;
import java.util.Collections;
@@ -46,7 +46,7 @@ import java.util.function.Predicate;
 */
public class SecurityLifecycleService extends AbstractComponent implements ClusterStateListener {

public static final String INTERNAL_SECURITY_INDEX = IndexLifecycleManager.INTERNAL_SECURITY_INDEX;
public static final String INTERNAL_SECURITY_INDEX = SecurityIndexManager.INTERNAL_SECURITY_INDEX;
public static final String SECURITY_INDEX_NAME = ".security";

private static final Version MIN_READ_VERSION = Version.V_5_0_0;
@@ -55,7 +55,7 @@ public class SecurityLifecycleService extends AbstractComponent implements Clust
private final ThreadPool threadPool;
private final IndexAuditTrail indexAuditTrail;

private final IndexLifecycleManager securityIndex;
private final SecurityIndexManager securityIndex;

public SecurityLifecycleService(Settings settings, ClusterService clusterService,
ThreadPool threadPool, Client client,
@@ -64,7 +64,7 @@ public class SecurityLifecycleService extends AbstractComponent implements Clust
this.settings = settings;
this.threadPool = threadPool;
this.indexAuditTrail = indexAuditTrail;
this.securityIndex = new IndexLifecycleManager(settings, client, SECURITY_INDEX_NAME);
this.securityIndex = new SecurityIndexManager(settings, client, SECURITY_INDEX_NAME);
clusterService.addListener(this);
clusterService.addLifecycleListener(new LifecycleListener() {
@Override
@@ -110,69 +110,10 @@ public class SecurityLifecycleService extends AbstractComponent implements Clust
}
}

IndexLifecycleManager securityIndex() {
public SecurityIndexManager securityIndex() {
return securityIndex;
}

/**
 * Returns {@code true} if the security index exists
 */
public boolean isSecurityIndexExisting() {
return securityIndex.indexExists();
}

/**
 * Returns <code>true</code> if the security index does not exist or it exists and has the current
 * value for the <code>index.format</code> index setting
 */
public boolean isSecurityIndexUpToDate() {
return securityIndex.isIndexUpToDate();
}

/**
 * Returns <code>true</code> if the security index exists and all primary shards are active
 */
public boolean isSecurityIndexAvailable() {
return securityIndex.isAvailable();
}

/**
 * Returns <code>true</code> if the security index does not exist or the mappings are up to date
 * based on the version in the <code>_meta</code> field
 */
public boolean isSecurityIndexMappingUpToDate() {
return securityIndex().isMappingUpToDate();
}

/**
 * Test whether the effective (active) version of the security mapping meets the
 * <code>requiredVersion</code>.
 *
 * @return <code>true</code> if the effective version passes the predicate, or the security
 * mapping does not exist (<code>null</code> version). Otherwise, <code>false</code>.
 */
public boolean checkSecurityMappingVersion(Predicate<Version> requiredVersion) {
return securityIndex.checkMappingVersion(requiredVersion);
}

/**
 * Adds a listener which will be notified when the security index health changes. The previous and
 * current health will be provided to the listener so that the listener can determine if any action
 * needs to be taken.
 */
public void addSecurityIndexHealthChangeListener(BiConsumer<ClusterIndexHealth, ClusterIndexHealth> listener) {
securityIndex.addIndexHealthChangeListener(listener);
}

/**
 * Adds a listener which will be notified when the security index out of date value changes. The previous and
 * current value will be provided to the listener so that the listener can determine if any action
 * needs to be taken.
 */
void addSecurityIndexOutOfDateListener(BiConsumer<Boolean, Boolean> listener) {
securityIndex.addIndexOutOfDateListener(listener);
}

// this is called in a lifecycle listener beforeStop on the cluster service
private void close() {
if (indexAuditTrail != null) {
@@ -193,29 +134,13 @@ public class SecurityLifecycleService extends AbstractComponent implements Clust
}

private static boolean checkMappingVersions(ClusterState clusterState, Logger logger, Predicate<Version> versionPredicate) {
return IndexLifecycleManager.checkIndexMappingVersionMatches(SECURITY_INDEX_NAME, clusterState, logger, versionPredicate);
return SecurityIndexManager.checkIndexMappingVersionMatches(SECURITY_INDEX_NAME, clusterState, logger, versionPredicate);
}

public static List<String> indexNames() {
return Collections.unmodifiableList(Arrays.asList(SECURITY_INDEX_NAME, INTERNAL_SECURITY_INDEX));
}

/**
 * Prepares the security index by creating it if it doesn't exist or updating the mappings if the mappings are
 * out of date. After any tasks have been executed, the runnable is then executed.
 */
public void prepareIndexIfNeededThenExecute(final Consumer<Exception> consumer, final Runnable andThen) {
securityIndex.prepareIndexIfNeededThenExecute(consumer, andThen);
}

/**
 * Checks if the security index is out of date with the current version. If the index does not exist
 * we treat the index as up to date as we expect it to be created with the current format.
 */
public boolean isSecurityIndexOutOfDate() {
return securityIndex.isIndexUpToDate() == false;
}

/**
 * Is the move from {@code previousHealth} to {@code currentHealth} a move from an unhealthy ("RED") index state to a healthy
 * ("non-RED") state.

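Mechanically, the bulk of this rename-plus-cleanup is a call-site migration: instead of `SecurityLifecycleService` forwarding every query to its index manager through one wrapper method each, callers fetch the manager via `securityIndex()` and call it directly. A simplified sketch of the before/after (names are stand-ins, not the actual x-pack classes):

```java
public class DelegationMigrationSketch {

    static final class IndexManager {
        boolean indexExists() { return true; }
        boolean isAvailable() { return true; }
    }

    static final class LifecycleService {
        private final IndexManager securityIndex = new IndexManager();

        // after: expose the manager once...
        IndexManager securityIndex() { return securityIndex; }

        // ...instead of one forwarding method per query (the "before" shape):
        // boolean isSecurityIndexExisting() { return securityIndex.indexExists(); }
        // boolean isSecurityIndexAvailable() { return securityIndex.isAvailable(); }
    }

    public static void main(String[] args) {
        LifecycleService service = new LifecycleService();
        // old call site: service.isSecurityIndexExisting()
        // new call site:
        System.out.println(service.securityIndex().indexExists());
    }
}
```

The renames in the remaining files below (`IndexAuditTrail`, `InternalRealms`, `TokenService`, `NativeUsersStore`) are exactly this substitution applied mechanically.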
@@ -57,7 +57,7 @@ import org.elasticsearch.xpack.core.template.TemplateUtils;
import org.elasticsearch.xpack.security.audit.AuditLevel;
import org.elasticsearch.xpack.security.audit.AuditTrail;
import org.elasticsearch.xpack.security.rest.RemoteHostHeader;
import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.elasticsearch.xpack.security.transport.filter.SecurityIpFilterRule;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
@@ -105,7 +105,7 @@ import static org.elasticsearch.xpack.security.audit.AuditLevel.parse;
import static org.elasticsearch.xpack.security.audit.AuditUtil.indices;
import static org.elasticsearch.xpack.security.audit.AuditUtil.restRequestContent;
import static org.elasticsearch.xpack.security.audit.index.IndexNameResolver.resolve;
import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.SECURITY_VERSION_STRING;
import static org.elasticsearch.xpack.security.support.SecurityIndexManager.SECURITY_VERSION_STRING;

/**
 * Audit trail implementation that writes events into an index.
@@ -1001,7 +1001,7 @@ public class IndexAuditTrail extends AbstractComponent implements AuditTrail {

private PutIndexTemplateRequest getPutIndexTemplateRequest(Settings customSettings) {
final byte[] template = TemplateUtils.loadTemplate("/" + INDEX_TEMPLATE_NAME + ".json",
Version.CURRENT.toString(), IndexLifecycleManager.TEMPLATE_VERSION_PATTERN).getBytes(StandardCharsets.UTF_8);
Version.CURRENT.toString(), SecurityIndexManager.TEMPLATE_VERSION_PATTERN).getBytes(StandardCharsets.UTF_8);
final PutIndexTemplateRequest request = new PutIndexTemplateRequest(INDEX_TEMPLATE_NAME).source(template, XContentType.JSON);
if (customSettings != null && customSettings.names().size() > 0) {
Settings updatedSettings = Settings.builder()

@@ -96,7 +96,7 @@ public final class InternalRealms {
map.put(FileRealmSettings.TYPE, config -> new FileRealm(config, resourceWatcherService));
map.put(NativeRealmSettings.TYPE, config -> {
final NativeRealm nativeRealm = new NativeRealm(config, nativeUsersStore);
securityLifecycleService.addSecurityIndexHealthChangeListener(nativeRealm::onSecurityIndexHealthChange);
securityLifecycleService.securityIndex().addIndexHealthChangeListener(nativeRealm::onSecurityIndexHealthChange);
return nativeRealm;
});
map.put(LdapRealmSettings.AD_TYPE, config -> new LdapRealm(LdapRealmSettings.AD_TYPE, config, sslService,

@@ -250,7 +250,7 @@ public final class TokenService extends AbstractComponent {
.setSource(builder)
.setRefreshPolicy(RefreshPolicy.WAIT_UNTIL)
.request();
lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client, SECURITY_ORIGIN, IndexAction.INSTANCE, request,
ActionListener.wrap(indexResponse -> listener.onResponse(new Tuple<>(userToken, refreshToken)),
listener::onFailure))
@@ -354,7 +354,7 @@ public final class TokenService extends AbstractComponent {
if (version.onOrAfter(Version.V_6_2_0)) {
// we only have the id and need to get the token from the doc!
decryptTokenId(in, cipher, version, ActionListener.wrap(tokenId ->
lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
final GetRequest getRequest =
client.prepareGet(SecurityLifecycleService.SECURITY_INDEX_NAME, TYPE,
getTokenDocumentId(tokenId)).request();
@@ -524,7 +524,7 @@ public final class TokenService extends AbstractComponent {
.request();
final String tokenDocId = getTokenDocumentId(userToken);
final Version version = userToken.getVersion();
lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, indexRequest,
ActionListener.<IndexResponse>wrap(indexResponse -> {
ActionListener<Boolean> wrappedListener =
@@ -566,7 +566,7 @@ public final class TokenService extends AbstractComponent {
.setVersion(documentVersion)
.setRefreshPolicy(RefreshPolicy.WAIT_UNTIL)
.request();
lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, request,
ActionListener.<UpdateResponse>wrap(updateResponse -> {
if (updateResponse.getGetResult() != null
@@ -665,7 +665,7 @@ public final class TokenService extends AbstractComponent {
.setVersion(true)
.request();

lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN, request,
ActionListener.<SearchResponse>wrap(searchResponse -> {
if (searchResponse.isTimedOut()) {
@@ -847,7 +847,7 @@ public final class TokenService extends AbstractComponent {
.request();

final Supplier<ThreadContext.StoredContext> supplier = client.threadPool().getThreadContext().newRestorableContext(false);
lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
ScrollHelper.fetchAllByEntity(client, request, new ContextPreservingActionListener<>(supplier, listener), this::parseHit));
}

@@ -914,11 +914,11 @@ public final class TokenService extends AbstractComponent {
 * have been explicitly cleared.
 */
private void checkIfTokenIsRevoked(UserToken userToken, ActionListener<UserToken> listener) {
if (lifecycleService.isSecurityIndexExisting() == false) {
if (lifecycleService.securityIndex().indexExists() == false) {
// index doesn't exist so the token is considered valid.
listener.onResponse(userToken);
} else {
lifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
lifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
MultiGetRequest mGetRequest = client.prepareMultiGet()
.add(SecurityLifecycleService.SECURITY_INDEX_NAME, TYPE, getInvalidatedTokenDocumentId(userToken))
.add(SecurityLifecycleService.SECURITY_INDEX_NAME, TYPE, getTokenDocumentId(userToken))
@@ -989,7 +989,7 @@ public final class TokenService extends AbstractComponent {
}

private void maybeStartTokenRemover() {
if (lifecycleService.isSecurityIndexAvailable()) {
if (lifecycleService.securityIndex().isAvailable()) {
if (client.threadPool().relativeTimeInMillis() - lastExpirationRunMs > deleteInterval.getMillis()) {
expiredTokenRemover.submit(client.threadPool());
lastExpirationRunMs = client.threadPool().relativeTimeInMillis();

@@ -114,7 +114,7 @@ public class NativeUsersStore extends AbstractComponent {
}
};

- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
// TODO remove this short circuiting and fix tests that fail without this!
listener.onResponse(Collections.emptyList());
} else if (userNames.length == 1) { // optimization for single user lookup

@@ -123,7 +123,7 @@ public class NativeUsersStore extends AbstractComponent {
(uap) -> listener.onResponse(uap == null ? Collections.emptyList() : Collections.singletonList(uap.user())),
handleException));
} else {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
final QueryBuilder query;
if (userNames == null || userNames.length == 0) {
query = QueryBuilders.termQuery(Fields.TYPE.getPreferredName(), USER_DOC_TYPE);

@@ -154,11 +154,11 @@ public class NativeUsersStore extends AbstractComponent {
* Async method to retrieve a user and their password
*/
private void getUserAndPassword(final String user, final ActionListener<UserAndPassword> listener) {
- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
// TODO remove this short circuiting and fix tests that fail without this!
listener.onResponse(null);
} else {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareGet(SECURITY_INDEX_NAME,
INDEX_TYPE, getIdForUser(USER_DOC_TYPE, user)).request(),

@@ -199,7 +199,7 @@ public class NativeUsersStore extends AbstractComponent {
docType = USER_DOC_TYPE;
}

- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareUpdate(SECURITY_INDEX_NAME, INDEX_TYPE, getIdForUser(docType, username))
.setDoc(Requests.INDEX_CONTENT_TYPE, Fields.PASSWORD.getPreferredName(),

@@ -237,7 +237,7 @@ public class NativeUsersStore extends AbstractComponent {
* has been indexed
*/
private void createReservedUser(String username, char[] passwordHash, RefreshPolicy refresh, ActionListener<Void> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareIndex(SECURITY_INDEX_NAME, INDEX_TYPE,
getIdForUser(RESERVED_USER_TYPE, username))

@@ -279,7 +279,7 @@ public class NativeUsersStore extends AbstractComponent {
private void updateUserWithoutPassword(final PutUserRequest putUserRequest, final ActionListener<Boolean> listener) {
assert putUserRequest.passwordHash() == null;
// We must have an existing document
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareUpdate(SECURITY_INDEX_NAME, INDEX_TYPE,
getIdForUser(USER_DOC_TYPE, putUserRequest.username()))

@@ -322,7 +322,7 @@ public class NativeUsersStore extends AbstractComponent {

private void indexUser(final PutUserRequest putUserRequest, final ActionListener<Boolean> listener) {
assert putUserRequest.passwordHash() != null;
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareIndex(SECURITY_INDEX_NAME, INDEX_TYPE,
getIdForUser(USER_DOC_TYPE, putUserRequest.username()))

@@ -366,7 +366,7 @@ public class NativeUsersStore extends AbstractComponent {

private void setRegularUserEnabled(final String username, final boolean enabled, final RefreshPolicy refreshPolicy,
final ActionListener<Void> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareUpdate(SECURITY_INDEX_NAME, INDEX_TYPE,
getIdForUser(USER_DOC_TYPE, username))

@@ -401,7 +401,7 @@ public class NativeUsersStore extends AbstractComponent {

private void setReservedUserEnabled(final String username, final boolean enabled, final RefreshPolicy refreshPolicy,
boolean clearCache, final ActionListener<Void> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareUpdate(SECURITY_INDEX_NAME, INDEX_TYPE,
getIdForUser(RESERVED_USER_TYPE, username))

@@ -431,7 +431,7 @@ public class NativeUsersStore extends AbstractComponent {
}

public void deleteUser(final DeleteUserRequest deleteUserRequest, final ActionListener<Boolean> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
DeleteRequest request = client.prepareDelete(SECURITY_INDEX_NAME,
INDEX_TYPE, getIdForUser(USER_DOC_TYPE, deleteUserRequest.username())).request();
request.setRefreshPolicy(deleteUserRequest.getRefreshPolicy());

@@ -470,11 +470,11 @@ public class NativeUsersStore extends AbstractComponent {
}

void getReservedUserInfo(String username, ActionListener<ReservedUserInfo> listener) {
- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
// TODO remove this short circuiting and fix tests that fail without this!
listener.onResponse(null);
} else {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareGet(SECURITY_INDEX_NAME, INDEX_TYPE,
getIdForUser(RESERVED_USER_TYPE, username)).request(),

@@ -514,7 +514,7 @@ public class NativeUsersStore extends AbstractComponent {
}

void getAllReservedUserInfo(ActionListener<Map<String, ReservedUserInfo>> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareSearch(SECURITY_INDEX_NAME)
.setQuery(QueryBuilders.termQuery(Fields.TYPE.getPreferredName(), RESERVED_USER_TYPE))
@@ -191,7 +191,7 @@ public class ReservedRealm extends CachingUsernamePasswordRealm {
if (userIsDefinedForCurrentSecurityMapping(username) == false) {
logger.debug("Marking user [{}] as disabled because the security mapping is not at the required version", username);
listener.onResponse(DISABLED_DEFAULT_USER_INFO.deepClone());
- } else if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ } else if (securityLifecycleService.securityIndex().indexExists() == false) {
listener.onResponse(getDefaultUserInfo(username));
} else {
nativeUsersStore.getReservedUserInfo(username, ActionListener.wrap((userInfo) -> {

@@ -218,7 +218,7 @@ public class ReservedRealm extends CachingUsernamePasswordRealm {

private boolean userIsDefinedForCurrentSecurityMapping(String username) {
final Version requiredVersion = getDefinedVersion(username);
- return securityLifecycleService.checkSecurityMappingVersion(requiredVersion::onOrBefore);
+ return securityLifecycleService.securityIndex().checkMappingVersion(requiredVersion::onOrBefore);
}

private Version getDefinedVersion(String username) {
@@ -120,7 +120,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
* <em>package private</em> for unit testing
*/
void loadMappings(ActionListener<List<ExpressionRoleMapping>> listener) {
- if (securityLifecycleService.isSecurityIndexOutOfDate()) {
+ if (securityLifecycleService.securityIndex().isIndexUpToDate() == false) {
listener.onFailure(new IllegalStateException(
"Security index is not on the current version - the native realm will not be operational until " +
"the upgrade API is run on the security index"));

@@ -176,7 +176,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol

private <Request, Result> void modifyMapping(String name, CheckedBiConsumer<Request, ActionListener<Result>, Exception> inner,
Request request, ActionListener<Result> listener) {
- if (securityLifecycleService.isSecurityIndexOutOfDate()) {
+ if (securityLifecycleService.securityIndex().isIndexUpToDate() == false) {
listener.onFailure(new IllegalStateException(
"Security index is not on the current version - the native realm will not be operational until " +
"the upgrade API is run on the security index"));

@@ -192,7 +192,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol

private void innerPutMapping(PutRoleMappingRequest request, ActionListener<Boolean> listener) {
final ExpressionRoleMapping mapping = request.getMapping();
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
final XContentBuilder xContentBuilder;
try {
xContentBuilder = mapping.toXContent(jsonBuilder(), ToXContent.EMPTY_PARAMS, true);

@@ -222,7 +222,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
}

private void innerDeleteMapping(DeleteRoleMappingRequest request, ActionListener<Boolean> listener) throws IOException {
- if (securityLifecycleService.isSecurityIndexOutOfDate()) {
+ if (securityLifecycleService.securityIndex().isIndexUpToDate() == false) {
listener.onFailure(new IllegalStateException(
"Security index is not on the current version - the native realm will not be operational until " +
"the upgrade API is run on the security index"));

@@ -276,16 +276,16 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
}

private void getMappings(ActionListener<List<ExpressionRoleMapping>> listener) {
- if (securityLifecycleService.isSecurityIndexAvailable()) {
+ if (securityLifecycleService.securityIndex().isAvailable()) {
loadMappings(listener);
} else {
logger.info("The security index is not yet available - no role mappings can be loaded");
if (logger.isDebugEnabled()) {
logger.debug("Security Index [{}] [exists: {}] [available: {}] [mapping up to date: {}]",
SECURITY_INDEX_NAME,
- securityLifecycleService.isSecurityIndexExisting(),
- securityLifecycleService.isSecurityIndexAvailable(),
- securityLifecycleService.isSecurityIndexMappingUpToDate()
+ securityLifecycleService.securityIndex().indexExists(),
+ securityLifecycleService.securityIndex().isAvailable(),
+ securityLifecycleService.securityIndex().isMappingUpToDate()
);
}
listener.onResponse(Collections.emptyList());

@@ -302,7 +302,7 @@ public class NativeRoleMappingStore extends AbstractComponent implements UserRol
* </ul>
*/
public void usageStats(ActionListener<Map<String, Object>> listener) {
- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
reportStats(listener, Collections.emptyList());
} else {
getMappings(ActionListener.wrap(mappings -> reportStats(listener, mappings), listener::onFailure));
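One detail worth calling out in the `NativeRoleMappingStore` hunks above: the rename also flips the polarity of the staleness check. The old negative-sense `isSecurityIndexOutOfDate()` is replaced by negating the positive property, `securityIndex().isIndexUpToDate() == false`, so the failure branch that blocks the native realm until the upgrade API has run fires for exactly the same states as before.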
@@ -100,7 +100,7 @@ public class NativeRolesStore extends AbstractComponent {
* Retrieve a list of roles, if rolesToGet is null or empty, fetch all roles
*/
public void getRoleDescriptors(String[] names, final ActionListener<Collection<RoleDescriptor>> listener) {
- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
// TODO remove this short circuiting and fix tests that fail without this!
listener.onResponse(Collections.emptyList());
} else if (names != null && names.length == 1) {

@@ -108,7 +108,7 @@ public class NativeRolesStore extends AbstractComponent {
listener.onResponse(roleDescriptor == null ? Collections.emptyList() : Collections.singletonList(roleDescriptor)),
listener::onFailure));
} else {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
QueryBuilder query;
if (names == null || names.length == 0) {
query = QueryBuilders.termQuery(RoleDescriptor.Fields.TYPE.getPreferredName(), ROLE_TYPE);

@@ -133,7 +133,7 @@ public class NativeRolesStore extends AbstractComponent {
}

public void deleteRole(final DeleteRoleRequest deleteRoleRequest, final ActionListener<Boolean> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
DeleteRequest request = client.prepareDelete(SecurityLifecycleService.SECURITY_INDEX_NAME,
ROLE_DOC_TYPE, getIdForUser(deleteRoleRequest.name())).request();
request.setRefreshPolicy(deleteRoleRequest.getRefreshPolicy());

@@ -166,7 +166,7 @@ public class NativeRolesStore extends AbstractComponent {

// pkg-private for testing
void innerPutRole(final PutRoleRequest request, final RoleDescriptor role, final ActionListener<Boolean> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () -> {
final XContentBuilder xContentBuilder;
try {
xContentBuilder = role.toXContent(jsonBuilder(), ToXContent.EMPTY_PARAMS, true);

@@ -197,13 +197,13 @@ public class NativeRolesStore extends AbstractComponent {

public void usageStats(ActionListener<Map<String, Object>> listener) {
Map<String, Object> usageStats = new HashMap<>();
- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
usageStats.put("size", 0L);
usageStats.put("fls", false);
usageStats.put("dls", false);
listener.onResponse(usageStats);
} else {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareMultiSearch()
.add(client.prepareSearch(SecurityLifecycleService.SECURITY_INDEX_NAME)

@@ -259,11 +259,11 @@ public class NativeRolesStore extends AbstractComponent {
}

private void getRoleDescriptor(final String roleId, ActionListener<RoleDescriptor> roleActionListener) {
- if (securityLifecycleService.isSecurityIndexExisting() == false) {
+ if (securityLifecycleService.securityIndex().indexExists() == false) {
// TODO remove this short circuiting and fix tests that fail without this!
roleActionListener.onResponse(null);
} else {
- securityLifecycleService.prepareIndexIfNeededThenExecute(roleActionListener::onFailure, () ->
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(roleActionListener::onFailure, () ->
executeGetRoleRequest(roleId, new ActionListener<GetResponse>() {
@Override
public void onResponse(GetResponse response) {

@@ -288,7 +288,7 @@ public class NativeRolesStore extends AbstractComponent {
}

private void executeGetRoleRequest(String role, ActionListener<GetResponse> listener) {
- securityLifecycleService.prepareIndexIfNeededThenExecute(listener::onFailure, () ->
+ securityLifecycleService.securityIndex().prepareIndexIfNeededThenExecute(listener::onFailure, () ->
executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,
client.prepareGet(SecurityLifecycleService.SECURITY_INDEX_NAME,
ROLE_DOC_TYPE, getIdForUser(role)).request(),
@@ -58,7 +58,7 @@ import static org.elasticsearch.xpack.core.security.SecurityLifecycleServiceFiel
/**
* Manages the lifecycle of a single index, its template, mapping and and data upgrades/migrations.
*/
- public class IndexLifecycleManager extends AbstractComponent {
+ public class SecurityIndexManager extends AbstractComponent {

public static final String INTERNAL_SECURITY_INDEX = ".security-" + IndexUpgradeCheckVersion.UPRADE_VERSION;
public static final int INTERNAL_INDEX_FORMAT = 6;

@@ -74,7 +74,7 @@ public class IndexLifecycleManager extends AbstractComponent {

private volatile State indexState = new State(false, false, false, false, null);

- public IndexLifecycleManager(Settings settings, Client client, String indexName) {
+ public SecurityIndexManager(Settings settings, Client client, String indexName) {
super(settings);
this.client = client;
this.indexName = indexName;

@@ -347,7 +347,7 @@ public class IndexLifecycleManager extends AbstractComponent {

private Tuple<String, Settings> loadMappingAndSettingsSourceFromTemplate() {
final byte[] template = TemplateUtils.loadTemplate("/" + SECURITY_TEMPLATE_NAME + ".json",
- Version.CURRENT.toString(), IndexLifecycleManager.TEMPLATE_VERSION_PATTERN).getBytes(StandardCharsets.UTF_8);
+ Version.CURRENT.toString(), SecurityIndexManager.TEMPLATE_VERSION_PATTERN).getBytes(StandardCharsets.UTF_8);
PutIndexTemplateRequest request = new PutIndexTemplateRequest(SECURITY_TEMPLATE_NAME).source(template, XContentType.JSON);
return new Tuple<>(request.mappings().get("doc"), request.settings());
}
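For anyone tracking the rename through the call sites above and the tests that follow, the old `SecurityLifecycleService` checks map onto the new `SecurityIndexManager` accessors like so (all pairs taken from this diff):

- `isSecurityIndexExisting()` -> `securityIndex().indexExists()`
- `isSecurityIndexAvailable()` -> `securityIndex().isAvailable()`
- `isSecurityIndexMappingUpToDate()` -> `securityIndex().isMappingUpToDate()`
- `isSecurityIndexUpToDate()` -> `securityIndex().isIndexUpToDate()`
- `isSecurityIndexOutOfDate()` -> `securityIndex().isIndexUpToDate() == false`
- `checkSecurityMappingVersion(...)` -> `securityIndex().checkMappingVersion(...)`
- `prepareIndexIfNeededThenExecute(...)` -> `securityIndex().prepareIndexIfNeededThenExecute(...)`
- `addSecurityIndexHealthChangeListener(...)` -> `securityIndex().addIndexHealthChangeListener(...)`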
@@ -37,7 +37,7 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.MockTransportClient;
import org.elasticsearch.xpack.core.security.SecurityLifecycleServiceField;
import org.elasticsearch.xpack.security.audit.index.IndexAuditTrail;
- import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.elasticsearch.xpack.security.test.SecurityTestUtils;
import org.elasticsearch.xpack.core.template.TemplateUtils;
import org.junit.After;

@@ -105,10 +105,10 @@ public class SecurityLifecycleServiceTests extends ESTestCase {
ClusterState.Builder clusterStateBuilder = createClusterStateWithTemplate(templateString);
final ClusterState clusterState = clusterStateBuilder.build();

- assertTrue(IndexLifecycleManager.checkTemplateExistsAndVersionMatches(
+ assertTrue(SecurityIndexManager.checkTemplateExistsAndVersionMatches(
SecurityLifecycleServiceField.SECURITY_TEMPLATE_NAME, clusterState, logger,
Version.V_5_0_0::before));
- assertFalse(IndexLifecycleManager.checkTemplateExistsAndVersionMatches(
+ assertFalse(SecurityIndexManager.checkTemplateExistsAndVersionMatches(
SecurityLifecycleServiceField.SECURITY_TEMPLATE_NAME, clusterState, logger,
Version.V_5_0_0::after));
}

@@ -126,7 +126,7 @@ public class SecurityLifecycleServiceTests extends ESTestCase {
ClusterState.Builder clusterStateBuilder = createClusterStateWithMappingAndTemplate(templateString);
securityLifecycleService.clusterChanged(new ClusterChangedEvent("test-event",
clusterStateBuilder.build(), EMPTY_CLUSTER_STATE));
- final IndexLifecycleManager securityIndex = securityLifecycleService.securityIndex();
+ final SecurityIndexManager securityIndex = securityLifecycleService.securityIndex();
assertTrue(securityIndex.checkMappingVersion(Version.V_5_0_0::before));
assertFalse(securityIndex.checkMappingVersion(Version.V_5_0_0::after));
}

@@ -172,7 +172,7 @@ public class SecurityLifecycleServiceTests extends ESTestCase {

private static IndexMetaData.Builder createIndexMetadata(String indexName, String templateString) throws IOException {
String template = TemplateUtils.loadTemplate(templateString, Version.CURRENT.toString(),
- IndexLifecycleManager.TEMPLATE_VERSION_PATTERN);
+ SecurityIndexManager.TEMPLATE_VERSION_PATTERN);
PutIndexTemplateRequest request = new PutIndexTemplateRequest();
request.source(template, XContentType.JSON);
IndexMetaData.Builder indexMetaData = IndexMetaData.builder(indexName);

@@ -219,7 +219,7 @@ public class SecurityLifecycleServiceTests extends ESTestCase {
String templateName, String templateString) throws IOException {

String template = TemplateUtils.loadTemplate(templateString, Version.CURRENT.toString(),
- IndexLifecycleManager.TEMPLATE_VERSION_PATTERN);
+ SecurityIndexManager.TEMPLATE_VERSION_PATTERN);
PutIndexTemplateRequest request = new PutIndexTemplateRequest();
request.source(template, XContentType.JSON);
IndexTemplateMetaData.Builder templateBuilder = IndexTemplateMetaData.builder(templateName)
@@ -63,7 +63,7 @@ import java.util.function.Predicate;

import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_FORMAT_SETTING;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_INDEX_NAME;
- import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.INTERNAL_INDEX_FORMAT;
+ import static org.elasticsearch.xpack.security.support.SecurityIndexManager.INTERNAL_INDEX_FORMAT;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasItem;
@@ -67,6 +67,7 @@ import org.elasticsearch.xpack.security.authc.saml.SamlRealm;
import org.elasticsearch.xpack.security.authc.saml.SamlRealmTestHelper;
import org.elasticsearch.xpack.security.authc.saml.SamlRealmTests;
import org.elasticsearch.xpack.security.authc.saml.SamlTestCase;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.After;
import org.junit.Before;
import org.opensaml.saml.saml2.core.NameID;

@@ -161,10 +162,12 @@ public class TransportSamlInvalidateSessionActionTests extends SamlTestCase {
};

final SecurityLifecycleService lifecycleService = mock(SecurityLifecycleService.class);
+ final SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(lifecycleService.securityIndex()).thenReturn(securityIndex);
doAnswer(inv -> {
((Runnable) inv.getArguments()[1]).run();
return null;
- }).when(lifecycleService).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
+ }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));

final ClusterService clusterService = ClusterServiceUtils.createClusterService(threadPool);
tokenService = new TokenService(settings, Clock.systemUTC(), client, lifecycleService, clusterService);
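The same mock wiring recurs in every test class touched below. A self-contained sketch of the pattern, reusing the simplified stand-in interfaces from the earlier sketch; the helper class and method names here are ours, while the Mockito calls are exactly the ones in the diff:

```
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.function.Consumer;

// Illustrative helper: builds a lifecycle-service mock whose SecurityIndexManager
// reports a ready index and runs the continuation passed to
// prepareIndexIfNeededThenExecute immediately, as the updated tests do.
final class SecurityIndexMocks {
    static SecurityLifecycleService readyLifecycleService() {
        SecurityLifecycleService lifecycleService = mock(SecurityLifecycleService.class);
        SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
        when(lifecycleService.securityIndex()).thenReturn(securityIndex);
        doAnswer(inv -> {
            // Pretend the index is ready and run the continuation inline.
            ((Runnable) inv.getArguments()[1]).run();
            return null;
        }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
        return lifecycleService;
    }
}
```

Stubbing the manager rather than the lifecycle service is the whole point of the refactoring from a test's perspective: the index-state surface is one mock instead of five pass-through methods.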
@@ -56,6 +56,7 @@ import org.elasticsearch.xpack.security.authc.saml.SamlRealm;
import org.elasticsearch.xpack.security.authc.saml.SamlRealmTests;
import org.elasticsearch.xpack.security.authc.saml.SamlTestCase;
import org.elasticsearch.xpack.security.authc.support.UserRoleMapper;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.After;
import org.junit.Before;
import org.opensaml.saml.saml2.core.NameID;

@@ -173,10 +174,12 @@ public class TransportSamlLogoutActionTests extends SamlTestCase {
}).when(client).execute(eq(IndexAction.INSTANCE), any(IndexRequest.class), any(ActionListener.class));

final SecurityLifecycleService lifecycleService = mock(SecurityLifecycleService.class);
+ final SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(lifecycleService.securityIndex()).thenReturn(securityIndex);
doAnswer(inv -> {
((Runnable) inv.getArguments()[1]).run();
return null;
- }).when(lifecycleService).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
+ }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));

final ClusterService clusterService = ClusterServiceUtils.createClusterService(threadPool);
tokenService = new TokenService(settings, Clock.systemUTC(), client, lifecycleService, clusterService);
@@ -28,6 +28,7 @@ import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore;
import org.elasticsearch.xpack.security.authc.esnative.ReservedRealm;
import org.elasticsearch.xpack.security.authc.esnative.ReservedRealmTests;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.Before;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

@@ -76,7 +77,9 @@ public class TransportGetUsersActionTests extends ESTestCase {
public void testAnonymousUser() {
NativeUsersStore usersStore = mock(NativeUsersStore.class);
SecurityLifecycleService securityLifecycleService = mock(SecurityLifecycleService.class);
- when(securityLifecycleService.isSecurityIndexAvailable()).thenReturn(true);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(securityLifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);
AnonymousUser anonymousUser = new AnonymousUser(settings);
ReservedRealm reservedRealm =
new ReservedRealm(mock(Environment.class), settings, usersStore, anonymousUser, securityLifecycleService, new ThreadContext(Settings.EMPTY));

@@ -146,8 +149,10 @@ public class TransportGetUsersActionTests extends ESTestCase {
public void testReservedUsersOnly() {
NativeUsersStore usersStore = mock(NativeUsersStore.class);
SecurityLifecycleService securityLifecycleService = mock(SecurityLifecycleService.class);
- when(securityLifecycleService.isSecurityIndexAvailable()).thenReturn(true);
- when(securityLifecycleService.checkSecurityMappingVersion(any())).thenReturn(true);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(securityLifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);
+ when(securityIndex.checkMappingVersion(any())).thenReturn(true);

ReservedRealmTests.mockGetAllReservedUserInfo(usersStore, Collections.emptyMap());
ReservedRealm reservedRealm =

@@ -194,7 +199,9 @@ public class TransportGetUsersActionTests extends ESTestCase {
Arrays.asList(new User("jane"), new User("fred")), randomUsers());
NativeUsersStore usersStore = mock(NativeUsersStore.class);
SecurityLifecycleService securityLifecycleService = mock(SecurityLifecycleService.class);
- when(securityLifecycleService.isSecurityIndexAvailable()).thenReturn(true);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(securityLifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);
ReservedRealmTests.mockGetAllReservedUserInfo(usersStore, Collections.emptyMap());
ReservedRealm reservedRealm = new ReservedRealm(mock(Environment.class), settings, usersStore, new AnonymousUser(settings),
securityLifecycleService, new ThreadContext(Settings.EMPTY));
@@ -29,6 +29,7 @@ import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore;
import org.elasticsearch.xpack.security.authc.esnative.ReservedRealm;
import org.elasticsearch.xpack.security.authc.esnative.ReservedRealmTests;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

@@ -118,7 +119,9 @@ public class TransportPutUserActionTests extends ESTestCase {
public void testReservedUser() {
NativeUsersStore usersStore = mock(NativeUsersStore.class);
SecurityLifecycleService securityLifecycleService = mock(SecurityLifecycleService.class);
- when(securityLifecycleService.isSecurityIndexAvailable()).thenReturn(true);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(securityLifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);
ReservedRealmTests.mockGetAllReservedUserInfo(usersStore, Collections.emptyMap());
Settings settings = Settings.builder().put("path.home", createTempDir()).build();
ReservedRealm reservedRealm = new ReservedRealm(TestEnvironment.newEnvironment(settings), settings, usersStore,
@@ -68,6 +68,7 @@ import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.audit.AuditTrailService;
import org.elasticsearch.xpack.security.authc.AuthenticationService.Authenticator;
import org.elasticsearch.xpack.security.authc.esnative.ReservedRealm;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.After;
import org.junit.Before;

@@ -125,6 +126,7 @@ public class AuthenticationServiceTests extends ESTestCase {
private ThreadContext threadContext;
private TokenService tokenService;
private SecurityLifecycleService lifecycleService;
+ private SecurityIndexManager securityIndex;
private Client client;
private InetSocketAddress remoteAddress;

@@ -181,11 +183,13 @@ public class AuthenticationServiceTests extends ESTestCase {
return builder;
}).when(client).prepareGet(anyString(), anyString(), anyString());
lifecycleService = mock(SecurityLifecycleService.class);
+ securityIndex = mock(SecurityIndexManager.class);
+ when(lifecycleService.securityIndex()).thenReturn(securityIndex);
doAnswer(invocationOnMock -> {
Runnable runnable = (Runnable) invocationOnMock.getArguments()[1];
runnable.run();
return null;
- }).when(lifecycleService).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
+ }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
ClusterService clusterService = ClusterServiceUtils.createClusterService(threadPool);
tokenService = new TokenService(settings, Clock.systemUTC(), client, lifecycleService, clusterService);
service = new AuthenticationService(settings, realms, auditTrail,

@@ -924,8 +928,8 @@ public class AuthenticationServiceTests extends ESTestCase {
}

public void testExpiredToken() throws Exception {
- when(lifecycleService.isSecurityIndexAvailable()).thenReturn(true);
- when(lifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.isAvailable()).thenReturn(true);
+ when(lifecycleService.securityIndex().indexExists()).thenReturn(true);
User user = new User("_username", "r1");
final Authentication expected = new Authentication(user, new RealmRef("realm", "custom", "node"), null);
PlainActionFuture<Tuple<UserToken, String>> tokenFuture = new PlainActionFuture<>();

@@ -963,7 +967,7 @@ public class AuthenticationServiceTests extends ESTestCase {
doAnswer(invocationOnMock -> {
((Runnable) invocationOnMock.getArguments()[1]).run();
return null;
- }).when(lifecycleService).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
+ }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));

try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
threadContext.putHeader("Authorization", "Bearer " + token);
@@ -18,6 +18,7 @@ import org.elasticsearch.xpack.core.ssl.SSLService;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore;
import org.elasticsearch.xpack.security.authc.support.mapper.NativeRoleMappingStore;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;

import java.util.Map;
import java.util.function.BiConsumer;

@@ -30,11 +31,14 @@ import static org.mockito.Matchers.isA;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyZeroInteractions;
+ import static org.mockito.Mockito.when;

public class InternalRealmsTests extends ESTestCase {

public void testNativeRealmRegistersIndexHealthChangeListener() throws Exception {
SecurityLifecycleService lifecycleService = mock(SecurityLifecycleService.class);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(lifecycleService.securityIndex()).thenReturn(securityIndex);
Map<String, Realm.Factory> factories = InternalRealms.getFactories(mock(ThreadPool.class), mock(ResourceWatcherService.class),
mock(SSLService.class), mock(NativeUsersStore.class), mock(NativeRoleMappingStore.class), lifecycleService);
assertThat(factories, hasEntry(is(NativeRealmSettings.TYPE), any(Realm.Factory.class)));

@@ -43,10 +47,10 @@ public class InternalRealmsTests extends ESTestCase {
Settings settings = Settings.builder().put("path.home", createTempDir()).build();
factories.get(NativeRealmSettings.TYPE).create(new RealmConfig("test", Settings.EMPTY, settings,
TestEnvironment.newEnvironment(settings), new ThreadContext(settings)));
- verify(lifecycleService).addSecurityIndexHealthChangeListener(isA(BiConsumer.class));
+ verify(securityIndex).addIndexHealthChangeListener(isA(BiConsumer.class));

factories.get(NativeRealmSettings.TYPE).create(new RealmConfig("test", Settings.EMPTY, settings,
TestEnvironment.newEnvironment(settings), new ThreadContext(settings)));
- verify(lifecycleService, times(2)).addSecurityIndexHealthChangeListener(isA(BiConsumer.class));
+ verify(securityIndex, times(2)).addIndexHealthChangeListener(isA(BiConsumer.class));
}
}
@@ -51,6 +51,7 @@ import org.elasticsearch.xpack.core.security.authc.TokenMetaData;
import org.elasticsearch.xpack.core.security.user.User;
import org.elasticsearch.xpack.core.watcher.watch.ClockMock;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;

@@ -86,6 +87,7 @@ public class TokenServiceTests extends ESTestCase {

private Client client;
private SecurityLifecycleService lifecycleService;
+ private SecurityIndexManager securityIndex;
private ClusterService clusterService;
private Settings tokenServiceEnabledSettings = Settings.builder()
.put(XPackSettings.TOKEN_SERVICE_ENABLED_SETTING.getKey(), true).build();

@@ -131,11 +133,13 @@ public class TokenServiceTests extends ESTestCase {

// setup lifecycle service
lifecycleService = mock(SecurityLifecycleService.class);
+ securityIndex = mock(SecurityIndexManager.class);
+ when(lifecycleService.securityIndex()).thenReturn(securityIndex);
doAnswer(invocationOnMock -> {
Runnable runnable = (Runnable) invocationOnMock.getArguments()[1];
runnable.run();
return null;
- }).when(lifecycleService).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
+ }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
this.clusterService = ClusterServiceUtils.createClusterService(threadPool);
}

@@ -376,7 +380,7 @@ public class TokenServiceTests extends ESTestCase {
}

public void testInvalidatedToken() throws Exception {
- when(lifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);
TokenService tokenService =
new TokenService(tokenServiceEnabledSettings, systemUTC(), client, lifecycleService, clusterService);
Authentication authentication = new Authentication(new User("joe", "admin"), new RealmRef("native_realm", "native", "node1"), null);

@@ -563,8 +567,8 @@ public class TokenServiceTests extends ESTestCase {
UserToken serialized = future.get();
assertEquals(authentication, serialized.getAuthentication());

- when(lifecycleService.isSecurityIndexAvailable()).thenReturn(false);
- when(lifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.isAvailable()).thenReturn(false);
+ when(securityIndex.indexExists()).thenReturn(true);
future = new PlainActionFuture<>();
tokenService.getAndValidateToken(requestContext, future);
assertNull(future.get());
@@ -55,7 +55,7 @@ import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDI
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoTimeout;
import static org.elasticsearch.xpack.core.security.authc.support.UsernamePasswordToken.basicAuthHeaderValue;
import static org.elasticsearch.xpack.security.SecurityLifecycleService.SECURITY_INDEX_NAME;
- import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.INTERNAL_SECURITY_INDEX;
+ import static org.elasticsearch.xpack.security.support.SecurityIndexManager.INTERNAL_SECURITY_INDEX;
import static org.hamcrest.Matchers.arrayContaining;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.is;
@@ -33,6 +33,7 @@ import org.elasticsearch.xpack.core.security.user.KibanaUser;
import org.elasticsearch.xpack.core.security.user.LogstashSystemUser;
import org.elasticsearch.xpack.core.security.user.User;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.Before;

import java.io.IOException;

@@ -236,16 +237,17 @@ public class NativeUsersStoreTests extends ESTestCase {

private NativeUsersStore startNativeUsersStore() {
SecurityLifecycleService securityLifecycleService = mock(SecurityLifecycleService.class);
- when(securityLifecycleService.isSecurityIndexAvailable()).thenReturn(true);
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
- when(securityLifecycleService.isSecurityIndexMappingUpToDate()).thenReturn(true);
- when(securityLifecycleService.isSecurityIndexOutOfDate()).thenReturn(false);
- when(securityLifecycleService.isSecurityIndexUpToDate()).thenReturn(true);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(securityLifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);
+ when(securityIndex.isMappingUpToDate()).thenReturn(true);
+ when(securityIndex.isIndexUpToDate()).thenReturn(true);
doAnswer((i) -> {
Runnable action = (Runnable) i.getArguments()[1];
action.run();
return null;
- }).when(securityLifecycleService).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
+ }).when(securityIndex).prepareIndexIfNeededThenExecute(any(Consumer.class), any(Runnable.class));
return new NativeUsersStore(Settings.EMPTY, client, securityLifecycleService);
}
@@ -29,6 +29,7 @@ import org.elasticsearch.xpack.core.security.user.User;
import org.elasticsearch.xpack.core.security.user.UsernamesField;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore.ReservedUserInfo;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.junit.Before;
import org.mockito.ArgumentCaptor;

@@ -63,13 +64,16 @@ public class ReservedRealmTests extends ESTestCase {
private static final SecureString EMPTY_PASSWORD = new SecureString("".toCharArray());
private NativeUsersStore usersStore;
private SecurityLifecycleService securityLifecycleService;
+ private SecurityIndexManager securityIndex;

@Before
public void setupMocks() throws Exception {
usersStore = mock(NativeUsersStore.class);
securityLifecycleService = mock(SecurityLifecycleService.class);
- when(securityLifecycleService.isSecurityIndexAvailable()).thenReturn(true);
- when(securityLifecycleService.checkSecurityMappingVersion(any())).thenReturn(true);
+ securityIndex = mock(SecurityIndexManager.class);
+ when(securityLifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);
+ when(securityIndex.checkMappingVersion(any())).thenReturn(true);
mockGetAllReservedUserInfo(usersStore, Collections.emptyMap());
}

@@ -90,7 +94,7 @@ public class ReservedRealmTests extends ESTestCase {
Settings settings = Settings.builder().put(XPackSettings.RESERVED_REALM_ENABLED_SETTING.getKey(), false).build();
final boolean securityIndexExists = randomBoolean();
if (securityIndexExists) {
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);
}
final ReservedRealm reservedRealm =
new ReservedRealm(mock(Environment.class), settings, usersStore,

@@ -120,7 +124,7 @@ public class ReservedRealmTests extends ESTestCase {
final User expectedUser = randomReservedUser(enabled);
final String principal = expectedUser.principal();
final SecureString newPassword = new SecureString("foobar".toCharArray());
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);
doAnswer((i) -> {
ActionListener callback = (ActionListener) i.getArguments()[1];
callback.onResponse(new ReservedUserInfo(Hasher.BCRYPT.hash(newPassword), enabled, false));

@@ -146,10 +150,10 @@ public class ReservedRealmTests extends ESTestCase {
assertEquals(expectedUser, authenticated);
assertThat(expectedUser.enabled(), is(enabled));

- verify(securityLifecycleService, times(2)).isSecurityIndexExisting();
+ verify(securityIndex, times(2)).indexExists();
verify(usersStore, times(2)).getReservedUserInfo(eq(principal), any(ActionListener.class));
final ArgumentCaptor<Predicate> predicateCaptor = ArgumentCaptor.forClass(Predicate.class);
- verify(securityLifecycleService, times(2)).checkSecurityMappingVersion(predicateCaptor.capture());
+ verify(securityIndex, times(2)).checkMappingVersion(predicateCaptor.capture());
verifyVersionPredicate(principal, predicateCaptor.getValue());
verifyNoMoreInteractions(usersStore);
}

@@ -165,10 +169,10 @@ public class ReservedRealmTests extends ESTestCase {
reservedRealm.doLookupUser(principal, listener);
final User user = listener.actionGet();
assertEquals(expectedUser, user);
- verify(securityLifecycleService).isSecurityIndexExisting();
+ verify(securityIndex).indexExists();

final ArgumentCaptor<Predicate> predicateCaptor = ArgumentCaptor.forClass(Predicate.class);
- verify(securityLifecycleService).checkSecurityMappingVersion(predicateCaptor.capture());
+ verify(securityIndex).checkMappingVersion(predicateCaptor.capture());
verifyVersionPredicate(principal, predicateCaptor.getValue());

PlainActionFuture<User> future = new PlainActionFuture<>();

@@ -199,7 +203,7 @@ public class ReservedRealmTests extends ESTestCase {
new AnonymousUser(Settings.EMPTY), securityLifecycleService, new ThreadContext(Settings.EMPTY));
final User expectedUser = randomReservedUser(true);
final String principal = expectedUser.principal();
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);
final RuntimeException e = new RuntimeException("store threw");
doAnswer((i) -> {
ActionListener callback = (ActionListener) i.getArguments()[1];

@@ -212,11 +216,11 @@ public class ReservedRealmTests extends ESTestCase {
ElasticsearchSecurityException securityException = expectThrows(ElasticsearchSecurityException.class, future::actionGet);
assertThat(securityException.getMessage(), containsString("failed to lookup"));

- verify(securityLifecycleService).isSecurityIndexExisting();
+ verify(securityIndex).indexExists();
verify(usersStore).getReservedUserInfo(eq(principal), any(ActionListener.class));

final ArgumentCaptor<Predicate> predicateCaptor = ArgumentCaptor.forClass(Predicate.class);
- verify(securityLifecycleService).checkSecurityMappingVersion(predicateCaptor.capture());
+ verify(securityIndex).checkMappingVersion(predicateCaptor.capture());
verifyVersionPredicate(principal, predicateCaptor.getValue());

verifyNoMoreInteractions(usersStore);

@@ -269,7 +273,7 @@ public class ReservedRealmTests extends ESTestCase {
}

public void testFailedAuthentication() throws Exception {
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);
SecureString password = new SecureString("password".toCharArray());
char[] hash = Hasher.BCRYPT.hash(password);
ReservedUserInfo userInfo = new ReservedUserInfo(hash, true, false);

@@ -302,7 +306,7 @@ public class ReservedRealmTests extends ESTestCase {
MockSecureSettings mockSecureSettings = new MockSecureSettings();
mockSecureSettings.setString("bootstrap.password", "foobar");
Settings settings = Settings.builder().setSecureSettings(mockSecureSettings).build();
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);

final ReservedRealm reservedRealm = new ReservedRealm(mock(Environment.class), settings, usersStore,
new AnonymousUser(Settings.EMPTY), securityLifecycleService, new ThreadContext(Settings.EMPTY));

@@ -324,7 +328,7 @@ public class ReservedRealmTests extends ESTestCase {
MockSecureSettings mockSecureSettings = new MockSecureSettings();
mockSecureSettings.setString("bootstrap.password", "foobar");
Settings settings = Settings.builder().setSecureSettings(mockSecureSettings).build();
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);

final ReservedRealm reservedRealm = new ReservedRealm(mock(Environment.class), settings, usersStore,
new AnonymousUser(Settings.EMPTY), securityLifecycleService, new ThreadContext(Settings.EMPTY));

@@ -351,7 +355,7 @@ public class ReservedRealmTests extends ESTestCase {
MockSecureSettings mockSecureSettings = new MockSecureSettings();
mockSecureSettings.setString("bootstrap.password", "foobar");
Settings settings = Settings.builder().setSecureSettings(mockSecureSettings).build();
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(false);
+ when(securityIndex.indexExists()).thenReturn(false);

final ReservedRealm reservedRealm = new ReservedRealm(mock(Environment.class), settings, usersStore,
new AnonymousUser(Settings.EMPTY), securityLifecycleService, new ThreadContext(Settings.EMPTY));

@@ -369,7 +373,7 @@ public class ReservedRealmTests extends ESTestCase {
final String password = randomAlphaOfLengthBetween(8, 24);
mockSecureSettings.setString("bootstrap.password", password);
Settings settings = Settings.builder().setSecureSettings(mockSecureSettings).build();
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(true);
+ when(securityIndex.indexExists()).thenReturn(true);

final ReservedRealm reservedRealm = new ReservedRealm(mock(Environment.class), settings, usersStore,
new AnonymousUser(Settings.EMPTY), securityLifecycleService, new ThreadContext(Settings.EMPTY));

@@ -391,7 +395,7 @@ public class ReservedRealmTests extends ESTestCase {
final String password = randomAlphaOfLengthBetween(8, 24);
mockSecureSettings.setString("bootstrap.password", password);
Settings settings = Settings.builder().setSecureSettings(mockSecureSettings).build();
- when(securityLifecycleService.isSecurityIndexExisting()).thenReturn(false);
+ when(securityIndex.indexExists()).thenReturn(false);

final ReservedRealm reservedRealm = new ReservedRealm(mock(Environment.class), settings, usersStore,
new AnonymousUser(Settings.EMPTY), securityLifecycleService, new ThreadContext(Settings.EMPTY));
@@ -30,6 +30,7 @@ import org.elasticsearch.xpack.core.security.user.User;
import org.elasticsearch.xpack.security.SecurityLifecycleService;
import org.elasticsearch.xpack.security.authc.support.CachingUsernamePasswordRealm;
import org.elasticsearch.xpack.security.authc.support.UserRoleMapper;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.hamcrest.Matchers;

import java.util.Arrays;

@@ -75,7 +76,9 @@ public class NativeRoleMappingStoreTests extends ESTestCase {

final Client client = mock(Client.class);
final SecurityLifecycleService lifecycleService = mock(SecurityLifecycleService.class);
- when(lifecycleService.isSecurityIndexAvailable()).thenReturn(true);
+ SecurityIndexManager securityIndex = mock(SecurityIndexManager.class);
+ when(lifecycleService.securityIndex()).thenReturn(securityIndex);
+ when(securityIndex.isAvailable()).thenReturn(true);

final NativeRoleMappingStore store = new NativeRoleMappingStore(Settings.EMPTY, client, lifecycleService) {
@Override
@@ -21,7 +21,7 @@ import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

- public class IndexLifecycleManagerIntegTests extends SecurityIntegTestCase {
+ public class SecurityIndexManagerIntegTests extends SecurityIntegTestCase {

public void testConcurrentOperationsTryingToCreateSecurityIndexAndAlias() throws Exception {
assertSecurityIndexActive();
@@ -52,17 +52,17 @@ import org.hamcrest.Matchers;
import org.junit.Before;

import static org.elasticsearch.cluster.routing.RecoverySource.StoreRecoverySource.EXISTING_STORE_INSTANCE;
- import static org.elasticsearch.xpack.security.support.IndexLifecycleManager.TEMPLATE_VERSION_PATTERN;
+ import static org.elasticsearch.xpack.security.support.SecurityIndexManager.TEMPLATE_VERSION_PATTERN;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

- public class IndexLifecycleManagerTests extends ESTestCase {
+ public class SecurityIndexManagerTests extends ESTestCase {

private static final ClusterName CLUSTER_NAME = new ClusterName("index-lifecycle-manager-tests");
private static final ClusterState EMPTY_CLUSTER_STATE = new ClusterState.Builder(CLUSTER_NAME).build();
- public static final String INDEX_NAME = "IndexLifecycleManagerTests";
- private static final String TEMPLATE_NAME = "IndexLifecycleManagerTests-template";
- private IndexLifecycleManager manager;
+ public static final String INDEX_NAME = "SecurityIndexManagerTests";
+ private static final String TEMPLATE_NAME = "SecurityIndexManagerTests-template";
+ private SecurityIndexManager manager;
private Map<Action<?, ?, ?>, Map<ActionRequest, ActionListener<?>>> actions;

@Before

@@ -86,7 +86,7 @@ public class IndexLifecycleManagerTests extends ESTestCase {
actions.put(action, map);
}
};
- manager = new IndexLifecycleManager(Settings.EMPTY, client, INDEX_NAME);
+ manager = new SecurityIndexManager(Settings.EMPTY, client, INDEX_NAME);
}

public void testIndexWithUpToDateMappingAndTemplate() throws IOException {

@@ -221,7 +221,7 @@ public class IndexLifecycleManagerTests extends ESTestCase {

// index doesn't exist and now exists with wrong format
ClusterState.Builder clusterStateBuilder = createClusterState(INDEX_NAME, TEMPLATE_NAME,
- IndexLifecycleManager.INTERNAL_INDEX_FORMAT - 1);
+ SecurityIndexManager.INTERNAL_INDEX_FORMAT - 1);
markShardsAvailable(clusterStateBuilder);
manager.clusterChanged(event(clusterStateBuilder));
assertTrue(listenerCalled.get());

@@ -235,7 +235,7 @@ public class IndexLifecycleManagerTests extends ESTestCase {

listenerCalled.set(false);
// index doesn't exist and now exists with correct format
- clusterStateBuilder = createClusterState(INDEX_NAME, TEMPLATE_NAME, IndexLifecycleManager.INTERNAL_INDEX_FORMAT);
+ clusterStateBuilder = createClusterState(INDEX_NAME, TEMPLATE_NAME, SecurityIndexManager.INTERNAL_INDEX_FORMAT);
markShardsAvailable(clusterStateBuilder);
manager.clusterChanged(event(clusterStateBuilder));
assertFalse(listenerCalled.get());

@@ -255,7 +255,7 @@ public class IndexLifecycleManagerTests extends ESTestCase {
}

public static ClusterState.Builder createClusterState(String indexName, String templateName) throws IOException {
- return createClusterState(indexName, templateName, templateName, IndexLifecycleManager.INTERNAL_INDEX_FORMAT);
+ return createClusterState(indexName, templateName, templateName, SecurityIndexManager.INTERNAL_INDEX_FORMAT);
}

public static ClusterState.Builder createClusterState(String indexName, String templateName, int format) throws IOException {
@@ -23,7 +23,7 @@ import org.elasticsearch.test.rest.ESRestTestCase;
import org.elasticsearch.xpack.core.monitoring.exporter.MonitoringTemplateUtils;
import org.elasticsearch.xpack.core.watcher.client.WatchSourceBuilder;
import org.elasticsearch.xpack.core.watcher.support.xcontent.ObjectPath;
- import org.elasticsearch.xpack.security.support.IndexLifecycleManager;
+ import org.elasticsearch.xpack.security.support.SecurityIndexManager;
import org.elasticsearch.xpack.test.rest.XPackRestTestHelper;
import org.elasticsearch.xpack.watcher.actions.logging.LoggingAction;
import org.elasticsearch.xpack.watcher.common.text.TextTemplate;

@@ -138,7 +138,7 @@ public class FullClusterRestartIT extends ESRestTestCase {
logger.info("settings map {}", settingsMap);
if (settingsMap.containsKey("index")) {
int format = Integer.parseInt(String.valueOf(((Map<String, Object>)settingsMap.get("index")).get("format")));
- needsUpgrade = format == IndexLifecycleManager.INTERNAL_INDEX_FORMAT ? false : true;
+ needsUpgrade = format == SecurityIndexManager.INTERNAL_INDEX_FORMAT ? false : true;
} else {
needsUpgrade = true;
}
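As an aside, the `format == SecurityIndexManager.INTERNAL_INDEX_FORMAT ? false : true` expression carried over above is just `needsUpgrade = format != SecurityIndexManager.INTERNAL_INDEX_FORMAT;` written the long way; the commit deliberately leaves the logic untouched and only swaps the class name.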
@@ -8,6 +8,7 @@ package org.elasticsearch.upgrades;
import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;

+ import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.TimeUnits;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;

@@ -29,6 +30,7 @@ import static java.util.Collections.singletonMap;
import static org.hamcrest.Matchers.is;

@TimeoutSuite(millis = 5 * TimeUnits.MINUTE) // to account for slow as hell VMs
+ @LuceneTestCase.AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/30456")
public class UpgradeClusterClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {

/**