From e32c7f1d72ff8eca8b1529c5bfc7a83aff8c6c9b Mon Sep 17 00:00:00 2001
From: David Pilato
Date: Fri, 16 Dec 2016 16:45:56 +0100
Subject: [PATCH 01/26] Explain how to use bulk processor in a test context

When using a bulk processor in tests, you might write something like:

```java
BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {}

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
})
        .setBulkActions(10000)
        .setFlushInterval(TimeValue.timeValueSeconds(10))
        .build();

for (int i = 0; i < 10000; i++) {
    bulkProcessor.add(new IndexRequest("foo", "bar", "doc_" + i)
            .source(jsonBuilder().startObject().field("foo", "bar").endObject()));
}

bulkProcessor.flush();
client.admin().indices().prepareRefresh("foo").get();
SearchResponse response = client.prepareSearch("foo").get();
// response does not contain any hit
```

The problem is that by default the bulk processor sets the number of concurrent
requests to 1, which means an asynchronous BulkRequestHandler is used behind the
scenes.

When you call `flush()` in a test, you expect it to flush all the content of the
bulk so you can search for your docs. But because of the asynchronous handling,
there is a good chance that none of the documents have been indexed yet when you
call the `refresh` method.

We should advise in our Java guide to explicitly set concurrent requests to `0`
so that the synchronous BulkRequestHandler is used behind the scenes.

```java
BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {}

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
})
        .setBulkActions(5000)
        .setFlushInterval(TimeValue.timeValueSeconds(10))
        .setConcurrentRequests(0)
        .build();
```

Closes #22158.
---
 docs/java-api/docs/bulk.asciidoc | 52 +++++++++++++++++++++++++++-----
 1 file changed, 44 insertions(+), 8 deletions(-)

diff --git a/docs/java-api/docs/bulk.asciidoc b/docs/java-api/docs/bulk.asciidoc
index 288bd8415ab..07849164a68 100644
--- a/docs/java-api/docs/bulk.asciidoc
+++ b/docs/java-api/docs/bulk.asciidoc
@@ -87,13 +87,24 @@ BulkProcessor bulkProcessor = BulkProcessor.builder(
 <5> We want to execute the bulk every 10 000 requests
 <6> We want to flush the bulk every 5mb
 <7> We want to flush the bulk every 5 seconds whatever the number of requests
-<8> Set the number of concurrent requests. A value of 0 means that only a single request will be allowed to be
+<8> Set the number of concurrent requests. A value of 0 means that only a single request will be allowed to be
 executed. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests.
 <9> Set a custom backoff policy which will initially wait for 100ms, increase exponentially and retries up to three
 times. A retry is attempted whenever one or more bulk item requests have failed with an `EsRejectedExecutionException`
 which indicates that there were too little compute resources available for processing the request. To disable backoff,
 pass `BackoffPolicy.noBackoff()`.
+By default, `BulkProcessor`:
+
+* sets bulkActions to `1000`
+* sets bulkSize to `5mb`
+* does not set flushInterval
+* sets concurrentRequests to 1, which means an asynchronous execution of the flush operation.
+* sets backoffPolicy to an exponential backoff with 8 retries and a start delay of 50ms. The total wait time is roughly 5.1 seconds.
+
+[[java-docs-bulk-processor-requests]]
+==== Add requests
+
 Then you can simply add your requests to the `BulkProcessor`:

 [source,java]
 --------------------------------------------------
@@ -102,13 +113,8 @@ bulkProcessor.add(new IndexRequest("twitter", "tweet", "1").source(/* your doc h
 bulkProcessor.add(new DeleteRequest("twitter", "tweet", "2"));
 --------------------------------------------------

-By default, `BulkProcessor`:
-
-* sets bulkActions to `1000`
-* sets bulkSize to `5mb`
-* does not set flushInterval
-* sets concurrentRequests to 1
-* sets backoffPolicy to an exponential backoff with 8 retries and a start delay of 50ms. The total wait time is roughly 5.1 seconds.
+[[java-docs-bulk-processor-close]]
+==== Closing the Bulk Processor

 When all documents are loaded to the `BulkProcessor` it can be closed by using `awaitClose` or `close` methods:
@@ -129,3 +135,33 @@ Both methods flush any remaining documents and disable all other scheduled flush
 all bulk requests to complete then returns `true`, if the specified waiting time elapses before all bulk requests complete,
 `false` is returned. The `close` method doesn't wait for any remaining bulk requests to complete and exits immediately.
+
+[[java-docs-bulk-processor-tests]]
+==== Using Bulk Processor in tests
+
+If you are running tests with Elasticsearch and are using the `BulkProcessor` to populate your dataset,
+you should set the number of concurrent requests to `0` so that the flush operation of the bulk is executed
+in a synchronous manner:
+
+[source,java]
+--------------------------------------------------
+BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() { /* Listener methods */ })
+        .setBulkActions(10000)
+        .setConcurrentRequests(0)
+        .build();
+
+// Add your requests
+bulkProcessor.add(/* Your requests */);
+
+// Flush any remaining requests
+bulkProcessor.flush();
+
+// Or close the bulkProcessor if you don't need it anymore
+bulkProcessor.close();
+
+// Refresh your indices
+client.admin().indices().prepareRefresh().get();
+
+// Now you can start searching!
+client.prepareSearch().get();
+--------------------------------------------------
+

From d44de0cecc0a94a40c06b233336272a5ae6e0f01 Mon Sep 17 00:00:00 2001
From: Areek Zillur
Date: Fri, 16 Dec 2016 12:06:02 -0500
Subject: [PATCH 02/26] Remove deprecated _suggest endpoint (#22203)

In #20305, the _suggest endpoint was deprecated in favour of the _search
endpoint. This commit removes the dedicated _suggest endpoint entirely
from master.
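For reference, a suggest call that previously went through `_suggest` can be
issued through the search API instead. A minimal sketch against the Java client
follows; the index, field, and suggestion names are illustrative, not taken
from this patch:

```java
// Request a term suggestion through _search instead of the removed _suggest endpoint.
SearchResponse response = client.prepareSearch("twitter")
        .suggest(new SuggestBuilder()
                .addSuggestion("my_suggestion",
                        SuggestBuilders.termSuggestion("message").text("serach")))
        .get();

// Suggestions now come back as part of the search response.
Suggest suggest = response.getSuggest();
```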
--- .../elasticsearch/action/ActionModule.java | 2 - .../rest/action/search/RestSuggestAction.java | 97 ------ .../elasticsearch/search/suggest/Suggest.java | 10 +- .../resources/rest-api-spec/api/suggest.json | 44 --- .../rest-api-spec/test/suggest/10_basic.yaml | 16 - .../test/suggest/110_completion.yaml | 314 ------------------ 6 files changed, 1 insertion(+), 482 deletions(-) delete mode 100644 core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java delete mode 100644 rest-api-spec/src/main/resources/rest-api-spec/api/suggest.json delete mode 100644 rest-api-spec/src/main/resources/rest-api-spec/test/suggest/110_completion.yaml diff --git a/core/src/main/java/org/elasticsearch/action/ActionModule.java b/core/src/main/java/org/elasticsearch/action/ActionModule.java index 1d0f816c52e..a24ed5f8083 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionModule.java +++ b/core/src/main/java/org/elasticsearch/action/ActionModule.java @@ -310,7 +310,6 @@ import org.elasticsearch.rest.action.search.RestExplainAction; import org.elasticsearch.rest.action.search.RestMultiSearchAction; import org.elasticsearch.rest.action.search.RestSearchAction; import org.elasticsearch.rest.action.search.RestSearchScrollAction; -import org.elasticsearch.rest.action.search.RestSuggestAction; import org.elasticsearch.threadpool.ThreadPool; import static java.util.Collections.unmodifiableList; @@ -550,7 +549,6 @@ public class ActionModule extends AbstractModule { registerRestHandler(handlers, RestMultiGetAction.class); registerRestHandler(handlers, RestDeleteAction.class); registerRestHandler(handlers, org.elasticsearch.rest.action.document.RestCountAction.class); - registerRestHandler(handlers, RestSuggestAction.class); registerRestHandler(handlers, RestTermVectorsAction.class); registerRestHandler(handlers, RestMultiTermVectorsAction.class); registerRestHandler(handlers, RestBulkAction.class); diff --git a/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java b/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java deleted file mode 100644 index e1b4f945e89..00000000000 --- a/core/src/main/java/org/elasticsearch/rest/action/search/RestSuggestAction.java +++ /dev/null @@ -1,97 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.rest.action.search; - -import org.elasticsearch.action.search.SearchRequest; -import org.elasticsearch.action.search.SearchResponse; -import org.elasticsearch.action.support.IndicesOptions; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Strings; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.query.QueryParseContext; -import org.elasticsearch.rest.BaseRestHandler; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.RestStatus; -import org.elasticsearch.rest.action.RestBuilderListener; -import org.elasticsearch.search.SearchRequestParsers; -import org.elasticsearch.search.builder.SearchSourceBuilder; -import org.elasticsearch.search.suggest.Suggest; -import org.elasticsearch.search.suggest.SuggestBuilder; - -import java.io.IOException; - -import static org.elasticsearch.rest.RestRequest.Method.GET; -import static org.elasticsearch.rest.RestRequest.Method.POST; -import static org.elasticsearch.rest.action.RestActions.buildBroadcastShardsHeader; - -public class RestSuggestAction extends BaseRestHandler { - - private final SearchRequestParsers searchRequestParsers; - - @Inject - public RestSuggestAction(Settings settings, RestController controller, - SearchRequestParsers searchRequestParsers) { - super(settings); - this.searchRequestParsers = searchRequestParsers; - controller.registerAsDeprecatedHandler(POST, "/_suggest", this, - "[POST /_suggest] is deprecated! Use [POST /_search] instead.", deprecationLogger); - controller.registerAsDeprecatedHandler(GET, "/_suggest", this, - "[GET /_suggest] is deprecated! Use [GET /_search] instead.", deprecationLogger); - controller.registerAsDeprecatedHandler(POST, "/{index}/_suggest", this, - "[POST /{index}/_suggest] is deprecated! Use [POST /{index}/_search] instead.", deprecationLogger); - controller.registerAsDeprecatedHandler(GET, "/{index}/_suggest", this, - "[GET /{index}/_suggest] is deprecated! 
Use [GET /{index}/_search] instead.", deprecationLogger); - } - - @Override - public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { - final SearchRequest searchRequest = new SearchRequest( - Strings.splitStringByCommaToArray(request.param("index")), new SearchSourceBuilder()); - searchRequest.indicesOptions(IndicesOptions.fromRequest(request, searchRequest.indicesOptions())); - try (XContentParser parser = request.contentOrSourceParamParser()) { - final QueryParseContext context = new QueryParseContext(searchRequestParsers.queryParsers, parser, parseFieldMatcher); - searchRequest.source().suggest(SuggestBuilder.fromXContent(context, searchRequestParsers.suggesters)); - } - searchRequest.routing(request.param("routing")); - searchRequest.preference(request.param("preference")); - return channel -> client.search(searchRequest, new RestBuilderListener(channel) { - @Override - public RestResponse buildResponse(SearchResponse response, XContentBuilder builder) throws Exception { - RestStatus restStatus = RestStatus.status(response.getSuccessfulShards(), - response.getTotalShards(), response.getShardFailures()); - builder.startObject(); - buildBroadcastShardsHeader(builder, request, response.getTotalShards(), - response.getSuccessfulShards(), response.getFailedShards(), response.getShardFailures()); - Suggest suggest = response.getSuggest(); - if (suggest != null) { - suggest.toInnerXContent(builder, request); - } - builder.endObject(); - return new BytesRestResponse(restStatus, builder); - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java b/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java index c40b1441000..fc372ee6b2d 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/Suggest.java @@ -150,18 +150,10 @@ public class Suggest implements IterableNAME object - */ - public XContentBuilder toInnerXContent(XContentBuilder builder, Params params) throws IOException { for (Suggestion suggestion : suggestions) { suggestion.toXContent(builder, params); } + builder.endObject(); return builder; } diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/suggest.json b/rest-api-spec/src/main/resources/rest-api-spec/api/suggest.json deleted file mode 100644 index 72ed3aa6db1..00000000000 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/suggest.json +++ /dev/null @@ -1,44 +0,0 @@ -{ - "suggest": { - "documentation": "http://www.elastic.co/guide/en/elasticsearch/reference/master/search-suggesters.html", - "methods": ["POST"], - "url": { - "path": "/_suggest", - "paths": ["/_suggest", "/{index}/_suggest"], - "parts": { - "index": { - "type" : "list", - "description" : "A comma-separated list of index names to restrict the operation; use `_all` or empty string to perform the operation on all indices" - } - }, - "params": { - "ignore_unavailable": { - "type" : "boolean", - "description" : "Whether specified concrete indices should be ignored when unavailable (missing or closed)" - }, - "allow_no_indices": { - "type" : "boolean", - "description" : "Whether to ignore if a wildcard indices expression resolves into no concrete indices. 
(This includes `_all` string or when no indices have been specified)" - }, - "expand_wildcards": { - "type" : "enum", - "options" : ["open","closed","none","all"], - "default" : "open", - "description" : "Whether to expand wildcard expression to concrete indices that are open, closed or both." - }, - "preference": { - "type" : "string", - "description" : "Specify the node or shard the operation should be performed on (default: random)" - }, - "routing": { - "type" : "string", - "description" : "Specific routing value" - } - } - }, - "body": { - "description" : "The request definition", - "required" : true - } - } -} \ No newline at end of file diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/10_basic.yaml b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/10_basic.yaml index 8df87865aae..44ba197f9e0 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/10_basic.yaml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/10_basic.yaml @@ -24,19 +24,3 @@ setup: - match: {suggest.test_suggestion.1.options.0.text: amsterdam} - match: {suggest.test_suggestion.2.options.0.text: meetup} ---- -"Suggest API should have deprecation warning": - - skip: - features: 'warnings' - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - test_suggestion: - text: "The Amsterdma meetpu" - term: - field: body - - - match: {test_suggestion.1.options.0.text: amsterdam} - - match: {test_suggestion.2.options.0.text: meetup} diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/110_completion.yaml b/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/110_completion.yaml deleted file mode 100644 index dbc0b5381ad..00000000000 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/suggest/110_completion.yaml +++ /dev/null @@ -1,314 +0,0 @@ -# This test creates one huge mapping in the setup -# Every test should use its own field to make sure it works - -setup: - - - do: - indices.create: - index: test - body: - mappings: - test: - "properties": - "suggest_1": - "type" : "completion" - "suggest_2": - "type" : "completion" - "suggest_3": - "type" : "completion" - "suggest_4": - "type" : "completion" - "suggest_5a": - "type" : "completion" - "suggest_5b": - "type" : "completion" - "suggest_6": - "type" : "completion" - title: - type: keyword - ---- -"Simple suggestion should work": - - skip: - features: 'warnings' - - - do: - index: - index: test - type: test - id: 1 - body: - suggest_1: "bar" - - - do: - index: - index: test - type: test - id: 2 - body: - suggest_1: "baz" - - - do: - indices.refresh: {} - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "b" - completion: - field: suggest_1 - - - length: { result: 1 } - - length: { result.0.options: 2 } - ---- -"Simple suggestion array should work": - - skip: - features: 'warnings' - - - do: - index: - index: test - type: test - id: 1 - body: - suggest_2: ["bar", "foo"] - - - do: - indices.refresh: {} - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "f" - completion: - field: suggest_2 - - - length: { result: 1 } - - length: { result.0.options: 1 } - - match: { result.0.options.0.text: "foo" } - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." 
- suggest: - body: - result: - text: "b" - completion: - field: suggest_2 - - - length: { result: 1 } - - length: { result.0.options: 1 } - - match: { result.0.options.0.text: "bar" } - ---- -"Suggestion entry should work": - - skip: - features: 'warnings' - - - do: - index: - index: test - type: test - id: 1 - body: - suggest_3: - input: "bar" - weight: 2 - - - do: - index: - index: test - type: test - id: 2 - body: - suggest_3: - input: "baz" - weight: 3 - - - do: - indices.refresh: {} - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "b" - completion: - field: suggest_3 - - - length: { result: 1 } - - length: { result.0.options: 2 } - - match: { result.0.options.0.text: "baz" } - - match: { result.0.options.1.text: "bar" } - ---- -"Suggestion entry array should work": - - skip: - features: 'warnings' - - - do: - index: - index: test - type: test - id: 1 - body: - suggest_4: - - input: "bar" - weight: 3 - - input: "fo" - weight: 3 - - - do: - index: - index: test - type: test - id: 2 - body: - suggest_4: - - input: "baz" - weight: 2 - - input: "foo" - weight: 1 - - - do: - indices.refresh: {} - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "b" - completion: - field: suggest_4 - - - length: { result: 1 } - - length: { result.0.options: 2 } - - match: { result.0.options.0.text: "bar" } - - match: { result.0.options.1.text: "baz" } - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "f" - completion: - field: suggest_4 - - - length: { result: 1 } - - length: { result.0.options: 2 } - - match: { result.0.options.0.text: "fo" } - - match: { result.0.options.1.text: "foo" } - ---- -"Multiple Completion fields should work": - - skip: - features: 'warnings' - - - do: - index: - index: test - type: test - id: 1 - body: - suggest_5a: "bar" - suggest_5b: "baz" - - - do: - indices.refresh: {} - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "b" - completion: - field: suggest_5a - - - length: { result: 1 } - - length: { result.0.options: 1 } - - match: { result.0.options.0.text: "bar" } - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." - suggest: - body: - result: - text: "b" - completion: - field: suggest_5b - - - length: { result: 1 } - - length: { result.0.options: 1 } - - match: { result.0.options.0.text: "baz" } - ---- -"Suggestions with source should work": - - skip: - features: 'warnings' - - - do: - index: - index: test - type: test - id: 1 - body: - suggest_6: - input: "bar" - weight: 2 - title: "title_bar" - count: 4 - - - do: - index: - index: test - type: test - id: 2 - body: - suggest_6: - input: "baz" - weight: 3 - title: "title_baz" - count: 3 - - - do: - indices.refresh: {} - - - do: - warnings: - - "[POST /_suggest] is deprecated! Use [POST /_search] instead." 
-      suggest:
-        body:
-          result:
-            text: "b"
-            completion:
-              field: suggest_6
-
-  - length: { result: 1 }
-  - length: { result.0.options: 2 }
-  - match: { result.0.options.0.text: "baz" }
-  - match: { result.0.options.0._index: "test" }
-  - match: { result.0.options.0._type: "test" }
-  - match: { result.0.options.0._source.title: "title_baz" }
-  - match: { result.0.options.0._source.count: 3 }
-  - match: { result.0.options.1.text: "bar" }
-  - match: { result.0.options.1._index: "test" }
-  - match: { result.0.options.1._type: "test" }
-  - match: { result.0.options.1._source.title: "title_bar" }
-  - match: { result.0.options.1._source.count: 4 }

From bb3716794632b4ac91d76434089415eb825f157a Mon Sep 17 00:00:00 2001
From: Tal Levy
Date: Fri, 16 Dec 2016 10:17:27 -0800
Subject: [PATCH 03/26] Enables the ability to inject serialized json fields
 into root of document. (#22179)

The JSON processor has an optional field called "target_field". If you don't
specify target_field, then target_field defaults to what you specified as
"field". There isn't any way to add the fields to the root of a document. By
setting `add_to_root`, serialized fields will now be inserted into the
top-level fields of the ingest document.

Closes #21898.
---
 docs/reference/ingest/ingest-node.asciidoc |  1 +
 .../ingest/common/JsonProcessor.java | 33 ++++++++++++++++---
 .../common/JsonProcessorFactoryTests.java | 25 ++++++++++++++
 .../ingest/common/JsonProcessorTests.java | 29 ++++++++++++++--
 4 files changed, 81 insertions(+), 7 deletions(-)

diff --git a/docs/reference/ingest/ingest-node.asciidoc b/docs/reference/ingest/ingest-node.asciidoc
index ce92e7c8e74..3cdcfd5d2cd 100644
--- a/docs/reference/ingest/ingest-node.asciidoc
+++ b/docs/reference/ingest/ingest-node.asciidoc
@@ -1473,6 +1473,7 @@ Converts a JSON string into a structured JSON object.
 | Name | Required | Default | Description
 | `field` | yes | - | The field to be parsed
 | `target_field` | no | `field` | The field to insert the converted structured object into
+| `add_to_root` | no | false | Flag that forces the serialized json to be injected into the top level of the document. `target_field` must not be set when this option is chosen.
 |======

 [source,js]
diff --git a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/JsonProcessor.java b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/JsonProcessor.java
index 024c3aef941..cb734e7bef4 100644
--- a/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/JsonProcessor.java
+++ b/modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/JsonProcessor.java
@@ -28,6 +28,8 @@ import org.elasticsearch.ingest.Processor;

 import java.util.Map;

+import static org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException;
+
 /**
  * Processor that serializes a string-valued field into a
  * map of maps.
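As a rough illustration of the factory contract this patch introduces, a
processor with `add_to_root` enabled might be built like the following sketch,
which mirrors the unit tests further down; the tag and field name are made up:

```java
// Build a json processor that merges the parsed JSON into the document root
// (assumes java.util.Map and java.util.HashMap are imported).
JsonProcessor.Factory factory = new JsonProcessor.Factory();
Map<String, Object> config = new HashMap<>();
config.put("field", "message_json");
config.put("add_to_root", true); // combining this with target_field is rejected
JsonProcessor processor = factory.create(null, "my_json_processor", config);
```

Per the factory change below, setting `target_field` together with
`add_to_root` fails with a configuration exception.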
@@ -38,11 +40,13 @@ public final class JsonProcessor extends AbstractProcessor { private final String field; private final String targetField; + private final boolean addToRoot; - JsonProcessor(String tag, String field, String targetField) { + JsonProcessor(String tag, String field, String targetField, boolean addToRoot) { super(tag); this.field = field; this.targetField = targetField; + this.addToRoot = addToRoot; } public String getField() { @@ -53,12 +57,22 @@ public final class JsonProcessor extends AbstractProcessor { return targetField; } + boolean isAddToRoot() { + return addToRoot; + } + @Override public void execute(IngestDocument document) throws Exception { String stringValue = document.getFieldValue(field, String.class); try { Map mapValue = JsonXContent.jsonXContent.createParser(stringValue).map(); - document.setFieldValue(targetField, mapValue); + if (addToRoot) { + for (Map.Entry entry : mapValue.entrySet()) { + document.setFieldValue(entry.getKey(), entry.getValue()); + } + } else { + document.setFieldValue(targetField, mapValue); + } } catch (JsonParseException e) { throw new IllegalArgumentException(e); } @@ -74,8 +88,19 @@ public final class JsonProcessor extends AbstractProcessor { public JsonProcessor create(Map registry, String processorTag, Map config) throws Exception { String field = ConfigurationUtils.readStringProperty(TYPE, processorTag, config, "field"); - String targetField = ConfigurationUtils.readStringProperty(TYPE, processorTag, config, "target_field", field); - return new JsonProcessor(processorTag, field, targetField); + String targetField = ConfigurationUtils.readOptionalStringProperty(TYPE, processorTag, config, "target_field"); + boolean addToRoot = ConfigurationUtils.readBooleanProperty(TYPE, processorTag, config, "add_to_root", false); + + if (addToRoot && targetField != null) { + throw newConfigurationException(TYPE, processorTag, "target_field", + "Cannot set a target field while also setting `add_to_root` to true"); + } + + if (targetField == null) { + targetField = field; + } + + return new JsonProcessor(processorTag, field, targetField, addToRoot); } } } diff --git a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorFactoryTests.java b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorFactoryTests.java index 6b935b8795c..456b31f8720 100644 --- a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorFactoryTests.java +++ b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorFactoryTests.java @@ -48,6 +48,19 @@ public class JsonProcessorFactoryTests extends ESTestCase { assertThat(jsonProcessor.getTargetField(), equalTo(randomTargetField)); } + public void testCreateWithAddToRoot() throws Exception { + String processorTag = randomAsciiOfLength(10); + String randomField = randomAsciiOfLength(10); + Map config = new HashMap<>(); + config.put("field", randomField); + config.put("add_to_root", true); + JsonProcessor jsonProcessor = FACTORY.create(null, processorTag, config); + assertThat(jsonProcessor.getTag(), equalTo(processorTag)); + assertThat(jsonProcessor.getField(), equalTo(randomField)); + assertThat(jsonProcessor.getTargetField(), equalTo(randomField)); + assertTrue(jsonProcessor.isAddToRoot()); + } + public void testCreateWithDefaultTarget() throws Exception { String processorTag = randomAsciiOfLength(10); String randomField = randomAsciiOfLength(10); @@ -66,4 +79,16 @@ public class JsonProcessorFactoryTests extends 
ESTestCase { () -> FACTORY.create(null, processorTag, config)); assertThat(exception.getMessage(), equalTo("[field] required property is missing")); } + + public void testCreateWithBothTargetFieldAndAddToRoot() throws Exception { + String randomField = randomAsciiOfLength(10); + String randomTargetField = randomAsciiOfLength(5); + Map config = new HashMap<>(); + config.put("field", randomField); + config.put("target_field", randomTargetField); + config.put("add_to_root", true); + ElasticsearchException exception = expectThrows(ElasticsearchParseException.class, + () -> FACTORY.create(null, randomAsciiOfLength(10), config)); + assertThat(exception.getMessage(), equalTo("[target_field] Cannot set a target field while also setting `add_to_root` to true")); + } } diff --git a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorTests.java b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorTests.java index c62ebbb12ab..2b2c521417c 100644 --- a/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorTests.java +++ b/modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/JsonProcessorTests.java @@ -39,7 +39,7 @@ public class JsonProcessorTests extends ESTestCase { String processorTag = randomAsciiOfLength(3); String randomField = randomAsciiOfLength(3); String randomTargetField = randomAsciiOfLength(2); - JsonProcessor jsonProcessor = new JsonProcessor(processorTag, randomField, randomTargetField); + JsonProcessor jsonProcessor = new JsonProcessor(processorTag, randomField, randomTargetField, false); Map document = new HashMap<>(); Map randomJsonMap = RandomDocumentPicks.randomSource(random()); @@ -54,7 +54,7 @@ public class JsonProcessorTests extends ESTestCase { } public void testInvalidJson() { - JsonProcessor jsonProcessor = new JsonProcessor("tag", "field", "target_field"); + JsonProcessor jsonProcessor = new JsonProcessor("tag", "field", "target_field", false); Map document = new HashMap<>(); document.put("field", "invalid json"); IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document); @@ -66,11 +66,34 @@ public class JsonProcessorTests extends ESTestCase { } public void testFieldMissing() { - JsonProcessor jsonProcessor = new JsonProcessor("tag", "field", "target_field"); + JsonProcessor jsonProcessor = new JsonProcessor("tag", "field", "target_field", false); Map document = new HashMap<>(); IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document); Exception exception = expectThrows(IllegalArgumentException.class, () -> jsonProcessor.execute(ingestDocument)); assertThat(exception.getMessage(), equalTo("field [field] not present as part of path [field]")); } + + @SuppressWarnings("unchecked") + public void testAddToRoot() throws Exception { + String processorTag = randomAsciiOfLength(3); + String randomTargetField = randomAsciiOfLength(2); + JsonProcessor jsonProcessor = new JsonProcessor(processorTag, "a", randomTargetField, true); + Map document = new HashMap<>(); + + String json = "{\"a\": 1, \"b\": 2}"; + document.put("a", json); + document.put("c", "see"); + + IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), document); + jsonProcessor.execute(ingestDocument); + + Map expected = new HashMap<>(); + expected.put("a", 1); + expected.put("b", 2); + expected.put("c", "see"); + IngestDocument expectedIngestDocument = RandomDocumentPicks.randomIngestDocument(random(), expected); + + 
assertIngestDocument(ingestDocument, expectedIngestDocument); + } } From 2265be69d244b04b9c36dde6c6f66aa295691c67 Mon Sep 17 00:00:00 2001 From: Luca Cavanna Date: Fri, 16 Dec 2016 19:33:12 +0100 Subject: [PATCH 04/26] Deprecate XContentType auto detection methods in XContentFactory (#22181) With recent changes to our parsing code we have drastically reduced the places where we auto-detect the content type from the input. The usage of these methods spread in our codebase for no reason, given that in most of the cases we know the content type upfront and we don't need any auto-detection mechanism. Deprecating these methods is a way to try and make sure that these methods are carefully used, and hopefully not introduced in newly written code. We have yet to fix the REST layer to read the Content-Type header, which is the long term solution, but for now we just want to make sure that the usage of these methods doesn't spread any further. Relates to #19388 --- .../common/xcontent/XContentFactory.java | 53 +++++++++++++++++-- 1 file changed, 50 insertions(+), 3 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java index a5350e3c662..dc9d1c493a3 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java @@ -141,7 +141,12 @@ public class XContentFactory { /** * Guesses the content type based on the provided char sequence. + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. */ + @Deprecated public static XContentType xContentType(CharSequence content) { int length = content.length() < GUESS_HEADER_LENGTH ? content.length() : GUESS_HEADER_LENGTH; if (length == 0) { @@ -174,8 +179,13 @@ public class XContentFactory { } /** - * Guesses the content (type) based on the provided char sequence. + * Guesses the content (type) based on the provided char sequence and returns the corresponding {@link XContent} + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. */ + @Deprecated public static XContent xContent(CharSequence content) { XContentType type = xContentType(content); if (type == null) { @@ -185,15 +195,24 @@ public class XContentFactory { } /** - * Guesses the content type based on the provided bytes. + * Guesses the content type based on the provided bytes and returns the corresponding {@link XContent} + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. 
*/ + @Deprecated public static XContent xContent(byte[] data) { return xContent(data, 0, data.length); } /** - * Guesses the content type based on the provided bytes. + * Guesses the content type based on the provided bytes and returns the corresponding {@link XContent} + * + * @deprecated guessing the content type should not be needed ideally. We should rather know the content type upfront or read it + * from headers. Till we fixed the REST layer to read the Content-Type header, that should be the only place where guessing is needed. */ + @Deprecated public static XContent xContent(byte[] data, int offset, int length) { XContentType type = xContentType(data, offset, length); if (type == null) { @@ -204,14 +223,24 @@ public class XContentFactory { /** * Guesses the content type based on the provided bytes. + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. */ + @Deprecated public static XContentType xContentType(byte[] data) { return xContentType(data, 0, data.length); } /** * Guesses the content type based on the provided input stream without consuming it. + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. */ + @Deprecated public static XContentType xContentType(InputStream si) throws IOException { if (si.markSupported() == false) { throw new IllegalArgumentException("Cannot guess the xcontent type without mark/reset support on " + si.getClass()); @@ -228,11 +257,24 @@ public class XContentFactory { /** * Guesses the content type based on the provided bytes. + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. */ + @Deprecated public static XContentType xContentType(byte[] data, int offset, int length) { return xContentType(new BytesArray(data, offset, length)); } + /** + * Guesses the content type based on the provided bytes and returns the corresponding {@link XContent} + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. + * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed. + * This method is deprecated to prevent usages of it from spreading further without specific reasons. + */ + @Deprecated public static XContent xContent(BytesReference bytes) { XContentType type = xContentType(bytes); if (type == null) { @@ -243,7 +285,12 @@ public class XContentFactory { /** * Guesses the content type based on the provided bytes. + * + * @deprecated the content type should not be guessed except for few cases where we effectively don't know the content type. 
+ * The REST layer should move to reading the Content-Type header instead. There are other places where auto-detection may be needed.
+ * This method is deprecated to prevent usages of it from spreading further without specific reasons.
      */
+    @Deprecated
     public static XContentType xContentType(BytesReference bytes) {
         int length = bytes.length();
         if (length == 0) {

From 30806af6bdde6ef0cf26a633b822de769161d2c0 Mon Sep 17 00:00:00 2001
From: Jason Tedor
Date: Fri, 16 Dec 2016 18:20:11 -0500
Subject: [PATCH 05/26] Rename bootstrap.seccomp to bootstrap.system_call_filter

We try to install a system call filter on various operating systems
(Linux, macOS, BSD, Solaris, and Windows) but the setting
(bootstrap.seccomp) to control this is named after the Linux
implementation (seccomp). This commit replaces this setting with
bootstrap.system_call_filter. For backwards compatibility reasons, we
fall back to bootstrap.seccomp and log a deprecation message if
bootstrap.seccomp is set. We intend to remove this fallback in 6.0.0.
Note that now is the time to make this change: it's likely that most
users are not configuring this setting anyway, as prior to version 5.2.0
(currently unreleased) it was not necessary to configure anything to
enable a node to start up if the system call filter failed to install
(we marched on anyway), but starting in 5.2.0 it will be necessary in
this case.

Relates #22226
---
 .../main/java/org/elasticsearch/bootstrap/Bootstrap.java | 6 +++++-
 .../java/org/elasticsearch/bootstrap/BootstrapChecks.java | 6 +++---
 .../java/org/elasticsearch/bootstrap/BootstrapSettings.java | 4 ++--
 .../org/elasticsearch/common/settings/ClusterSettings.java | 2 +-
 .../org/elasticsearch/bootstrap/BootstrapCheckTests.java | 6 +++---
 .../org/elasticsearch/bootstrap/BootstrapSettingsTests.java | 2 +-
 docs/reference/setup/bootstrap-checks.asciidoc | 2 +-
 7 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java
index aadb0c3593a..b440ece38cd 100644
--- a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java
+++ b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java
@@ -30,11 +30,13 @@ import org.apache.lucene.util.IOUtils;
 import org.apache.lucene.util.StringHelper;
 import org.elasticsearch.ElasticsearchException;
 import org.elasticsearch.Version;
+import org.elasticsearch.cli.ExitCodes;
 import org.elasticsearch.cli.Terminal;
 import org.elasticsearch.cli.UserException;
 import org.elasticsearch.common.PidFile;
 import org.elasticsearch.common.SuppressForbidden;
 import org.elasticsearch.common.inject.CreationException;
+import org.elasticsearch.common.logging.DeprecationLogger;
 import org.elasticsearch.common.logging.ESLoggerFactory;
 import org.elasticsearch.common.logging.LogConfigurator;
 import org.elasticsearch.common.logging.Loggers;
@@ -56,7 +58,9 @@ import java.net.URISyntaxException;
 import java.nio.file.Path;
 import java.security.NoSuchAlgorithmException;
 import java.util.List;
+import java.util.Locale;
 import java.util.Map;
+import java.util.Objects;
 import java.util.concurrent.CountDownLatch;

 /**
@@ -177,7 +181,7 @@ final class Bootstrap {
         initializeNatives(
                 environment.tmpFile(),
                 BootstrapSettings.MEMORY_LOCK_SETTING.get(settings),
-                BootstrapSettings.SECCOMP_SETTING.get(settings),
+                BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings),
                 BootstrapSettings.CTRLHANDLER_SETTING.get(settings));

         // initialize probes before the security manager is installed
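The backwards-compatibility fallback mentioned in the commit message is not
visible in this excerpt; conceptually it amounts to something like the
following sketch (the method name and wiring are assumed, not taken from this
diff):

```java
// Resolve the filter setting, honouring the deprecated bootstrap.seccomp key if present.
static boolean resolveSystemCallFilterSetting(Settings settings, DeprecationLogger deprecationLogger) {
    if (settings.get("bootstrap.seccomp") != null) {
        deprecationLogger.deprecated(
                "[bootstrap.seccomp] is deprecated, use [bootstrap.system_call_filter] instead");
        return settings.getAsBoolean("bootstrap.seccomp", true);
    }
    return BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings);
}
```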
diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java index bd62db6a9d8..930c6afdc90 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java @@ -166,7 +166,7 @@ final class BootstrapChecks { } checks.add(new ClientJvmCheck()); checks.add(new UseSerialGCCheck()); - checks.add(new SystemCallFilterCheck(BootstrapSettings.SECCOMP_SETTING.get(settings))); + checks.add(new SystemCallFilterCheck(BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(settings))); checks.add(new OnErrorCheck()); checks.add(new OnOutOfMemoryErrorCheck()); checks.add(new G1GCCheck()); @@ -521,7 +521,7 @@ final class BootstrapChecks { "OnError [%s] requires forking but is prevented by system call filters ([%s=true]);" + " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError", onError(), - BootstrapSettings.SECCOMP_SETTING.getKey()); + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey()); } } @@ -546,7 +546,7 @@ final class BootstrapChecks { "OnOutOfMemoryError [%s] requires forking but is prevented by system call filters ([%s=true]);" + " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError", onOutOfMemoryError(), - BootstrapSettings.SECCOMP_SETTING.getKey()); + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.getKey()); } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java index e8015d83af3..fce50ef2e6f 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapSettings.java @@ -33,8 +33,8 @@ public final class BootstrapSettings { public static final Setting MEMORY_LOCK_SETTING = Setting.boolSetting("bootstrap.memory_lock", false, Property.NodeScope); - public static final Setting SECCOMP_SETTING = - Setting.boolSetting("bootstrap.seccomp", true, Property.NodeScope); + public static final Setting SYSTEM_CALL_FILTER_SETTING = + Setting.boolSetting("bootstrap.system_call_filter", true, Property.NodeScope); public static final Setting CTRLHANDLER_SETTING = Setting.boolSetting("bootstrap.ctrlhandler", true, Property.NodeScope); diff --git a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java index e00b4bc44f1..c9106b8cdba 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java @@ -390,7 +390,7 @@ public final class ClusterSettings extends AbstractScopedSettings { PluginsService.MANDATORY_SETTING, BootstrapSettings.SECURITY_FILTER_BAD_DEFAULTS_SETTING, BootstrapSettings.MEMORY_LOCK_SETTING, - BootstrapSettings.SECCOMP_SETTING, + BootstrapSettings.SYSTEM_CALL_FILTER_SETTING, BootstrapSettings.CTRLHANDLER_SETTING, IndexingMemoryController.INDEX_BUFFER_SIZE_SETTING, IndexingMemoryController.MIN_INDEX_BUFFER_SIZE_SETTING, diff --git a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java index 95d5afcb402..c1f97438894 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java @@ -492,8 +492,8 @@ public class BootstrapCheckTests extends 
ESTestCase { e -> assertThat( e.getMessage(), containsString( - "OnError [" + command + "] requires forking but is prevented by system call filters ([bootstrap.seccomp=true]);" - + " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError"))); + "OnError [" + command + "] requires forking but is prevented by system call filters " + + "([bootstrap.system_call_filter=true]); upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError"))); } public void testOnOutOfMemoryErrorCheck() throws NodeValidationException { @@ -521,7 +521,7 @@ public class BootstrapCheckTests extends ESTestCase { e.getMessage(), containsString( "OnOutOfMemoryError [" + command + "]" - + " requires forking but is prevented by system call filters ([bootstrap.seccomp=true]);" + + " requires forking but is prevented by system call filters ([bootstrap.system_call_filter=true]);" + " upgrade to at least Java 8u92 and use ExitOnOutOfMemoryError"))); } diff --git a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapSettingsTests.java b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapSettingsTests.java index 128c82e9533..fb3d3628dd1 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapSettingsTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapSettingsTests.java @@ -27,7 +27,7 @@ public class BootstrapSettingsTests extends ESTestCase { public void testDefaultSettings() { assertTrue(BootstrapSettings.SECURITY_FILTER_BAD_DEFAULTS_SETTING.get(Settings.EMPTY)); assertFalse(BootstrapSettings.MEMORY_LOCK_SETTING.get(Settings.EMPTY)); - assertTrue(BootstrapSettings.SECCOMP_SETTING.get(Settings.EMPTY)); + assertTrue(BootstrapSettings.SYSTEM_CALL_FILTER_SETTING.get(Settings.EMPTY)); assertTrue(BootstrapSettings.CTRLHANDLER_SETTING.get(Settings.EMPTY)); } diff --git a/docs/reference/setup/bootstrap-checks.asciidoc b/docs/reference/setup/bootstrap-checks.asciidoc index 834cb7b2f43..f766e2dacb2 100644 --- a/docs/reference/setup/bootstrap-checks.asciidoc +++ b/docs/reference/setup/bootstrap-checks.asciidoc @@ -156,7 +156,7 @@ The system call filter check ensures that if system call filters are enabled, then they were successfully installed. To pass the system call filter check you must either fix any configuration errors on your system that prevented system call filters from installing (check your logs), or *at your own risk* disable -system call filters by setting `bootstrap.seccomp` to `false`. +system call filters by setting `bootstrap.system_call_filter` to `false`. === OnError and OnOutOfMemoryError checks From f7d43132b2d050e795508e9218e68229843ff386 Mon Sep 17 00:00:00 2001 From: Jason Tedor Date: Fri, 16 Dec 2016 18:30:19 -0500 Subject: [PATCH 06/26] Refer to system call filter instead of seccomp Today in the codebase we refer to seccomp everywhere instead of system call filter even if we are not specifically referring to Linux. This commit is a purely mechanical change to refer to system call filter where appropriate instead of the general seccomp, and only leaves seccomp in place when actually referring to the Linux implementation. 
Relates #22243 --- .../elasticsearch/bootstrap/Bootstrap.java | 8 +-- .../bootstrap/BootstrapChecks.java | 12 ++-- .../bootstrap/BootstrapInfo.java | 20 +++---- .../elasticsearch/bootstrap/JNANatives.java | 14 ++--- .../org/elasticsearch/bootstrap/Natives.java | 10 ++-- .../org/elasticsearch/bootstrap/Spawner.java | 2 +- .../{Seccomp.java => SystemCallFilter.java} | 7 +-- .../bootstrap/BootstrapCheckTests.java | 57 +++++++++---------- .../elasticsearch/bootstrap/SpawnerTests.java | 3 +- ...pTests.java => SystemCallFilterTests.java} | 20 +++---- .../bootstrap/SpawnerNoBootstrapTests.java | 5 +- 11 files changed, 78 insertions(+), 80 deletions(-) rename core/src/main/java/org/elasticsearch/bootstrap/{Seccomp.java => SystemCallFilter.java} (99%) rename qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/{SeccompTests.java => SystemCallFilterTests.java} (89%) diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java index b440ece38cd..7cac8415e6e 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java @@ -97,7 +97,7 @@ final class Bootstrap { } /** initialize native resources */ - public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean seccomp, boolean ctrlHandler) { + public static void initializeNatives(Path tmpFile, boolean mlockAll, boolean systemCallFilter, boolean ctrlHandler) { final Logger logger = Loggers.getLogger(Bootstrap.class); // check if the user is running as root, and bail @@ -105,9 +105,9 @@ final class Bootstrap { throw new RuntimeException("can not run elasticsearch as root"); } - // enable secure computing mode - if (seccomp) { - Natives.trySeccomp(tmpFile); + // enable system call filter + if (systemCallFilter) { + Natives.tryInstallSystemCallFilter(tmpFile); } // mlockall if requested diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java index 930c6afdc90..cb1b93aef39 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapChecks.java @@ -463,12 +463,12 @@ final class BootstrapChecks { @Override public boolean check() { - return areSystemCallFiltersEnabled && !isSeccompInstalled(); + return areSystemCallFiltersEnabled && !isSystemCallFilterInstalled(); } // visible for testing - boolean isSeccompInstalled() { - return Natives.isSeccompInstalled(); + boolean isSystemCallFilterInstalled() { + return Natives.isSystemCallFilterInstalled(); } @Override @@ -483,12 +483,12 @@ final class BootstrapChecks { @Override public boolean check() { - return isSeccompInstalled() && mightFork(); + return isSystemCallFilterInstalled() && mightFork(); } // visible for testing - boolean isSeccompInstalled() { - return Natives.isSeccompInstalled(); + boolean isSystemCallFilterInstalled() { + return Natives.isSystemCallFilterInstalled(); } // visible for testing diff --git a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java index 791836bf8a4..3ff6639de9a 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/BootstrapInfo.java @@ -24,16 +24,16 @@ import org.elasticsearch.common.SuppressForbidden; import java.util.Dictionary; import java.util.Enumeration; -/** - * 
Exposes system startup information +/** + * Exposes system startup information */ @SuppressForbidden(reason = "exposes read-only view of system properties") public final class BootstrapInfo { /** no instantiation */ private BootstrapInfo() {} - - /** + + /** * Returns true if we successfully loaded native libraries. *
* If this returns false, then native operations such as locking @@ -42,19 +42,19 @@ public final class BootstrapInfo { public static boolean isNativesAvailable() { return Natives.JNA_AVAILABLE; } - - /** + + /** * Returns true if we were able to lock the process's address space. */ public static boolean isMemoryLocked() { return Natives.isMemoryLocked(); } - + /** - * Returns true if secure computing mode is enabled (supported systems only) + * Returns true if system call filter is installed (supported systems only) */ - public static boolean isSeccompInstalled() { - return Natives.isSeccompInstalled(); + public static boolean isSystemCallFilterInstalled() { + return Natives.isSystemCallFilterInstalled(); } /** diff --git a/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java b/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java index 5f3e357ff5f..d4e11af71ac 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/JNANatives.java @@ -43,11 +43,11 @@ class JNANatives { // Set to true, in case native mlockall call was successful static boolean LOCAL_MLOCKALL = false; - // Set to true, in case native seccomp call was successful - static boolean LOCAL_SECCOMP = false; + // Set to true, in case native system call filter install was successful + static boolean LOCAL_SYSTEM_CALL_FILTER = false; // Set to true, in case policy can be applied to all threads of the process (even existing ones) // otherwise they are only inherited for new threads (ES app threads) - static boolean LOCAL_SECCOMP_ALL = false; + static boolean LOCAL_SYSTEM_CALL_FILTER_ALL = false; // set to the maximum number of threads that can be created for // the user ID that owns the running Elasticsearch process static long MAX_NUMBER_OF_THREADS = -1; @@ -210,12 +210,12 @@ class JNANatives { } } - static void trySeccomp(Path tmpFile) { + static void tryInstallSystemCallFilter(Path tmpFile) { try { - int ret = Seccomp.init(tmpFile); - LOCAL_SECCOMP = true; + int ret = SystemCallFilter.init(tmpFile); + LOCAL_SYSTEM_CALL_FILTER = true; if (ret == 1) { - LOCAL_SECCOMP_ALL = true; + LOCAL_SYSTEM_CALL_FILTER_ALL = true; } } catch (Exception e) { // this is likely to happen unless the kernel is newish, its a best effort at the moment diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Natives.java b/core/src/main/java/org/elasticsearch/bootstrap/Natives.java index 9fad34e329f..ad6ec985ca1 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Natives.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Natives.java @@ -91,12 +91,12 @@ final class Natives { return JNANatives.LOCAL_MLOCKALL; } - static void trySeccomp(Path tmpFile) { + static void tryInstallSystemCallFilter(Path tmpFile) { if (!JNA_AVAILABLE) { - logger.warn("cannot install syscall filters because JNA is not available"); + logger.warn("cannot install system call filter because JNA is not available"); return; } - JNANatives.trySeccomp(tmpFile); + JNANatives.tryInstallSystemCallFilter(tmpFile); } static void trySetMaxNumberOfThreads() { @@ -115,10 +115,10 @@ final class Natives { JNANatives.trySetMaxSizeVirtualMemory(); } - static boolean isSeccompInstalled() { + static boolean isSystemCallFilterInstalled() { if (!JNA_AVAILABLE) { return false; } - return JNANatives.LOCAL_SECCOMP; + return JNANatives.LOCAL_SYSTEM_CALL_FILTER; } } diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java b/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java 
index a518f32bb40..44cf2d2b0aa 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Spawner.java @@ -34,7 +34,7 @@ import java.util.List; import java.util.Locale; /** - * Spawns native plugin controller processes if present. Will only work prior to seccomp being set up. + * Spawns native plugin controller processes if present. Will only work prior to a system call filter being installed. */ final class Spawner implements Closeable { diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Seccomp.java b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java similarity index 99% rename from core/src/main/java/org/elasticsearch/bootstrap/Seccomp.java rename to core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java index a510e964b7e..38951c510db 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Seccomp.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java @@ -43,8 +43,7 @@ import java.util.List; import java.util.Map; /** - * Installs a limited form of secure computing mode, - * to filters system calls to block process execution. + * Installs a system call filter to block process execution. *

* This is supported on Linux, Solaris, FreeBSD, OpenBSD, Mac OS X, and Windows. *

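The practical effect of the installed filter is easiest to see from the caller's side. A small probe (the class and its messages are hypothetical; the `calc`/`ls` choice mirrors the `SystemCallFilterTests` shown further down):

```java
import java.io.IOException;

// Hypothetical probe: with the filter installed, fork/exec from the JVM
// fails with an IOException instead of spawning a child process.
public final class ExecProbe {
    public static void main(String[] args) {
        String executable = System.getProperty("os.name").startsWith("Windows") ? "calc" : "ls";
        try {
            Runtime.getRuntime().exec(executable);
            System.out.println("exec succeeded: no system call filter is installed");
        } catch (IOException e) {
            System.out.println("exec blocked: " + e.getMessage());
        }
    }
}
```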
@@ -91,8 +90,8 @@ import java.util.Map; * https://docs.oracle.com/cd/E23824_01/html/821-1456/prbac-2.html */ // not an example of how to write code!!! -final class Seccomp { - private static final Logger logger = Loggers.getLogger(Seccomp.class); +final class SystemCallFilter { + private static final Logger logger = Loggers.getLogger(SystemCallFilter.class); // Linux implementation, based on seccomp(2) or prctl(2) with bpf filtering diff --git a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java index c1f97438894..bc810254920 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/BootstrapCheckTests.java @@ -409,11 +409,11 @@ public class BootstrapCheckTests extends ESTestCase { } public void testSystemCallFilterCheck() throws NodeValidationException { - final AtomicBoolean isSecompInstalled = new AtomicBoolean(); + final AtomicBoolean isSystemCallFilterInstalled = new AtomicBoolean(); final BootstrapChecks.SystemCallFilterCheck systemCallFilterEnabledCheck = new BootstrapChecks.SystemCallFilterCheck(true) { @Override - boolean isSeccompInstalled() { - return isSecompInstalled.get(); + boolean isSystemCallFilterInstalled() { + return isSystemCallFilterInstalled.get(); } }; @@ -425,28 +425,28 @@ public class BootstrapCheckTests extends ESTestCase { containsString("system call filters failed to install; " + "check the logs and fix your configuration or disable system call filters at your own risk")); - isSecompInstalled.set(true); + isSystemCallFilterInstalled.set(true); BootstrapChecks.check(true, Collections.singletonList(systemCallFilterEnabledCheck), "testSystemCallFilterCheck"); final BootstrapChecks.SystemCallFilterCheck systemCallFilterNotEnabledCheck = new BootstrapChecks.SystemCallFilterCheck(false) { @Override - boolean isSeccompInstalled() { - return isSecompInstalled.get(); + boolean isSystemCallFilterInstalled() { + return isSystemCallFilterInstalled.get(); } }; - isSecompInstalled.set(false); + isSystemCallFilterInstalled.set(false); BootstrapChecks.check(true, Collections.singletonList(systemCallFilterNotEnabledCheck), "testSystemCallFilterCheck"); - isSecompInstalled.set(true); + isSystemCallFilterInstalled.set(true); BootstrapChecks.check(true, Collections.singletonList(systemCallFilterNotEnabledCheck), "testSystemCallFilterCheck"); } public void testMightForkCheck() throws NodeValidationException { - final AtomicBoolean isSeccompInstalled = new AtomicBoolean(); + final AtomicBoolean isSystemCallFilterInstalled = new AtomicBoolean(); final AtomicBoolean mightFork = new AtomicBoolean(); final BootstrapChecks.MightForkCheck check = new BootstrapChecks.MightForkCheck() { @Override - boolean isSeccompInstalled() { - return isSeccompInstalled.get(); + boolean isSystemCallFilterInstalled() { + return isSystemCallFilterInstalled.get(); } @Override @@ -462,19 +462,19 @@ public class BootstrapCheckTests extends ESTestCase { runMightForkTest( check, - isSeccompInstalled, + isSystemCallFilterInstalled, () -> mightFork.set(false), () -> mightFork.set(true), e -> assertThat(e.getMessage(), containsString("error"))); } public void testOnErrorCheck() throws NodeValidationException { - final AtomicBoolean isSeccompInstalled = new AtomicBoolean(); + final AtomicBoolean isSystemCallFilterInstalled = new AtomicBoolean(); final AtomicReference onError = new AtomicReference<>(); final BootstrapChecks.MightForkCheck check 
= new BootstrapChecks.OnErrorCheck() { @Override - boolean isSeccompInstalled() { - return isSeccompInstalled.get(); + boolean isSystemCallFilterInstalled() { + return isSystemCallFilterInstalled.get(); } @Override @@ -486,7 +486,7 @@ public class BootstrapCheckTests extends ESTestCase { final String command = randomAsciiOfLength(16); runMightForkTest( check, - isSeccompInstalled, + isSystemCallFilterInstalled, () -> onError.set(randomBoolean() ? "" : null), () -> onError.set(command), e -> assertThat( @@ -497,12 +497,12 @@ public class BootstrapCheckTests extends ESTestCase { } public void testOnOutOfMemoryErrorCheck() throws NodeValidationException { - final AtomicBoolean isSeccompInstalled = new AtomicBoolean(); + final AtomicBoolean isSystemCallFilterInstalled = new AtomicBoolean(); final AtomicReference onOutOfMemoryError = new AtomicReference<>(); final BootstrapChecks.MightForkCheck check = new BootstrapChecks.OnOutOfMemoryErrorCheck() { @Override - boolean isSeccompInstalled() { - return isSeccompInstalled.get(); + boolean isSystemCallFilterInstalled() { + return isSystemCallFilterInstalled.get(); } @Override @@ -514,7 +514,7 @@ public class BootstrapCheckTests extends ESTestCase { final String command = randomAsciiOfLength(16); runMightForkTest( check, - isSeccompInstalled, + isSystemCallFilterInstalled, () -> onOutOfMemoryError.set(randomBoolean() ? "" : null), () -> onOutOfMemoryError.set(command), e -> assertThat( @@ -527,15 +527,15 @@ public class BootstrapCheckTests extends ESTestCase { private void runMightForkTest( final BootstrapChecks.MightForkCheck check, - final AtomicBoolean isSeccompInstalled, + final AtomicBoolean isSystemCallFilterInstalled, final Runnable disableMightFork, final Runnable enableMightFork, final Consumer consumer) throws NodeValidationException { final String methodName = Thread.currentThread().getStackTrace()[2].getMethodName(); - // if seccomp is disabled, nothing should happen - isSeccompInstalled.set(false); + // if system call filter is disabled, nothing should happen + isSystemCallFilterInstalled.set(false); if (randomBoolean()) { disableMightFork.run(); } else { @@ -543,16 +543,15 @@ public class BootstrapCheckTests extends ESTestCase { } BootstrapChecks.check(true, Collections.singletonList(check), methodName); - // if seccomp is enabled, but we will not fork, nothing should + // if system call filter is enabled, but we will not fork, nothing should // happen - isSeccompInstalled.set(true); + isSystemCallFilterInstalled.set(true); disableMightFork.run(); BootstrapChecks.check(true, Collections.singletonList(check), methodName); - // if seccomp is enabled, and we might fork, the check should - // be enforced, regardless of bootstrap checks being enabled or - // not - isSeccompInstalled.set(true); + // if system call filter is enabled, and we might fork, the check should be enforced, regardless of bootstrap checks being enabled + // or not + isSystemCallFilterInstalled.set(true); enableMightFork.run(); final NodeValidationException e = expectThrows( diff --git a/core/src/test/java/org/elasticsearch/bootstrap/SpawnerTests.java b/core/src/test/java/org/elasticsearch/bootstrap/SpawnerTests.java index 680bd4f5562..58c112ba96d 100644 --- a/core/src/test/java/org/elasticsearch/bootstrap/SpawnerTests.java +++ b/core/src/test/java/org/elasticsearch/bootstrap/SpawnerTests.java @@ -25,7 +25,7 @@ import org.elasticsearch.test.ESTestCase; import java.util.Locale; /** - * Doesn't actually test spawning a process, as seccomp is installed before tests run and 
forbids it. + * Doesn't actually test spawning a process, as a system call filter is installed before tests run and forbids it. */ public class SpawnerTests extends ESTestCase { @@ -48,4 +48,5 @@ public class SpawnerTests extends ESTestCase { assertEquals("windows-x86_64", Spawner.makePlatformName("Windows 8.1", "amd64")); assertEquals("sunos-x86_64", Spawner.makePlatformName("SunOS", "amd64")); } + } diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/SeccompTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/SystemCallFilterTests.java similarity index 89% rename from qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/SeccompTests.java rename to qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/SystemCallFilterTests.java index d028dfd573a..244f98356d9 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/SeccompTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/bootstrap/SystemCallFilterTests.java @@ -22,30 +22,30 @@ package org.elasticsearch.bootstrap; import org.apache.lucene.util.Constants; import org.elasticsearch.test.ESTestCase; -/** Simple tests seccomp filter is working. */ -public class SeccompTests extends ESTestCase { - +/** Simple tests system call filter is working. */ +public class SystemCallFilterTests extends ESTestCase { + /** command to try to run in tests */ static final String EXECUTABLE = Constants.WINDOWS ? "calc" : "ls"; @Override public void setUp() throws Exception { super.setUp(); - assumeTrue("requires seccomp filter installation", Natives.isSeccompInstalled()); + assumeTrue("requires system call filter installation", Natives.isSystemCallFilterInstalled()); // otherwise security manager will block the execution, no fun assumeTrue("cannot test with security manager enabled", System.getSecurityManager() == null); // otherwise, since we don't have TSYNC support, rules are not applied to the test thread // (randomizedrunner class initialization happens in its own thread, after the test thread is created) // instead we just forcefully run it for the test thread here. - if (!JNANatives.LOCAL_SECCOMP_ALL) { + if (!JNANatives.LOCAL_SYSTEM_CALL_FILTER_ALL) { try { - Seccomp.init(createTempDir()); + SystemCallFilter.init(createTempDir()); } catch (Exception e) { - throw new RuntimeException("unable to forcefully apply seccomp to test thread", e); + throw new RuntimeException("unable to forcefully apply system call filter to test thread", e); } } } - + public void testNoExecution() throws Exception { try { Runtime.getRuntime().exec(EXECUTABLE); @@ -63,11 +63,11 @@ public class SeccompTests extends ESTestCase { at java.lang.UNIXProcess.(UNIXProcess.java:248) at java.lang.ProcessImpl.start(ProcessImpl.java:134) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) - ... + ... 
*/ } } - + // make sure thread inherits this too (its documented that way) public void testNoExecutionFromThread() throws Exception { Thread t = new Thread() { diff --git a/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java b/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java index d1556d02758..743d2408b9d 100644 --- a/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java +++ b/qa/no-bootstrap-tests/src/test/java/org/elasticsearch/bootstrap/SpawnerNoBootstrapTests.java @@ -39,9 +39,8 @@ import java.util.concurrent.TimeUnit; /** * Create a simple "daemon controller", put it in the right place and check that it runs. * - * Extends LuceneTestCase rather than ESTestCase as ESTestCase installs seccomp, and that - * prevents the Spawner class doing its job. Also needs to run in a separate JVM to other - * tests that extend ESTestCase for the same reason. + * Extends LuceneTestCase rather than ESTestCase as ESTestCase installs a system call filter, and that prevents the Spawner class doing its + * job. Also needs to run in a separate JVM to other tests that extend ESTestCase for the same reason. */ public class SpawnerNoBootstrapTests extends LuceneTestCase { From 9e5cedae2385750e3f01b85e9bc83a25a865195c Mon Sep 17 00:00:00 2001 From: Ryan Ernst Date: Fri, 16 Dec 2016 22:18:56 -0800 Subject: [PATCH 07/26] Fix line lengths in renamed seccomp file --- .../resources/checkstyle_suppressions.xml | 1 - .../bootstrap/SystemCallFilter.java | 27 ++++++++++++------- 2 files changed, 18 insertions(+), 10 deletions(-) diff --git a/buildSrc/src/main/resources/checkstyle_suppressions.xml b/buildSrc/src/main/resources/checkstyle_suppressions.xml index 4bbd52affdd..eba6dbfc819 100644 --- a/buildSrc/src/main/resources/checkstyle_suppressions.xml +++ b/buildSrc/src/main/resources/checkstyle_suppressions.xml @@ -216,7 +216,6 @@ - diff --git a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java index 38951c510db..e6c5b2e6dd1 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/SystemCallFilter.java @@ -268,7 +268,8 @@ final class SystemCallFilter { // we couldn't link methods, could be some really ancient kernel (e.g. < 2.1.57) or some bug if (linux_libc == null) { - throw new UnsupportedOperationException("seccomp unavailable: could not link methods. requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); + throw new UnsupportedOperationException("seccomp unavailable: could not link methods. 
requires kernel 3.5+ " + + "with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); } // pure paranoia: @@ -318,7 +319,8 @@ final class SystemCallFilter { switch (errno) { case ENOSYS: break; // ok case EINVAL: break; // ok - default: throw new UnsupportedOperationException("seccomp(SECCOMP_SET_MODE_FILTER, BOGUS_FLAG): " + JNACLibrary.strerror(errno)); + default: throw new UnsupportedOperationException("seccomp(SECCOMP_SET_MODE_FILTER, BOGUS_FLAG): " + + JNACLibrary.strerror(errno)); } } @@ -345,7 +347,8 @@ final class SystemCallFilter { int errno = Native.getLastError(); if (errno == EINVAL) { // friendly error, this will be the typical case for an old kernel - throw new UnsupportedOperationException("seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); + throw new UnsupportedOperationException("seccomp unavailable: requires kernel 3.5+ with" + + " CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in"); } else { throw new UnsupportedOperationException("prctl(PR_GET_NO_NEW_PRIVS): " + JNACLibrary.strerror(errno)); } @@ -357,7 +360,8 @@ final class SystemCallFilter { default: int errno = Native.getLastError(); if (errno == EINVAL) { - throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); + throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP not compiled into kernel," + + " CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); } else { throw new UnsupportedOperationException("prctl(PR_GET_SECCOMP): " + JNACLibrary.strerror(errno)); } @@ -367,7 +371,8 @@ final class SystemCallFilter { int errno = Native.getLastError(); switch (errno) { case EFAULT: break; // available - case EINVAL: throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP_FILTER not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); + case EINVAL: throw new UnsupportedOperationException("seccomp unavailable: CONFIG_SECCOMP_FILTER not" + + " compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed"); default: throw new UnsupportedOperationException("prctl(PR_SET_SECCOMP): " + JNACLibrary.strerror(errno)); } } @@ -379,10 +384,12 @@ final class SystemCallFilter { // check it worked if (linux_prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0) != 1) { - throw new UnsupportedOperationException("seccomp filter did not really succeed: prctl(PR_GET_NO_NEW_PRIVS): " + JNACLibrary.strerror(Native.getLastError())); + throw new UnsupportedOperationException("seccomp filter did not really succeed: prctl(PR_GET_NO_NEW_PRIVS): " + + JNACLibrary.strerror(Native.getLastError())); } - // BPF installed to check arch, limit, then syscall. See https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt for details. + // BPF installed to check arch, limit, then syscall. + // See https://www.kernel.org/doc/Documentation/prctl/seccomp_filter.txt for details. 
SockFilter insns[] = { /* 1 */ BPF_STMT(BPF_LD + BPF_W + BPF_ABS, SECCOMP_DATA_ARCH_OFFSET), // /* 2 */ BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, arch.audit, 0, 7), // if (arch != audit) goto fail; @@ -407,7 +414,8 @@ final class SystemCallFilter { method = 0; int errno1 = Native.getLastError(); if (logger.isDebugEnabled()) { - logger.debug("seccomp(SECCOMP_SET_MODE_FILTER): {}, falling back to prctl(PR_SET_SECCOMP)...", JNACLibrary.strerror(errno1)); + logger.debug("seccomp(SECCOMP_SET_MODE_FILTER): {}, falling back to prctl(PR_SET_SECCOMP)...", + JNACLibrary.strerror(errno1)); } if (linux_prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, pointer, 0, 0) != 0) { int errno2 = Native.getLastError(); @@ -418,7 +426,8 @@ // now check that the filter was really installed, we should be in filter mode. if (linux_prctl(PR_GET_SECCOMP, 0, 0, 0, 0) != 2) { - throw new UnsupportedOperationException("seccomp filter installation did not really succeed. seccomp(PR_GET_SECCOMP): " + JNACLibrary.strerror(Native.getLastError())); + throw new UnsupportedOperationException("seccomp filter installation did not really succeed. seccomp(PR_GET_SECCOMP): " + + JNACLibrary.strerror(Native.getLastError())); } logger.debug("Linux seccomp filter installation successful, threads: [{}]", method == 1 ? "all" : "app" );

From 0b338bf52394e49c6f4e653e8a6b7479dc71b561 Mon Sep 17 00:00:00 2001
From: Simon Willnauer
Date: Sat, 17 Dec 2016 11:45:55 +0100
Subject: [PATCH 08/26] Cleanup random stats serialization code (#22223)

Some of our stats serialization code duplicates complicated serialization
logic or could use existing building blocks from StreamOutput/Input. This
commit cleans up some of the serialization code.
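Concretely, the hand-rolled boolean/null handshakes and size-prefixed loops collapse into helpers on the stream classes. A minimal before/after sketch of the pattern (the `ExampleStats` class is hypothetical; the actual commit applies the same moves to `CommonStats`, `SearchStats`, `IndexingStats`, and friends):

```java
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.index.get.GetStats;

import java.io.IOException;
import java.util.Map;

// Hypothetical holder with one optional sub-stats object and one grouped
// map, mirroring the shapes this commit cleans up.
public class ExampleStats implements Streamable {

    @Nullable
    private GetStats get;
    private Map<String, GetStats> groups;

    @Override
    public void readFrom(StreamInput in) throws IOException {
        // before: if (in.readBoolean()) { get = GetStats.readGetStats(in); }
        get = in.readOptionalStreamable(GetStats::new);
        // before: a readVInt() size followed by a manual put() loop
        groups = in.readMap(StreamInput::readString, stream -> {
            GetStats stats = new GetStats();
            stats.readFrom(stream);
            return stats;
        });
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        // before: writeBoolean(get != null) followed by an explicit get.writeTo(out)
        out.writeOptionalStreamable(get);
        // before: writeVInt(groups.size()) plus a manual loop over the entries
        out.writeMap(groups, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream));
    }
}
```

The helpers emit the same boolean and vint prefixes the explicit code did, so the wire format should be unchanged by this style of cleanup; note that the `IngestStats` hunk goes one step further and switches the pipeline map size from `writeVLong` to `writeVInt`.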
---
 .../admin/indices/stats/CommonStats.java | 199 +++---------
 .../indices/stats/IndicesStatsResponse.java | 10 +-
 .../admin/indices/stats/ShardStats.java | 3 +-
 .../node/TransportBroadcastByNodeAction.java | 31 +--
 .../common/FieldMemoryStats.java | 132 ++++++++++++
 .../common/io/stream/StreamOutput.java | 30 ++-
 .../index/cache/query/QueryCacheStats.java | 7 -
 .../index/engine/CommitStats.java | 7 -
 .../index/engine/SegmentsStats.java | 11 +-
 .../index/fielddata/FieldDataStats.java | 93 +++-----
 .../index/fielddata/ShardFieldData.java | 4 +-
 .../elasticsearch/index/flush/FlushStats.java | 6 -
 .../org/elasticsearch/index/get/GetStats.java | 6 -
 .../elasticsearch/index/merge/MergeStats.java | 6 -
 .../index/refresh/RefreshStats.java | 6 -
 .../index/search/stats/SearchStats.java | 34 +--
 .../elasticsearch/index/shard/DocsStats.java | 6 -
 .../index/shard/IndexingStats.java | 30 +--
 .../elasticsearch/index/store/StoreStats.java | 6 -
 .../index/warmer/WarmerStats.java | 6 -
 .../org/elasticsearch/ingest/IngestStats.java | 2 +-
 .../completion/CompletionFieldStats.java | 3 +-
 .../suggest/completion/CompletionStats.java | 83 ++------
 .../common/FieldMemoryStatsTests.java | 102 +++++++++
 .../common/io/stream/BytesStreamsTests.java | 16 ++
 .../index/fielddata/FieldDataStatsTests.java | 45 ++++
 .../suggest/stats/CompletionsStatsTests.java | 45 ++++
 .../indices/stats/IndexStatsIT.java | 33 ++-
 .../suggest/CompletionSuggestSearchIT.java | 3 +-
 29 files changed, 488 insertions(+), 477 deletions(-)
 create mode 100644 core/src/main/java/org/elasticsearch/common/FieldMemoryStats.java
 create mode 100644 core/src/test/java/org/elasticsearch/common/FieldMemoryStatsTests.java
 create mode 100644 core/src/test/java/org/elasticsearch/index/fielddata/FieldDataStatsTests.java
 create mode 100644 core/src/test/java/org/elasticsearch/index/suggest/stats/CompletionsStatsTests.java

diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java index ce90858f49a..b5e91ddf2a1 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/CommonStats.java @@ -46,6 +46,9 @@ import org.elasticsearch.indices.IndicesQueryCache; import org.elasticsearch.search.suggest.completion.CompletionStats; import java.io.IOException; +import java.util.Arrays; +import java.util.Objects; +import java.util.stream.Stream; public class CommonStats implements Writeable, ToXContent { @@ -225,45 +228,19 @@ public class CommonStats implements Writeable, ToXContent { } public CommonStats(StreamInput in) throws IOException { - if (in.readBoolean()) { - docs = DocsStats.readDocStats(in); - } - if (in.readBoolean()) { - store = StoreStats.readStoreStats(in); - } - if (in.readBoolean()) { - indexing = IndexingStats.readIndexingStats(in); - } - if (in.readBoolean()) { - get = GetStats.readGetStats(in); - } - if (in.readBoolean()) { - search = SearchStats.readSearchStats(in); - } - if (in.readBoolean()) { - merge = MergeStats.readMergeStats(in); - } - if (in.readBoolean()) { - refresh = RefreshStats.readRefreshStats(in); - } - if (in.readBoolean()) { - flush = FlushStats.readFlushStats(in); - } - if (in.readBoolean()) { - warmer = WarmerStats.readWarmerStats(in); - } - if (in.readBoolean()) { - queryCache = QueryCacheStats.readQueryCacheStats(in); - } - if (in.readBoolean()) { - fieldData = FieldDataStats.readFieldDataStats(in); - } - if (in.readBoolean()) { - completion = CompletionStats.readCompletionStats(in); - } - if (in.readBoolean()) { - segments = SegmentsStats.readSegmentsStats(in); - } + docs = in.readOptionalStreamable(DocsStats::new); + store = in.readOptionalStreamable(StoreStats::new); + indexing = in.readOptionalStreamable(IndexingStats::new); + get = in.readOptionalStreamable(GetStats::new); + search = in.readOptionalStreamable(SearchStats::new); + merge = in.readOptionalStreamable(MergeStats::new); + refresh = in.readOptionalStreamable(RefreshStats::new); + flush = in.readOptionalStreamable(FlushStats::new); + warmer = in.readOptionalStreamable(WarmerStats::new); + queryCache = in.readOptionalStreamable(QueryCacheStats::new); + fieldData = in.readOptionalStreamable(FieldDataStats::new); + completion = in.readOptionalStreamable(CompletionStats::new); + segments = in.readOptionalStreamable(SegmentsStats::new); translog = in.readOptionalStreamable(TranslogStats::new); requestCache = in.readOptionalStreamable(RequestCacheStats::new); recoveryStats = in.readOptionalStreamable(RecoveryStats::new); @@ -271,84 +248,19 @@ public class CommonStats implements Writeable, ToXContent { @Override public void writeTo(StreamOutput out) throws IOException { - if (docs == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - docs.writeTo(out); - } - if (store == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - store.writeTo(out); - } - if (indexing == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - indexing.writeTo(out); - } - if (get == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - get.writeTo(out); - } - if (search == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - search.writeTo(out);
- } - if (merge == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - merge.writeTo(out); - } - if (refresh == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - refresh.writeTo(out); - } - if (flush == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - flush.writeTo(out); - } - if (warmer == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - warmer.writeTo(out); - } - if (queryCache == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - queryCache.writeTo(out); - } - if (fieldData == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - fieldData.writeTo(out); - } - if (completion == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - completion.writeTo(out); - } - if (segments == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - segments.writeTo(out); - } + out.writeOptionalStreamable(docs); + out.writeOptionalStreamable(store); + out.writeOptionalStreamable(indexing); + out.writeOptionalStreamable(get); + out.writeOptionalStreamable(search); + out.writeOptionalStreamable(merge); + out.writeOptionalStreamable(refresh); + out.writeOptionalStreamable(flush); + out.writeOptionalStreamable(warmer); + out.writeOptionalStreamable(queryCache); + out.writeOptionalStreamable(fieldData); + out.writeOptionalStreamable(completion); + out.writeOptionalStreamable(segments); out.writeOptionalStreamable(translog); out.writeOptionalStreamable(requestCache); out.writeOptionalStreamable(recoveryStats); @@ -590,53 +502,12 @@ public class CommonStats implements Writeable, ToXContent { // note, requires a wrapping object @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - if (docs != null) { - docs.toXContent(builder, params); - } - if (store != null) { - store.toXContent(builder, params); - } - if (indexing != null) { - indexing.toXContent(builder, params); - } - if (get != null) { - get.toXContent(builder, params); - } - if (search != null) { - search.toXContent(builder, params); - } - if (merge != null) { - merge.toXContent(builder, params); - } - if (refresh != null) { - refresh.toXContent(builder, params); - } - if (flush != null) { - flush.toXContent(builder, params); - } - if (warmer != null) { - warmer.toXContent(builder, params); - } - if (queryCache != null) { - queryCache.toXContent(builder, params); - } - if (fieldData != null) { - fieldData.toXContent(builder, params); - } - if (completion != null) { - completion.toXContent(builder, params); - } - if (segments != null) { - segments.toXContent(builder, params); - } - if (translog != null) { - translog.toXContent(builder, params); - } - if (requestCache != null) { - requestCache.toXContent(builder, params); - } - if (recoveryStats != null) { - recoveryStats.toXContent(builder, params); + final Stream stream = Arrays.stream(new ToXContent[] { + docs, store, indexing, get, search, merge, refresh, flush, warmer, queryCache, + fieldData, completion, segments, translog, requestCache, recoveryStats}) + .filter(Objects::nonNull); + for (ToXContent toXContent : ((Iterable)stream::iterator)) { + toXContent.toXContent(builder, params); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java index 839c27e0b8a..5b2c024c6b8 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/IndicesStatsResponse.java @@ -135,19 +135,13 @@ public class IndicesStatsResponse extends BroadcastResponse implements ToXConten @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - shards = new ShardStats[in.readVInt()]; - for (int i = 0; i < shards.length; i++) { - shards[i] = ShardStats.readShardStats(in); - } + shards = in.readArray(ShardStats::readShardStats, (size) -> new ShardStats[size]); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeVInt(shards.length); - for (ShardStats shard : shards) { - shard.writeTo(out); - } + out.writeArray(shards); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java index c503da12317..877db0579a0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.io.stream.Writeable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.index.engine.CommitStats; @@ -32,7 +33,7 @@ import org.elasticsearch.index.shard.ShardPath; import java.io.IOException; -public class ShardStats implements Streamable, ToXContent { +public class ShardStats implements Streamable, Writeable, ToXContent { private ShardRouting shardRouting; private CommonStats commonStats; @Nullable diff --git a/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java b/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java index 98c962b3eec..9f11b9b5a70 100644 --- a/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java @@ -505,11 +505,7 @@ public abstract class TransportBroadcastByNodeAction(size); - for (int i = 0; i < size; i++) { - shards.add(new ShardRouting(in)); - } + shards = in.readList(ShardRouting::new); nodeId = in.readString(); } @@ -517,11 +513,7 @@ public abstract class TransportBroadcastByNodeAction(resultsSize); - for (; resultsSize > 0; resultsSize--) { - final ShardOperationResult result = in.readBoolean() ? readShardResult(in) : null; - results.add(result); - } + results = in.readList((stream) -> stream.readBoolean() ? 
readShardResult(stream) : null); if (in.readBoolean()) { - int failureShards = in.readVInt(); - exceptions = new ArrayList<>(failureShards); - for (int i = 0; i < failureShards; i++) { - exceptions.add(new BroadcastShardOperationFailedException(in)); - } + exceptions = in.readList(BroadcastShardOperationFailedException::new); } else { exceptions = null; } @@ -594,11 +577,7 @@ public abstract class TransportBroadcastByNodeActionfield -> memory size mappings + */ +public final class FieldMemoryStats implements Writeable, Iterable>{ + + private final ObjectLongHashMap stats; + + /** + * Creates a new FieldMemoryStats instance + */ + public FieldMemoryStats(ObjectLongHashMap stats) { + this.stats = Objects.requireNonNull(stats, "status must be non-null"); + assert !stats.containsKey(null); + } + + /** + * Creates a new FieldMemoryStats instance from a stream + */ + public FieldMemoryStats(StreamInput input) throws IOException { + int size = input.readVInt(); + stats = new ObjectLongHashMap<>(size); + for (int i = 0; i < size; i++) { + stats.put(input.readString(), input.readVLong()); + } + } + + /** + * Adds / merges the given field memory stats into this stats instance + */ + public void add(FieldMemoryStats fieldMemoryStats) { + for (ObjectLongCursor entry : fieldMemoryStats.stats) { + stats.addTo(entry.key, entry.value); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(stats.size()); + for (ObjectLongCursor entry : stats) { + out.writeString(entry.key); + out.writeVLong(entry.value); + } + } + + /** + * Generates x-content into the given builder for each of the fields in this stats instance + * @param builder the builder to generated on + * @param key the top level key for this stats object + * @param rawKey the raw byte key for each of the fields byte sizes + * @param readableKey the readable key for each of the fields byte sizes + */ + public void toXContent(XContentBuilder builder, String key, String rawKey, String readableKey) throws IOException { + builder.startObject(key); + for (ObjectLongCursor entry : stats) { + builder.startObject(entry.key); + builder.byteSizeField(rawKey, readableKey, entry.value); + builder.endObject(); + } + builder.endObject(); + } + + /** + * Creates a deep copy of this stats instance + */ + public FieldMemoryStats copy() { + return new FieldMemoryStats(stats.clone()); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + FieldMemoryStats that = (FieldMemoryStats) o; + return Objects.equals(stats, that.stats); + } + + @Override + public int hashCode() { + return Objects.hash(stats); + } + + @Override + public Iterator> iterator() { + return stats.iterator(); + } + + /** + * Returns the fields value in bytes or 0 if it's not present in the stats + */ + public long get(String field) { + return stats.get(field); + } + + /** + * Returns true iff the given field is in the stats + */ + public boolean containsField(String field) { + return stats.containsKey(field); + } +} diff --git a/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java b/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java index 1fdd3e72c83..4fc253cf45d 100644 --- a/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java +++ b/core/src/main/java/org/elasticsearch/common/io/stream/StreamOutput.java @@ -467,16 +467,32 @@ public abstract class StreamOutput extends OutputStream { * @param keyWriter The key 
writer * @param valueWriter The value writer */ - public void writeMapOfLists(final Map> map, final Writer keyWriter, final Writer valueWriter) + public final void writeMapOfLists(final Map> map, final Writer keyWriter, final Writer valueWriter) throws IOException { - writeVInt(map.size()); - - for (final Map.Entry> entry : map.entrySet()) { - keyWriter.write(this, entry.getKey()); - writeVInt(entry.getValue().size()); - for (final V value : entry.getValue()) { + writeMap(map, keyWriter, (stream, list) -> { + writeVInt(list.size()); + for (final V value : list) { valueWriter.write(this, value); } + }); + } + + /** + * Write a {@link Map} of {@code K}-type keys to {@code V}-type. + *


+     * <pre><code>
+     * Map<String, String> map = ...;
+     * out.writeMap(map, StreamOutput::writeString, StreamOutput::writeString);
+     * </code></pre>
+ * + * @param keyWriter The key writer + * @param valueWriter The value writer + */ + public final void writeMap(final Map map, final Writer keyWriter, final Writer valueWriter) + throws IOException { + writeVInt(map.size()); + for (final Map.Entry entry : map.entrySet()) { + keyWriter.write(this, entry.getKey()); + valueWriter.write(this, entry.getValue()); } } diff --git a/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java b/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java index 33b61a35138..1eff321b47f 100644 --- a/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java +++ b/core/src/main/java/org/elasticsearch/index/cache/query/QueryCacheStats.java @@ -106,13 +106,6 @@ public class QueryCacheStats implements Streamable, ToXContent { return cacheCount - cacheSize; } - public static QueryCacheStats readQueryCacheStats(StreamInput in) throws IOException { - QueryCacheStats stats = new QueryCacheStats(); - stats.readFrom(in); - return stats; - } - - @Override public void readFrom(StreamInput in) throws IOException { ramBytesUsed = in.readLong(); diff --git a/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java b/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java index 48fb8a80eeb..eb2e35a5a23 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java +++ b/core/src/main/java/org/elasticsearch/index/engine/CommitStats.java @@ -49,13 +49,6 @@ public final class CommitStats implements Streamable, ToXContent { } private CommitStats() { - - } - - public static CommitStats readCommitStatsFrom(StreamInput in) throws IOException { - CommitStats commitStats = new CommitStats(); - commitStats.readFrom(in); - return commitStats; } public static CommitStats readOptionalCommitStatsFrom(StreamInput in) throws IOException { diff --git a/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java b/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java index 637beebfec8..ed8e150cd6c 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java +++ b/core/src/main/java/org/elasticsearch/index/engine/SegmentsStats.java @@ -286,12 +286,6 @@ public class SegmentsStats implements Streamable, ToXContent { return maxUnsafeAutoIdTimestamp; } - public static SegmentsStats readSegmentsStats(StreamInput in) throws IOException { - SegmentsStats stats = new SegmentsStats(); - stats.readFrom(in); - return stats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.SEGMENTS); @@ -391,10 +385,9 @@ public class SegmentsStats implements Streamable, ToXContent { out.writeLong(maxUnsafeAutoIdTimestamp); out.writeVInt(fileSizes.size()); - for (Iterator> it = fileSizes.iterator(); it.hasNext();) { - ObjectObjectCursor entry = it.next(); + for (ObjectObjectCursor entry : fileSizes) { out.writeString(entry.key); - out.writeLong(entry.value); + out.writeLong(entry.value.longValue()); } } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java b/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java index 56fe03d4395..6cd2eda5530 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/FieldDataStats.java @@ -19,7 +19,7 @@ package org.elasticsearch.index.fielddata; -import com.carrotsearch.hppc.ObjectLongHashMap; +import 
org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -29,19 +29,25 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; +import java.util.Objects; public class FieldDataStats implements Streamable, ToXContent { + private static final String FIELDDATA = "fielddata"; + private static final String MEMORY_SIZE = "memory_size"; + private static final String MEMORY_SIZE_IN_BYTES = "memory_size_in_bytes"; + private static final String EVICTIONS = "evictions"; + private static final String FIELDS = "fields"; long memorySize; long evictions; @Nullable - ObjectLongHashMap fields; + FieldMemoryStats fields; public FieldDataStats() { } - public FieldDataStats(long memorySize, long evictions, @Nullable ObjectLongHashMap fields) { + public FieldDataStats(long memorySize, long evictions, @Nullable FieldMemoryStats fields) { this.memorySize = memorySize; this.evictions = evictions; this.fields = fields; @@ -52,16 +58,9 @@ public class FieldDataStats implements Streamable, ToXContent { this.evictions += stats.evictions; if (stats.fields != null) { if (fields == null) { - fields = stats.fields.clone(); + fields = stats.fields.copy(); } else { - assert !stats.fields.containsKey(null); - final Object[] keys = stats.fields.keys; - final long[] values = stats.fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - fields.addTo((String) keys[i], values[i]); - } - } + fields.add(stats.fields); } } } @@ -79,78 +78,48 @@ public class FieldDataStats implements Streamable, ToXContent { } @Nullable - public ObjectLongHashMap getFields() { + public FieldMemoryStats getFields() { return fields; } - public static FieldDataStats readFieldDataStats(StreamInput in) throws IOException { - FieldDataStats stats = new FieldDataStats(); - stats.readFrom(in); - return stats; - } - @Override public void readFrom(StreamInput in) throws IOException { memorySize = in.readVLong(); evictions = in.readVLong(); - if (in.readBoolean()) { - int size = in.readVInt(); - fields = new ObjectLongHashMap<>(size); - for (int i = 0; i < size; i++) { - fields.put(in.readString(), in.readVLong()); - } - } + fields = in.readOptionalWriteable(FieldMemoryStats::new); } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(memorySize); out.writeVLong(evictions); - if (fields == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - out.writeVInt(fields.size()); - assert !fields.containsKey(null); - final Object[] keys = fields.keys; - final long[] values = fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - out.writeString((String) keys[i]); - out.writeVLong(values[i]); - } - } - } + out.writeOptionalWriteable(fields); } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(Fields.FIELDDATA); - builder.byteSizeField(Fields.MEMORY_SIZE_IN_BYTES, Fields.MEMORY_SIZE, memorySize); - builder.field(Fields.EVICTIONS, getEvictions()); + builder.startObject(FIELDDATA); + builder.byteSizeField(MEMORY_SIZE_IN_BYTES, MEMORY_SIZE, memorySize); + builder.field(EVICTIONS, getEvictions()); if (fields != null) { - builder.startObject(Fields.FIELDS); - assert !fields.containsKey(null); - final Object[] keys = fields.keys; - final long[] values = fields.values; - 
for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - builder.startObject((String) keys[i]); - builder.byteSizeField(Fields.MEMORY_SIZE_IN_BYTES, Fields.MEMORY_SIZE, values[i]); - builder.endObject(); - } - } - builder.endObject(); + fields.toXContent(builder, FIELDS, MEMORY_SIZE_IN_BYTES, MEMORY_SIZE); } builder.endObject(); return builder; } - static final class Fields { - static final String FIELDDATA = "fielddata"; - static final String MEMORY_SIZE = "memory_size"; - static final String MEMORY_SIZE_IN_BYTES = "memory_size_in_bytes"; - static final String EVICTIONS = "evictions"; - static final String FIELDS = "fields"; + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + FieldDataStats that = (FieldDataStats) o; + return memorySize == that.memorySize && + evictions == that.evictions && + Objects.equals(fields, that.fields); + } + + @Override + public int hashCode() { + return Objects.hash(memorySize, evictions, fields); } } diff --git a/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java b/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java index d8eaaaf448e..6dd9552b690 100644 --- a/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java +++ b/core/src/main/java/org/elasticsearch/index/fielddata/ShardFieldData.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.fielddata; import com.carrotsearch.hppc.ObjectLongHashMap; import org.apache.lucene.util.Accountable; +import org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.metrics.CounterMetric; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; @@ -45,7 +46,8 @@ public class ShardFieldData implements IndexFieldDataCache.Listener { } } } - return new FieldDataStats(totalMetric.count(), evictionsMetric.count(), fieldTotals); + return new FieldDataStats(totalMetric.count(), evictionsMetric.count(), fieldTotals == null ? 
null : + new FieldMemoryStats(fieldTotals)); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java b/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java index 600651ad306..ac9a4a5c9a1 100644 --- a/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java +++ b/core/src/main/java/org/elasticsearch/index/flush/FlushStats.java @@ -81,12 +81,6 @@ public class FlushStats implements Streamable, ToXContent { return new TimeValue(totalTimeInMillis); } - public static FlushStats readFlushStats(StreamInput in) throws IOException { - FlushStats flushStats = new FlushStats(); - flushStats.readFrom(in); - return flushStats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.FLUSH); diff --git a/core/src/main/java/org/elasticsearch/index/get/GetStats.java b/core/src/main/java/org/elasticsearch/index/get/GetStats.java index ed7057d33f0..5a386b85330 100644 --- a/core/src/main/java/org/elasticsearch/index/get/GetStats.java +++ b/core/src/main/java/org/elasticsearch/index/get/GetStats.java @@ -134,12 +134,6 @@ public class GetStats implements Streamable, ToXContent { static final String CURRENT = "current"; } - public static GetStats readGetStats(StreamInput in) throws IOException { - GetStats stats = new GetStats(); - stats.readFrom(in); - return stats; - } - @Override public void readFrom(StreamInput in) throws IOException { existsCount = in.readVLong(); diff --git a/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java b/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java index 845b035623d..b129d8e8db9 100644 --- a/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java +++ b/core/src/main/java/org/elasticsearch/index/merge/MergeStats.java @@ -182,12 +182,6 @@ public class MergeStats implements Streamable, ToXContent { return new ByteSizeValue(currentSizeInBytes); } - public static MergeStats readMergeStats(StreamInput in) throws IOException { - MergeStats stats = new MergeStats(); - stats.readFrom(in); - return stats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.MERGES); diff --git a/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java b/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java index 9b9b4673acc..3a3edd10dcc 100644 --- a/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java +++ b/core/src/main/java/org/elasticsearch/index/refresh/RefreshStats.java @@ -81,12 +81,6 @@ public class RefreshStats implements Streamable, ToXContent { return new TimeValue(totalTimeInMillis); } - public static RefreshStats readRefreshStats(StreamInput in) throws IOException { - RefreshStats refreshStats = new RefreshStats(); - refreshStats.readFrom(in); - return refreshStats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.REFRESH); diff --git a/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java b/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java index 3959a697fd0..824ca598ae2 100644 --- a/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java +++ b/core/src/main/java/org/elasticsearch/index/search/stats/SearchStats.java @@ -19,6 +19,7 @@ package org.elasticsearch.index.search.stats; +import org.elasticsearch.action.support.ToXContentToBytes; import 
org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -32,7 +33,7 @@ import java.io.IOException; import java.util.HashMap; import java.util.Map; -public class SearchStats implements Streamable, ToXContent { +public class SearchStats extends ToXContentToBytes implements Streamable { public static class Stats implements Streamable, ToXContent { @@ -338,22 +339,12 @@ public class SearchStats implements Streamable, ToXContent { static final String SUGGEST_CURRENT = "suggest_current"; } - public static SearchStats readSearchStats(StreamInput in) throws IOException { - SearchStats searchStats = new SearchStats(); - searchStats.readFrom(in); - return searchStats; - } - @Override public void readFrom(StreamInput in) throws IOException { totalStats = Stats.readStats(in); openContexts = in.readVLong(); if (in.readBoolean()) { - int size = in.readVInt(); - groupStats = new HashMap<>(size); - for (int i = 0; i < size; i++) { - groupStats.put(in.readString(), Stats.readStats(in)); - } + groupStats = in.readMap(StreamInput::readString, Stats::readStats); } } @@ -365,24 +356,7 @@ public class SearchStats implements Streamable, ToXContent { out.writeBoolean(false); } else { out.writeBoolean(true); - out.writeVInt(groupStats.size()); - for (Map.Entry entry : groupStats.entrySet()) { - out.writeString(entry.getKey()); - entry.getValue().writeTo(out); - } - } - } - - @Override - public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; + out.writeMap(groupStats, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream)); } } } diff --git a/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java b/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java index f8132d557bb..5ee5ac66083 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/DocsStats.java @@ -57,12 +57,6 @@ public class DocsStats implements Streamable, ToXContent { return this.deleted; } - public static DocsStats readDocStats(StreamInput in) throws IOException { - DocsStats docsStats = new DocsStats(); - docsStats.readFrom(in); - return docsStats; - } - @Override public void readFrom(StreamInput in) throws IOException { count = in.readVLong(); diff --git a/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java b/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java index ba7eafc1a67..c94062e0657 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java +++ b/core/src/main/java/org/elasticsearch/index/shard/IndexingStats.java @@ -143,11 +143,7 @@ public class IndexingStats implements Streamable, ToXContent { indexCount = in.readVLong(); indexTimeInMillis = in.readVLong(); indexCurrent = in.readVLong(); - - if(in.getVersion().onOrAfter(Version.V_2_1_0)){ - indexFailedCount = in.readVLong(); - } - + indexFailedCount = in.readVLong(); deleteCount = in.readVLong(); deleteTimeInMillis = in.readVLong(); deleteCurrent = in.readVLong(); @@ -161,11 +157,7 @@ public class IndexingStats implements Streamable, ToXContent { out.writeVLong(indexCount); out.writeVLong(indexTimeInMillis); out.writeVLong(indexCurrent); - - if(out.getVersion().onOrAfter(Version.V_2_1_0)) { - 
out.writeVLong(indexFailedCount); - } - + out.writeVLong(indexFailedCount); out.writeVLong(deleteCount); out.writeVLong(deleteTimeInMillis); out.writeVLong(deleteCurrent); @@ -283,21 +275,11 @@ public class IndexingStats implements Streamable, ToXContent { static final String THROTTLED_TIME = "throttle_time"; } - public static IndexingStats readIndexingStats(StreamInput in) throws IOException { - IndexingStats indexingStats = new IndexingStats(); - indexingStats.readFrom(in); - return indexingStats; - } - @Override public void readFrom(StreamInput in) throws IOException { totalStats = Stats.readStats(in); if (in.readBoolean()) { - int size = in.readVInt(); - typeStats = new HashMap<>(size); - for (int i = 0; i < size; i++) { - typeStats.put(in.readString(), Stats.readStats(in)); - } + typeStats = in.readMap(StreamInput::readString, Stats::readStats); } } @@ -308,11 +290,7 @@ public class IndexingStats implements Streamable, ToXContent { out.writeBoolean(false); } else { out.writeBoolean(true); - out.writeVInt(typeStats.size()); - for (Map.Entry entry : typeStats.entrySet()) { - out.writeString(entry.getKey()); - entry.getValue().writeTo(out); - } + out.writeMap(typeStats, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream)); } } } diff --git a/core/src/main/java/org/elasticsearch/index/store/StoreStats.java b/core/src/main/java/org/elasticsearch/index/store/StoreStats.java index d5e50513f3a..422508d8237 100644 --- a/core/src/main/java/org/elasticsearch/index/store/StoreStats.java +++ b/core/src/main/java/org/elasticsearch/index/store/StoreStats.java @@ -65,12 +65,6 @@ public class StoreStats implements Streamable, ToXContent { return size(); } - public static StoreStats readStoreStats(StreamInput in) throws IOException { - StoreStats store = new StoreStats(); - store.readFrom(in); - return store; - } - @Override public void readFrom(StreamInput in) throws IOException { sizeInBytes = in.readVLong(); diff --git a/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java b/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java index 233dbf4f5fe..21dec0f62a0 100644 --- a/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java +++ b/core/src/main/java/org/elasticsearch/index/warmer/WarmerStats.java @@ -86,12 +86,6 @@ public class WarmerStats implements Streamable, ToXContent { return new TimeValue(totalTimeInMillis); } - public static WarmerStats readWarmerStats(StreamInput in) throws IOException { - WarmerStats refreshStats = new WarmerStats(); - refreshStats.readFrom(in); - return refreshStats; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(Fields.WARMER); diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestStats.java b/core/src/main/java/org/elasticsearch/ingest/IngestStats.java index dee806e0230..add02a5da90 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestStats.java +++ b/core/src/main/java/org/elasticsearch/ingest/IngestStats.java @@ -54,7 +54,7 @@ public class IngestStats implements Writeable, ToXContent { @Override public void writeTo(StreamOutput out) throws IOException { totalStats.writeTo(out); - out.writeVLong(statsPerPipeline.size()); + out.writeVInt(statsPerPipeline.size()); for (Map.Entry entry : statsPerPipeline.entrySet()) { out.writeString(entry.getKey()); entry.getValue().writeTo(out); diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionFieldStats.java 
b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionFieldStats.java index e5e1b1b9199..8b5761a7e9a 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionFieldStats.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionFieldStats.java @@ -27,6 +27,7 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.Terms; import org.apache.lucene.search.suggest.document.CompletionTerms; import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.regex.Regex; import java.io.IOException; @@ -64,6 +65,6 @@ public class CompletionFieldStats { throw new ElasticsearchException(ioe); } } - return new CompletionStats(sizeInBytes, completionFields); + return new CompletionStats(sizeInBytes, completionFields == null ? null : new FieldMemoryStats(completionFields)); } } diff --git a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionStats.java b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionStats.java index efea5915766..c123d46fe45 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionStats.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/completion/CompletionStats.java @@ -18,7 +18,7 @@ */ package org.elasticsearch.search.suggest.completion; -import com.carrotsearch.hppc.ObjectLongHashMap; +import org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -31,15 +31,19 @@ import java.io.IOException; public class CompletionStats implements Streamable, ToXContent { - private long sizeInBytes; + private static final String COMPLETION = "completion"; + private static final String SIZE_IN_BYTES = "size_in_bytes"; + private static final String SIZE = "size"; + private static final String FIELDS = "fields"; + private long sizeInBytes; @Nullable - private ObjectLongHashMap fields; + private FieldMemoryStats fields; public CompletionStats() { } - public CompletionStats(long size, @Nullable ObjectLongHashMap fields) { + public CompletionStats(long size, @Nullable FieldMemoryStats fields) { this.sizeInBytes = size; this.fields = fields; } @@ -52,98 +56,43 @@ public class CompletionStats implements Streamable, ToXContent { return new ByteSizeValue(sizeInBytes); } - public ObjectLongHashMap getFields() { + public FieldMemoryStats getFields() { return fields; } @Override public void readFrom(StreamInput in) throws IOException { sizeInBytes = in.readVLong(); - if (in.readBoolean()) { - int size = in.readVInt(); - fields = new ObjectLongHashMap<>(size); - for (int i = 0; i < size; i++) { - fields.put(in.readString(), in.readVLong()); - } - } + fields = in.readOptionalWriteable(FieldMemoryStats::new); } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVLong(sizeInBytes); - if (fields == null) { - out.writeBoolean(false); - } else { - out.writeBoolean(true); - out.writeVInt(fields.size()); - - assert !fields.containsKey(null); - final Object[] keys = fields.keys; - final long[] values = fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - out.writeString((String) keys[i]); - out.writeVLong(values[i]); - } - } - } + out.writeOptionalWriteable(fields); } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - 
builder.startObject(Fields.COMPLETION); - builder.byteSizeField(Fields.SIZE_IN_BYTES, Fields.SIZE, sizeInBytes); + builder.startObject(COMPLETION); + builder.byteSizeField(SIZE_IN_BYTES, SIZE, sizeInBytes); if (fields != null) { - builder.startObject(Fields.FIELDS); - - assert !fields.containsKey(null); - final Object[] keys = fields.keys; - final long[] values = fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - builder.startObject((String) keys[i]); - builder.byteSizeField(Fields.SIZE_IN_BYTES, Fields.SIZE, values[i]); - builder.endObject(); - } - } - builder.endObject(); + fields.toXContent(builder, FIELDS, SIZE_IN_BYTES, SIZE); } builder.endObject(); return builder; } - public static CompletionStats readCompletionStats(StreamInput in) throws IOException { - CompletionStats stats = new CompletionStats(); - stats.readFrom(in); - return stats; - } - - static final class Fields { - static final String COMPLETION = "completion"; - static final String SIZE_IN_BYTES = "size_in_bytes"; - static final String SIZE = "size"; - static final String FIELDS = "fields"; - } - public void add(CompletionStats completion) { if (completion == null) { return; } - sizeInBytes += completion.getSizeInBytes(); - if (completion.fields != null) { if (fields == null) { - fields = completion.fields.clone(); + fields = completion.fields.copy(); } else { - assert !completion.fields.containsKey(null); - final Object[] keys = completion.fields.keys; - final long[] values = completion.fields.values; - for (int i = 0; i < keys.length; i++) { - if (keys[i] != null) { - fields.addTo((String) keys[i], values[i]); - } - } + fields.add(completion.fields); } } } diff --git a/core/src/test/java/org/elasticsearch/common/FieldMemoryStatsTests.java b/core/src/test/java/org/elasticsearch/common/FieldMemoryStatsTests.java new file mode 100644 index 00000000000..74427281894 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/common/FieldMemoryStatsTests.java @@ -0,0 +1,102 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.common; + +import com.carrotsearch.hppc.ObjectLongHashMap; +import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; + +public class FieldMemoryStatsTests extends ESTestCase { + + public void testSerialize() throws IOException { + FieldMemoryStats stats = randomFieldMemoryStats(); + BytesStreamOutput out = new BytesStreamOutput(); + stats.writeTo(out); + StreamInput input = out.bytes().streamInput(); + FieldMemoryStats read = new FieldMemoryStats(input); + assertEquals(-1, input.read()); + assertEquals(stats, read); + } + + public void testHashCodeEquals() { + FieldMemoryStats stats = randomFieldMemoryStats(); + assertEquals(stats, stats); + assertEquals(stats.hashCode(), stats.hashCode()); + ObjectLongHashMap map1 = new ObjectLongHashMap<>(); + map1.put("bar", 1); + FieldMemoryStats stats1 = new FieldMemoryStats(map1); + ObjectLongHashMap map2 = new ObjectLongHashMap<>(); + map2.put("foo", 2); + FieldMemoryStats stats2 = new FieldMemoryStats(map2); + + ObjectLongHashMap map3 = new ObjectLongHashMap<>(); + map3.put("foo", 2); + map3.put("bar", 1); + FieldMemoryStats stats3 = new FieldMemoryStats(map3); + + ObjectLongHashMap map4 = new ObjectLongHashMap<>(); + map4.put("foo", 2); + map4.put("bar", 1); + FieldMemoryStats stats4 = new FieldMemoryStats(map4); + + assertNotEquals(stats1, stats2); + assertNotEquals(stats1, stats3); + assertNotEquals(stats2, stats3); + assertEquals(stats4, stats3); + + stats1.add(stats2); + assertEquals(stats1, stats3); + assertEquals(stats1, stats4); + assertEquals(stats1.hashCode(), stats3.hashCode()); + } + + public void testAdd() { + ObjectLongHashMap map1 = new ObjectLongHashMap<>(); + map1.put("bar", 1); + FieldMemoryStats stats1 = new FieldMemoryStats(map1); + ObjectLongHashMap map2 = new ObjectLongHashMap<>(); + map2.put("foo", 2); + FieldMemoryStats stats2 = new FieldMemoryStats(map2); + + ObjectLongHashMap map3 = new ObjectLongHashMap<>(); + map3.put("bar", 1); + FieldMemoryStats stats3 = new FieldMemoryStats(map3); + stats3.add(stats1); + + ObjectLongHashMap map4 = new ObjectLongHashMap<>(); + map4.put("foo", 2); + map4.put("bar", 2); + FieldMemoryStats stats4 = new FieldMemoryStats(map4); + assertNotEquals(stats3, stats4); + stats3.add(stats2); + assertEquals(stats3, stats4); + } + + public static FieldMemoryStats randomFieldMemoryStats() { + ObjectLongHashMap map = new ObjectLongHashMap<>(); + int keys = randomIntBetween(1, 1000); + for (int i = 0; i < keys; i++) { + map.put(randomRealisticUnicodeOfCodepointLengthBetween(1, 10), randomPositiveLong()); + } + return new FieldMemoryStats(map); + } +} diff --git a/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java b/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java index 866a02476e7..d1340af0b22 100644 --- a/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java +++ b/core/src/test/java/org/elasticsearch/common/io/stream/BytesStreamsTests.java @@ -456,6 +456,22 @@ public class BytesStreamsTests extends ESTestCase { out.close(); } + public void testWriteMap() throws IOException { + final int size = randomIntBetween(0, 100); + final Map expected = new HashMap<>(randomIntBetween(0, 100)); + for (int i = 0; i < size; ++i) { + expected.put(randomAsciiOfLength(2), randomAsciiOfLength(5)); + } + + final BytesStreamOutput out = new BytesStreamOutput(); + 
out.writeMap(expected, StreamOutput::writeString, StreamOutput::writeString); + final StreamInput in = StreamInput.wrap(BytesReference.toBytes(out.bytes())); + final Map loaded = in.readMap(StreamInput::readString, StreamInput::readString); + + assertThat(loaded.size(), equalTo(expected.size())); + assertThat(expected, equalTo(loaded)); + } + public void testWriteMapOfLists() throws IOException { final int size = randomIntBetween(0, 5); final Map> expected = new HashMap<>(size); diff --git a/core/src/test/java/org/elasticsearch/index/fielddata/FieldDataStatsTests.java b/core/src/test/java/org/elasticsearch/index/fielddata/FieldDataStatsTests.java new file mode 100644 index 00000000000..54881bad884 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/fielddata/FieldDataStatsTests.java @@ -0,0 +1,45 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.fielddata; + +import org.elasticsearch.common.FieldMemoryStats; +import org.elasticsearch.common.FieldMemoryStatsTests; +import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; + +public class FieldDataStatsTests extends ESTestCase { + + public void testSerialize() throws IOException { + FieldMemoryStats map = randomBoolean() ? null : FieldMemoryStatsTests.randomFieldMemoryStats(); + FieldDataStats stats = new FieldDataStats(randomPositiveLong(), randomPositiveLong(), map == null ? null : + map); + BytesStreamOutput out = new BytesStreamOutput(); + stats.writeTo(out); + FieldDataStats read = new FieldDataStats(); + StreamInput input = out.bytes().streamInput(); + read.readFrom(input); + assertEquals(-1, input.read()); + assertEquals(stats.evictions, read.evictions); + assertEquals(stats.memorySize, read.memorySize); + assertEquals(stats.getFields(), read.getFields()); + } +} diff --git a/core/src/test/java/org/elasticsearch/index/suggest/stats/CompletionsStatsTests.java b/core/src/test/java/org/elasticsearch/index/suggest/stats/CompletionsStatsTests.java new file mode 100644 index 00000000000..71ffffdcbff --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/suggest/stats/CompletionsStatsTests.java @@ -0,0 +1,45 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.index.suggest.stats; + +import org.elasticsearch.common.FieldMemoryStats; +import org.elasticsearch.common.FieldMemoryStatsTests; +import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.search.suggest.completion.CompletionStats; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; + +public class CompletionsStatsTests extends ESTestCase { + + public void testSerialize() throws IOException { + FieldMemoryStats map = randomBoolean() ? null : FieldMemoryStatsTests.randomFieldMemoryStats(); + CompletionStats stats = new CompletionStats(randomPositiveLong(), map == null ? null : + map); + BytesStreamOutput out = new BytesStreamOutput(); + stats.writeTo(out); + CompletionStats read = new CompletionStats(); + StreamInput input = out.bytes().streamInput(); + read.readFrom(input); + assertEquals(-1, input.read()); + assertEquals(stats.getSizeInBytes(), read.getSizeInBytes()); + assertEquals(stats.getFields(), read.getFields()); + } +} diff --git a/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java b/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java index af0b43358c8..3d9e66755eb 100644 --- a/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java +++ b/core/src/test/java/org/elasticsearch/indices/stats/IndexStatsIT.java @@ -44,7 +44,6 @@ import org.elasticsearch.index.VersionType; import org.elasticsearch.index.cache.query.QueryCacheStats; import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.query.QueryBuilders; -import org.elasticsearch.index.store.IndexStore; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.indices.IndicesQueryCache; import org.elasticsearch.indices.IndicesRequestCache; @@ -737,29 +736,29 @@ public class IndexStatsIT extends ESIntegTestCase { stats = builder.setFieldDataFields("bar").execute().actionGet(); assertThat(stats.getTotal().fieldData.getMemorySizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("bar"), is(true)); + assertThat(stats.getTotal().fieldData.getFields().containsField("bar"), is(true)); assertThat(stats.getTotal().fieldData.getFields().get("bar"), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("baz"), is(false)); + assertThat(stats.getTotal().fieldData.getFields().containsField("baz"), is(false)); stats = builder.setFieldDataFields("bar", "baz").execute().actionGet(); assertThat(stats.getTotal().fieldData.getMemorySizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("bar"), is(true)); + assertThat(stats.getTotal().fieldData.getFields().containsField("bar"), is(true)); assertThat(stats.getTotal().fieldData.getFields().get("bar"), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("baz"), is(true)); + assertThat(stats.getTotal().fieldData.getFields().containsField("baz"), is(true)); 
assertThat(stats.getTotal().fieldData.getFields().get("baz"), greaterThan(0L)); stats = builder.setFieldDataFields("*").execute().actionGet(); assertThat(stats.getTotal().fieldData.getMemorySizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("bar"), is(true)); + assertThat(stats.getTotal().fieldData.getFields().containsField("bar"), is(true)); assertThat(stats.getTotal().fieldData.getFields().get("bar"), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("baz"), is(true)); + assertThat(stats.getTotal().fieldData.getFields().containsField("baz"), is(true)); assertThat(stats.getTotal().fieldData.getFields().get("baz"), greaterThan(0L)); stats = builder.setFieldDataFields("*r").execute().actionGet(); assertThat(stats.getTotal().fieldData.getMemorySizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("bar"), is(true)); + assertThat(stats.getTotal().fieldData.getFields().containsField("bar"), is(true)); assertThat(stats.getTotal().fieldData.getFields().get("bar"), greaterThan(0L)); - assertThat(stats.getTotal().fieldData.getFields().containsKey("baz"), is(false)); + assertThat(stats.getTotal().fieldData.getFields().containsField("baz"), is(false)); } @@ -782,29 +781,29 @@ public class IndexStatsIT extends ESIntegTestCase { stats = builder.setCompletionFields("bar.completion").execute().actionGet(); assertThat(stats.getTotal().completion.getSizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("bar.completion"), is(true)); + assertThat(stats.getTotal().completion.getFields().containsField("bar.completion"), is(true)); assertThat(stats.getTotal().completion.getFields().get("bar.completion"), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("baz.completion"), is(false)); + assertThat(stats.getTotal().completion.getFields().containsField("baz.completion"), is(false)); stats = builder.setCompletionFields("bar.completion", "baz.completion").execute().actionGet(); assertThat(stats.getTotal().completion.getSizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("bar.completion"), is(true)); + assertThat(stats.getTotal().completion.getFields().containsField("bar.completion"), is(true)); assertThat(stats.getTotal().completion.getFields().get("bar.completion"), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("baz.completion"), is(true)); + assertThat(stats.getTotal().completion.getFields().containsField("baz.completion"), is(true)); assertThat(stats.getTotal().completion.getFields().get("baz.completion"), greaterThan(0L)); stats = builder.setCompletionFields("*").execute().actionGet(); assertThat(stats.getTotal().completion.getSizeInBytes(), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("bar.completion"), is(true)); + assertThat(stats.getTotal().completion.getFields().containsField("bar.completion"), is(true)); assertThat(stats.getTotal().completion.getFields().get("bar.completion"), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("baz.completion"), is(true)); + assertThat(stats.getTotal().completion.getFields().containsField("baz.completion"), is(true)); assertThat(stats.getTotal().completion.getFields().get("baz.completion"), greaterThan(0L)); stats = builder.setCompletionFields("*r*").execute().actionGet(); assertThat(stats.getTotal().completion.getSizeInBytes(), 
greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("bar.completion"), is(true)); + assertThat(stats.getTotal().completion.getFields().containsField("bar.completion"), is(true)); assertThat(stats.getTotal().completion.getFields().get("bar.completion"), greaterThan(0L)); - assertThat(stats.getTotal().completion.getFields().containsKey("baz.completion"), is(false)); + assertThat(stats.getTotal().completion.getFields().containsField("baz.completion"), is(false)); } diff --git a/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java b/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java index 74920fb8fc7..a86b63c1a59 100644 --- a/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchIT.java @@ -32,6 +32,7 @@ import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; import org.elasticsearch.action.index.IndexRequestBuilder; import org.elasticsearch.action.search.SearchPhaseExecutionException; import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.common.FieldMemoryStats; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.Fuzziness; import org.elasticsearch.common.xcontent.XContentBuilder; @@ -750,7 +751,7 @@ public class CompletionSuggestSearchIT extends ESIntegTestCase { // regexes IndicesStatsResponse regexFieldStats = client().admin().indices().prepareStats(INDEX).setIndices(INDEX).setCompletion(true).setCompletionFields("*").get(); - ObjectLongHashMap<String> fields = regexFieldStats.getIndex(INDEX).getPrimaries().completion.getFields(); + FieldMemoryStats fields = regexFieldStats.getIndex(INDEX).getPrimaries().completion.getFields(); long regexSizeInBytes = fields.get(FIELD) + fields.get(otherField); assertThat(regexSizeInBytes, is(totalSizeInBytes)); } From 1f3eb068d54f6752e0dac155f8137b6eabc725bb Mon Sep 17 00:00:00 2001 From: Simon Willnauer Date: Sat, 17 Dec 2016 11:49:57 +0100 Subject: [PATCH 09/26] Add infrastructure to manage network connections outside of Transport/TransportService (#22194) Some expert users, such as UnicastZenPing, today establish real connections to nodes during their ping phase, and these connections can then be used by other parts of the system. Yet this is potentially dangerous and undesirable unless the nodes have been fully verified and should be connected to, as in the case of a cluster state update or a join of a newly elected master. For use-cases like this, this change adds the infrastructure to manually handle connections that are not publicly available on the node, i.e. that should not be managed by `Transport`/`TransportService`.
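For illustration, a minimal sketch of the intended call pattern (mirroring the test changes in this patch; the helper class name `PingHelper` is hypothetical, and a started `TransportService` plus a reachable node are assumed):

```java
import java.io.IOException;

import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.transport.ConnectionProfile;
import org.elasticsearch.transport.Transport;
import org.elasticsearch.transport.TransportService;

class PingHelper {
    // Open a caller-owned connection, verify the remote node with the new
    // high-level handshake, and let try-with-resources close it again. The
    // connection is never registered with the TransportService, so
    // transportService.nodeConnected(node) remains false throughout.
    static DiscoveryNode pingAndVerify(TransportService transportService, DiscoveryNode node,
                                       long handshakeTimeoutMillis) throws IOException {
        try (Transport.Connection connection =
                 transportService.openConnection(node, ConnectionProfile.LIGHT_PROFILE)) {
            // throws IllegalStateException on a cluster name or version mismatch
            return transportService.handshake(connection, handshakeTimeoutMillis);
        }
    }
}
```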
--- .../transport/TransportService.java | 35 +++++++++++++-- .../discovery/zen/UnicastZenPingTests.java | 3 +- .../TransportServiceHandshakeTests.java | 28 +++++++++--- .../test/transport/MockTransportService.java | 44 +++++++++++++++++-- .../AbstractSimpleTransportTestCase.java | 8 ++-- 5 files changed, 102 insertions(+), 16 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/transport/TransportService.java b/core/src/main/java/org/elasticsearch/transport/TransportService.java index a02b763f2d9..8884177ba63 100644 --- a/core/src/main/java/org/elasticsearch/transport/TransportService.java +++ b/core/src/main/java/org/elasticsearch/transport/TransportService.java @@ -290,6 +290,9 @@ public class TransportService extends AbstractLifecycleComponent { return transport.getLocalAddresses(); } + /** + * Returns true iff the given node is already connected. + */ public boolean nodeConnected(DiscoveryNode node) { return node.equals(localNode) || transport.nodeConnected(node); } @@ -311,6 +314,20 @@ public class TransportService extends AbstractLifecycleComponent { transport.connectToNode(node, connectionProfile); } + /** + * Establishes and returns a new connection to the given node. The connection is NOT maintained by this service; it is the caller's + * responsibility to close the connection once it goes out of scope. + * @param node the node to connect to + * @param profile the connection profile to use + */ + public Transport.Connection openConnection(final DiscoveryNode node, ConnectionProfile profile) throws IOException { + if (node.equals(localNode)) { + return localNodeConnection; + } else { + return transport.openConnection(node, profile); + } + } + /** * Lightly connect to the specified node, returning updated node * information. The handshake will fail if the cluster name on the @@ -337,7 +354,19 @@ public class TransportService extends AbstractLifecycleComponent { return handshakeNode; } - private DiscoveryNode handshake( + /** + * Executes a high-level handshake using the given connection + * and returns the discovery node of the node the connection + * was established with. The handshake will fail if the cluster + * name on the target node mismatches the local cluster name. + * + * @param connection the connection to a specific node + * @param handshakeTimeout handshake timeout + * @return the connected node + * @throws ConnectTransportException if the connection failed + * @throws IllegalStateException if the handshake failed + */ + public DiscoveryNode handshake( final Transport.Connection connection, final long handshakeTimeout) throws ConnectTransportException { final HandshakeResponse response; @@ -465,7 +494,7 @@ public class TransportService extends AbstractLifecycleComponent { } } - final <T extends TransportResponse> void sendRequest(final Transport.Connection connection, final String action, + public final <T extends TransportResponse> void sendRequest(final Transport.Connection connection, final String action, final TransportRequest request, final TransportRequestOptions options, TransportResponseHandler<T> handler) { @@ -477,7 +506,7 @@ public class TransportService extends AbstractLifecycleComponent { * Returns either a real transport connection or a local node connection if we are using the local node optimization.
* @throws NodeNotConnectedException if the given node is not connected */ - private Transport.Connection getConnection(DiscoveryNode node) { + public Transport.Connection getConnection(DiscoveryNode node) { if (Objects.requireNonNull(node, "node must be non-null").equals(localNode)) { return localNodeConnection; } else { diff --git a/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java b/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java index 9886abb900a..de8d1a562e8 100644 --- a/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java +++ b/core/src/test/java/org/elasticsearch/discovery/zen/UnicastZenPingTests.java @@ -40,6 +40,7 @@ import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.VersionUtils; import org.elasticsearch.test.junit.annotations.TestLogging; +import org.elasticsearch.test.transport.MockTransportService; import org.elasticsearch.threadpool.TestThreadPool; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.MockTcpTransport; @@ -571,7 +572,7 @@ public class UnicastZenPingTests extends ESTestCase { final BiFunction supplier) { final Transport transport = supplier.apply(settings, version); final TransportService transportService = - new TransportService(settings, transport, threadPool, TransportService.NOOP_TRANSPORT_INTERCEPTOR, null); + new MockTransportService(settings, transport, threadPool, TransportService.NOOP_TRANSPORT_INTERCEPTOR, null); transportService.start(); transportService.acceptIncomingRequests(); final ConcurrentMap counters = ConcurrentCollections.newConcurrentMap(); diff --git a/core/src/test/java/org/elasticsearch/transport/TransportServiceHandshakeTests.java b/core/src/test/java/org/elasticsearch/transport/TransportServiceHandshakeTests.java index 16735f34efe..fd756f6790e 100644 --- a/core/src/test/java/org/elasticsearch/transport/TransportServiceHandshakeTests.java +++ b/core/src/test/java/org/elasticsearch/transport/TransportServiceHandshakeTests.java @@ -113,14 +113,24 @@ public class TransportServiceHandshakeTests extends ESTestCase { emptyMap(), emptySet(), Version.CURRENT.minimumCompatibilityVersion()); + try (Transport.Connection connection = handleA.transportService.openConnection(discoveryNode, ConnectionProfile.LIGHT_PROFILE)){ + DiscoveryNode connectedNode = handleA.transportService.handshake(connection, timeout); + assertNotNull(connectedNode); + // the name and version should be updated + assertEquals(connectedNode.getName(), "TS_B"); + assertEquals(connectedNode.getVersion(), handleB.discoveryNode.getVersion()); + assertFalse(handleA.transportService.nodeConnected(discoveryNode)); + } + DiscoveryNode connectedNode = - handleA.transportService.connectToNodeAndHandshake(discoveryNode, timeout); + handleA.transportService.connectToNodeAndHandshake(discoveryNode, timeout); assertNotNull(connectedNode); // the name and version should be updated assertEquals(connectedNode.getName(), "TS_B"); assertEquals(connectedNode.getVersion(), handleB.discoveryNode.getVersion()); assertTrue(handleA.transportService.nodeConnected(discoveryNode)); + } public void testMismatchedClusterName() { @@ -133,8 +143,12 @@ public class TransportServiceHandshakeTests extends ESTestCase { emptyMap(), emptySet(), Version.CURRENT.minimumCompatibilityVersion()); - IllegalStateException ex = expectThrows(IllegalStateException.class, () -> handleA.transportService.connectToNodeAndHandshake( - 
discoveryNode, timeout)); + IllegalStateException ex = expectThrows(IllegalStateException.class, () -> { + try (Transport.Connection connection = handleA.transportService.openConnection(discoveryNode, + ConnectionProfile.LIGHT_PROFILE)) { + handleA.transportService.handshake(connection, timeout); + } + }); assertThat(ex.getMessage(), containsString("handshake failed, mismatched cluster name [Cluster [b]]")); assertFalse(handleA.transportService.nodeConnected(discoveryNode)); } @@ -150,8 +164,12 @@ public class TransportServiceHandshakeTests extends ESTestCase { emptyMap(), emptySet(), Version.CURRENT.minimumCompatibilityVersion()); - IllegalStateException ex = expectThrows(IllegalStateException.class, () -> handleA.transportService.connectToNodeAndHandshake( - discoveryNode, timeout)); + IllegalStateException ex = expectThrows(IllegalStateException.class, () -> { + try (Transport.Connection connection = handleA.transportService.openConnection(discoveryNode, + ConnectionProfile.LIGHT_PROFILE)) { + handleA.transportService.handshake(connection, timeout); + } + }); assertThat(ex.getMessage(), containsString("handshake failed, incompatible version")); assertFalse(handleA.transportService.nodeConnected(discoveryNode)); } diff --git a/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java b/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java index dd05457cec1..a35a1919cb7 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java +++ b/test/framework/src/main/java/org/elasticsearch/test/transport/MockTransportService.java @@ -57,11 +57,13 @@ import java.io.IOException; import java.net.UnknownHostException; import java.util.Arrays; import java.util.Collections; +import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Queue; import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.LinkedBlockingDeque; @@ -80,6 +82,7 @@ import java.util.concurrent.atomic.AtomicBoolean; */ public final class MockTransportService extends TransportService { + private final Map> openConnections = new HashMap<>(); public static class TestPlugin extends Plugin { @Override @@ -553,9 +556,7 @@ public final class MockTransportService extends TransportService { } @Override - public void close() { - transport.close(); - } + public void close() { transport.close(); } @Override public Map profileBoundAddresses() { @@ -701,4 +702,41 @@ public final class MockTransportService extends TransportService { } return transport; } + + @Override + public Transport.Connection openConnection(DiscoveryNode node, ConnectionProfile profile) throws IOException { + FilteredConnection filteredConnection = new FilteredConnection(super.openConnection(node, profile)) { + final AtomicBoolean closed = new AtomicBoolean(false); + @Override + public void close() throws IOException { + try { + super.close(); + } finally { + if (closed.compareAndSet(false, true)) { + synchronized (openConnections) { + List connections = openConnections.get(node); + boolean remove = connections.remove(this); + assert remove; + if (connections.isEmpty()) { + openConnections.remove(node); + } + } + } + } + + } + }; + synchronized (openConnections) { + List connections = openConnections.computeIfAbsent(node, + (n) -> new CopyOnWriteArrayList<>()); + 
connections.add(filteredConnection); + } + return filteredConnection; + } + + @Override + protected void doClose() { + super.doClose(); + assert openConnections.size() == 0 : "still open connections: " + openConnections; + } } diff --git a/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java b/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java index 542dcfb8b8b..283ae288d29 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java @@ -1351,8 +1351,8 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { // all is well } - try { - serviceB.connectToNodeAndHandshake(nodeA, 100); + try (Transport.Connection connection = serviceB.openConnection(nodeA, ConnectionProfile.LIGHT_PROFILE)){ + serviceB.handshake(connection, 100); fail("exception should be thrown"); } catch (IllegalStateException e) { // all is well } @@ -1409,8 +1409,8 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { // all is well } - try { - serviceB.connectToNodeAndHandshake(nodeA, 100); + try (Transport.Connection connection = serviceB.openConnection(nodeA, ConnectionProfile.LIGHT_PROFILE)){ + serviceB.handshake(connection, 100); fail("exception should be thrown"); } catch (IllegalStateException e) { // all is well From 58d73bae74fd41e8a588e2ea0ab8b66adf072d9f Mon Sep 17 00:00:00 2001 From: Jason Tedor Date: Sat, 17 Dec 2016 09:20:46 -0500 Subject: [PATCH 10/26] Tighten sequence numbers recovery This commit addresses issues related to recovery and sequence numbers: - A sequence number can be assigned and a Lucene commit created with a maximum sequence number at least as large as that sequence number, yet the operation corresponding to that sequence number can be missing from both the Lucene commit and the translog. This means that upon recovery the local checkpoint will be stuck at or below this missing sequence number. To address this, we force the local checkpoint to the maximum sequence number in the Lucene commit when opening the engine. Note that there can still be gaps in the history in the translog but we do not address those here. - The global checkpoint is transferred to the target shard at the end of peer recovery. - Additionally, we reenable the relocation integration tests. Lastly, this work uncovered some bugs in the assignment of sequence numbers on replica operations: - setting the sequence number on replica write requests was missing, very likely introduced as a result of resolving merge conflicts - operations that arrive out of order on a replica and have a version conflict with a previous operation were never marked as processed Relates #22212
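To make the first point concrete, here is a small self-contained sketch (a hypothetical `LocalCheckpointSketch`, not the actual Elasticsearch tracker) of why the local checkpoint gets stuck below a missing operation and how forcing it to the maximum sequence number of the Lucene commit on engine open unsticks it:

```java
import java.util.BitSet;

// Hypothetical illustration: the local checkpoint is the highest sequence
// number such that every operation at or below it has been processed.
class LocalCheckpointSketch {
    private final BitSet completed = new BitSet(); // int-indexed for brevity
    private long checkpoint = -1;                  // -1 == no operations performed

    void markSeqNoAsCompleted(long seqNo) {
        completed.set((int) seqNo);
        // advance over the contiguous prefix of completed sequence numbers
        while (completed.get((int) (checkpoint + 1))) {
            checkpoint++;
        }
    }

    // What this patch effectively does on engine open: fill any gaps up to the
    // maximum sequence number recorded in the Lucene commit.
    void forceToMaxSeqNo(long maxSeqNoInCommit) {
        while (checkpoint < maxSeqNoInCommit) {
            markSeqNoAsCompleted(checkpoint + 1);
        }
    }

    public static void main(String[] args) {
        LocalCheckpointSketch tracker = new LocalCheckpointSketch();
        tracker.markSeqNoAsCompleted(0);
        tracker.markSeqNoAsCompleted(2);        // seq# 1 is missing from commit and translog
        System.out.println(tracker.checkpoint); // 0 -- stuck below the gap
        tracker.forceToMaxSeqNo(2);             // applied when opening the engine
        System.out.println(tracker.checkpoint); // 2 -- recovery can proceed
    }
}
```

The bullet about operations that were never marked as processed corresponds to the try/finally blocks added to `InternalEngine#index` and `InternalEngine#delete` in this patch, which mark a sequence number as completed even when the operation results in a version conflict.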
--- .../action/bulk/TransportShardBulkAction.java | 14 +- .../action/delete/TransportDeleteAction.java | 1 + .../index/engine/InternalEngine.java | 143 ++++++++---- .../index/seqno/GlobalCheckpointService.java | 8 +- .../recovery/PeerRecoveryTargetService.java | 6 +- .../RecoveryFinalizeRecoveryRequest.java | 20 +- .../recovery/RecoverySourceHandler.java | 2 +- .../indices/recovery/RecoveryTarget.java | 3 +- .../recovery/RecoveryTargetHandler.java | 12 +- .../recovery/RemoteRecoveryTargetHandler.java | 4 +- .../index/engine/InternalEngineTests.java | 209 +++++++++++++++++- .../RecoveryDuringReplicationTests.java | 5 +- .../elasticsearch/recovery/RelocationIT.java | 29 +-- .../elasticsearch/test/ESIntegTestCase.java | 4 +- 14 files changed, 368 insertions(+), 92 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index b1fe096a564..cef89e1ce78 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -50,6 +50,7 @@ import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.mapper.MapperParsingException; +import org.elasticsearch.index.seqno.GlobalCheckpointSyncAction; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardClosedException; @@ -150,6 +151,7 @@ public class TransportShardBulkAction extends TransportWriteAction seqNoService().getLocalCheckpoint()); assert translog.getGeneration() != null; } catch (IOException | TranslogCorruptedException e) { throw new EngineCreationFailureException(shardId, "failed to create engine", e); } } @@ -412,7 +423,7 @@ public class InternalEngine extends Engine { @Override public GetResult get(Get get, Function<String, Searcher> searcherFactory) throws EngineException { - try (ReleasableLock lock = readLock.acquire()) { + try (ReleasableLock ignored = readLock.acquire()) { ensureOpen(); if (get.realtime()) { VersionValue versionValue = versionMap.getUnderLock(get.uid()); @@ -434,11 +445,28 @@ public class InternalEngine extends Engine { } } - private boolean checkVersionConflict( - final Operation op, - final long currentVersion, - final long expectedVersion, - final boolean deleted) { + /** + * Checks for version conflicts. If a version conflict exists, the optional return value represents the operation result. Otherwise, if + * no conflicts are found, the optional return value is not present.
+ * + * @param the result type + * @param op the operation + * @param currentVersion the current version + * @param expectedVersion the expected version + * @param deleted {@code true} if the current version is not found or represents a delete + * @param onSuccess if there is a version conflict that can be ignored, the result of the operation + * @param onFailure if there is a version conflict that can not be ignored, the result of the operation + * @return if there is a version conflict, the optional value is present and represents the operation result, otherwise the return value + * is not present + */ + private Optional checkVersionConflict( + final Operation op, + final long currentVersion, + final long expectedVersion, + final boolean deleted, + final Supplier onSuccess, + final Function onFailure) { + final T result; if (op.versionType() == VersionType.FORCE) { if (engineConfig.getIndexSettings().getIndexVersionCreated().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { // If index was created in 5.0 or later, 'force' is not allowed at all @@ -452,14 +480,22 @@ public class InternalEngine extends Engine { if (op.versionType().isVersionConflictForWrites(currentVersion, expectedVersion, deleted)) { if (op.origin().isRecovery()) { // version conflict, but okay - return true; + result = onSuccess.get(); } else { // fatal version conflict - throw new VersionConflictEngineException(shardId, op.type(), op.id(), + final VersionConflictEngineException e = + new VersionConflictEngineException( + shardId, + op.type(), + op.id(), op.versionType().explainConflictForWrites(currentVersion, expectedVersion, deleted)); + result = onFailure.apply(e); } + + return Optional.of(result); + } else { + return Optional.empty(); } - return false; } private long checkDeletedAndGCed(VersionValue versionValue) { @@ -475,7 +511,7 @@ public class InternalEngine extends Engine { @Override public IndexResult index(Index index) { IndexResult result; - try (ReleasableLock lock = readLock.acquire()) { + try (ReleasableLock ignored = readLock.acquire()) { ensureOpen(); if (index.origin().isRecovery()) { // Don't throttle recovery operations @@ -573,7 +609,7 @@ public class InternalEngine extends Engine { assert assertSequenceNumber(index.origin(), index.seqNo()); final Translog.Location location; final long updatedVersion; - IndexResult indexResult = null; + long seqNo = index.seqNo(); try (Releasable ignored = acquireLock(index.uid())) { lastWriteNanos = index.startTime(); /* if we have an autoGeneratedID that comes into the engine we can potentially optimize @@ -638,28 +674,33 @@ public class InternalEngine extends Engine { } } final long expectedVersion = index.version(); - if (checkVersionConflict(index, currentVersion, expectedVersion, deleted)) { - // skip index operation because of version conflict on recovery - indexResult = new IndexResult(expectedVersion, SequenceNumbersService.UNASSIGNED_SEQ_NO, false); + final Optional checkVersionConflictResult = + checkVersionConflict( + index, + currentVersion, + expectedVersion, + deleted, + () -> new IndexResult(currentVersion, index.seqNo(), false), + e -> new IndexResult(e, currentVersion, index.seqNo())); + + final IndexResult indexResult; + if (checkVersionConflictResult.isPresent()) { + indexResult = checkVersionConflictResult.get(); } else { - final long seqNo; + // no version conflict if (index.origin() == Operation.Origin.PRIMARY) { - seqNo = seqNoService.generateSeqNo(); - } else { - seqNo = index.seqNo(); + seqNo = seqNoService().generateSeqNo(); } + + /** + * 
Update the document's sequence number and primary term; the sequence number here is derived here from either the sequence + * number service if this is on the primary, or the existing document's sequence number if this is on the replica. The + * primary term here has already been set, see IndexShard#prepareIndex where the Engine$Index operation is created. + */ + index.parsedDoc().updateSeqID(seqNo, index.primaryTerm()); updatedVersion = index.versionType().updateVersion(currentVersion, expectedVersion); index.parsedDoc().version().setLongValue(updatedVersion); - // Update the document's sequence number and primary term, the - // sequence number here is derived here from either the sequence - // number service if this is on the primary, or the existing - // document's sequence number if this is on the replica. The - // primary term here has already been set, see - // IndexShard.prepareIndex where the Engine.Index operation is - // created - index.parsedDoc().updateSeqID(seqNo, index.primaryTerm()); - if (currentVersion == Versions.NOT_FOUND && forceUpdateDocument == false) { // document does not exists, we can optimize for create, but double check if assertions are running assert assertDocDoesNotExist(index, canOptimizeAddDocument == false); @@ -669,8 +710,8 @@ public class InternalEngine extends Engine { } indexResult = new IndexResult(updatedVersion, seqNo, deleted); location = index.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY - ? translog.add(new Translog.Index(index, indexResult)) - : null; + ? translog.add(new Translog.Index(index, indexResult)) + : null; versionMap.putUnderLock(index.uid().bytes(), new VersionValue(updatedVersion)); indexResult.setTranslogLocation(location); } @@ -678,8 +719,8 @@ public class InternalEngine extends Engine { indexResult.freeze(); return indexResult; } finally { - if (indexResult != null && indexResult.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { - seqNoService.markSeqNoAsCompleted(indexResult.getSeqNo()); + if (seqNo != SequenceNumbersService.UNASSIGNED_SEQ_NO) { + seqNoService().markSeqNoAsCompleted(seqNo); } } @@ -724,7 +765,7 @@ public class InternalEngine extends Engine { @Override public DeleteResult delete(Delete delete) { DeleteResult result; - try (ReleasableLock lock = readLock.acquire()) { + try (ReleasableLock ignored = readLock.acquire()) { ensureOpen(); // NOTE: we don't throttle this when merges fall behind because delete-by-id does not create new segments: result = innerDelete(delete); @@ -748,7 +789,7 @@ public class InternalEngine extends Engine { final Translog.Location location; final long updatedVersion; final boolean found; - DeleteResult deleteResult = null; + long seqNo = delete.seqNo(); try (Releasable ignored = acquireLock(delete.uid())) { lastWriteNanos = delete.startTime(); final long currentVersion; @@ -764,32 +805,40 @@ public class InternalEngine extends Engine { } final long expectedVersion = delete.version(); - if (checkVersionConflict(delete, currentVersion, expectedVersion, deleted)) { - // skip executing delete because of version conflict on recovery - deleteResult = new DeleteResult(expectedVersion, SequenceNumbersService.UNASSIGNED_SEQ_NO, true); + + final Optional result = + checkVersionConflict( + delete, + currentVersion, + expectedVersion, + deleted, + () -> new DeleteResult(expectedVersion, delete.seqNo(), true), + e -> new DeleteResult(e, expectedVersion, delete.seqNo())); + + final DeleteResult deleteResult; + if (result.isPresent()) { + deleteResult = result.get(); } else { - final 
long seqNo; if (delete.origin() == Operation.Origin.PRIMARY) { - seqNo = seqNoService.generateSeqNo(); - } else { - seqNo = delete.seqNo(); + seqNo = seqNoService().generateSeqNo(); } + updatedVersion = delete.versionType().updateVersion(currentVersion, expectedVersion); found = deleteIfFound(delete.uid(), currentVersion, deleted, versionValue); deleteResult = new DeleteResult(updatedVersion, seqNo, found); location = delete.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY - ? translog.add(new Translog.Delete(delete, deleteResult)) - : null; + ? translog.add(new Translog.Delete(delete, deleteResult)) + : null; versionMap.putUnderLock(delete.uid().bytes(), - new DeleteVersionValue(updatedVersion, engineConfig.getThreadPool().estimatedTimeInMillis())); + new DeleteVersionValue(updatedVersion, engineConfig.getThreadPool().estimatedTimeInMillis())); deleteResult.setTranslogLocation(location); } deleteResult.setTook(System.nanoTime() - delete.startTime()); deleteResult.freeze(); return deleteResult; } finally { - if (deleteResult != null && deleteResult.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) { - seqNoService.markSeqNoAsCompleted(deleteResult.getSeqNo()); + if (seqNo != SequenceNumbersService.UNASSIGNED_SEQ_NO) { + seqNoService().markSeqNoAsCompleted(seqNo); } } } diff --git a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointService.java b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointService.java index 18d15707e40..e8aa0cdeb89 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointService.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointService.java @@ -149,12 +149,14 @@ public class GlobalCheckpointService extends AbstractIndexShardComponent { * updates the global checkpoint on a replica shard (after it has been updated by the primary). */ synchronized void updateCheckpointOnReplica(long globalCheckpoint) { + /* + * The global checkpoint here is a local knowledge which is updated under the mandate of the primary. It can happen that the primary + * information is lagging compared to a replica (e.g., if a replica is promoted to primary but has stale info relative to other + * replica shards). In these cases, the local knowledge of the global checkpoint could be higher than sync from the lagging primary. + */ if (this.globalCheckpoint <= globalCheckpoint) { this.globalCheckpoint = globalCheckpoint; logger.trace("global checkpoint updated from primary to [{}]", globalCheckpoint); - } else { - throw new IllegalArgumentException("global checkpoint from primary should never decrease. 
current [" + - this.globalCheckpoint + "], got [" + globalCheckpoint + "]"); } } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java index 84f35ebca43..5f59af04b50 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/PeerRecoveryTargetService.java @@ -312,9 +312,9 @@ public class PeerRecoveryTargetService extends AbstractComponent implements Inde @Override public void messageReceived(RecoveryFinalizeRecoveryRequest request, TransportChannel channel) throws Exception { - try (RecoveriesCollection.RecoveryRef recoveryRef = onGoingRecoveries.getRecoverySafe(request.recoveryId(), request.shardId())) - { - recoveryRef.status().finalizeRecovery(); + try (RecoveriesCollection.RecoveryRef recoveryRef = + onGoingRecoveries.getRecoverySafe(request.recoveryId(), request.shardId())) { + recoveryRef.status().finalizeRecovery(request.globalCheckpoint()); } channel.sendResponse(TransportResponse.Empty.INSTANCE); } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java index dca50f3f816..eaace661077 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryFinalizeRecoveryRequest.java @@ -19,8 +19,10 @@ package org.elasticsearch.indices.recovery; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.transport.TransportRequest; @@ -29,15 +31,16 @@ import java.io.IOException; public class RecoveryFinalizeRecoveryRequest extends TransportRequest { private long recoveryId; - private ShardId shardId; + private long globalCheckpoint; public RecoveryFinalizeRecoveryRequest() { } - RecoveryFinalizeRecoveryRequest(long recoveryId, ShardId shardId) { + RecoveryFinalizeRecoveryRequest(final long recoveryId, final ShardId shardId, final long globalCheckpoint) { this.recoveryId = recoveryId; this.shardId = shardId; + this.globalCheckpoint = globalCheckpoint; } public long recoveryId() { @@ -48,11 +51,20 @@ public class RecoveryFinalizeRecoveryRequest extends TransportRequest { return shardId; } + public long globalCheckpoint() { + return globalCheckpoint; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); recoveryId = in.readLong(); shardId = ShardId.readShardId(in); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + globalCheckpoint = in.readZLong(); + } else { + globalCheckpoint = SequenceNumbersService.UNASSIGNED_SEQ_NO; + } } @Override @@ -60,5 +72,9 @@ public class RecoveryFinalizeRecoveryRequest extends TransportRequest { super.writeTo(out); out.writeLong(recoveryId); shardId.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + out.writeZLong(globalCheckpoint); + } } + } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java index 8b540fcb508..fa1a9a7979a 100644 --- 
a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java @@ -391,8 +391,8 @@ public class RecoverySourceHandler { StopWatch stopWatch = new StopWatch().start(); logger.trace("[{}][{}] finalizing recovery to {}", indexName, shardId, request.targetNode()); cancellableThreads.execute(() -> { - recoveryTarget.finalizeRecovery(); shard.markAllocationIdAsInSync(recoveryTarget.getTargetAllocationId()); + recoveryTarget.finalizeRecovery(shard.getGlobalCheckpoint()); }); if (request.isPrimaryRelocation()) { diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java index 1ae1d494ca6..67ee1a5ac9a 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java @@ -333,7 +333,8 @@ public class RecoveryTarget extends AbstractRefCounted implements RecoveryTarget } @Override - public void finalizeRecovery() { + public void finalizeRecovery(final long globalCheckpoint) { + indexShard().updateGlobalCheckpointOnReplica(globalCheckpoint); final IndexShard indexShard = indexShard(); indexShard.finalizeRecovery(); } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTargetHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTargetHandler.java index 86afa498e62..5cbfb4a53a5 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTargetHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryTargetHandler.java @@ -39,11 +39,12 @@ public interface RecoveryTargetHandler { void prepareForTranslogOperations(int totalTranslogOps, long maxUnsafeAutoIdTimestamp) throws IOException; /** - * The finalize request clears unreferenced translog files, refreshes the engine now that - * new segments are available, and enables garbage collection of - * tombstone files. - **/ - void finalizeRecovery(); + * The finalize request refreshes the engine now that new segments are available, enables garbage collection of tombstone files, and + * updates the global checkpoint. + * + * @param globalCheckpoint the global checkpoint on the recovery source + */ + void finalizeRecovery(long globalCheckpoint); /** * Blockingly waits for cluster state with at least clusterStateVersion to be available @@ -82,4 +83,5 @@ public interface RecoveryTargetHandler { * @return the allocation id of the target shard. 
*/ String getTargetAllocationId(); + } diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java index ef6b7c2f901..5fa1ca22c70 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RemoteRecoveryTargetHandler.java @@ -86,9 +86,9 @@ public class RemoteRecoveryTargetHandler implements RecoveryTargetHandler { } @Override - public void finalizeRecovery() { + public void finalizeRecovery(final long globalCheckpoint) { transportService.submitRequest(targetNode, PeerRecoveryTargetService.Actions.FINALIZE, - new RecoveryFinalizeRecoveryRequest(recoveryId, shardId), + new RecoveryFinalizeRecoveryRequest(recoveryId, shardId, globalCheckpoint), TransportRequestOptions.builder().withTimeout(recoverySettings.internalActionLongTimeout()).build(), EmptyTransportResponseHandler.INSTANCE_SAME).txGet(); } diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index 340ea745aae..22746aaf2a1 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -27,7 +27,6 @@ import org.apache.logging.log4j.core.LogEvent; import org.apache.logging.log4j.core.appender.AbstractAppender; import org.apache.logging.log4j.core.filter.RegexFilter; import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.analysis.Tokenizer; import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.codecs.Codec; import org.apache.lucene.document.Field; @@ -74,6 +73,7 @@ import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.support.TransportActions; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Randomness; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; @@ -130,7 +130,6 @@ import org.hamcrest.MatcherAssert; import org.junit.After; import org.junit.Before; -import java.io.IOError; import java.io.IOException; import java.io.InputStream; import java.nio.charset.Charset; @@ -153,11 +152,15 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.LongSupplier; import java.util.function.Supplier; +import java.util.stream.Collectors; import static java.util.Collections.emptyMap; +import static org.elasticsearch.index.engine.Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.PRIMARY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.REPLICA; +import static org.elasticsearch.index.engine.Engine.Operation.Origin.PEER_RECOVERY; import static org.hamcrest.CoreMatchers.instanceOf; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; @@ -319,12 +322,27 @@ public class InternalEngineTests extends ESTestCase { } protected InternalEngine createEngine(IndexSettings indexSettings, Store store, Path translogPath, MergePolicy mergePolicy, Supplier 
indexWriterSupplier) throws IOException { + return createEngine(indexSettings, store, translogPath, mergePolicy, indexWriterSupplier, null); + } + + protected InternalEngine createEngine( + IndexSettings indexSettings, + Store store, + Path translogPath, + MergePolicy mergePolicy, + Supplier indexWriterSupplier, + Supplier sequenceNumbersServiceSupplier) throws IOException { EngineConfig config = config(indexSettings, store, translogPath, mergePolicy, IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null); InternalEngine internalEngine = new InternalEngine(config) { @Override IndexWriter createWriter(boolean create) throws IOException { return (indexWriterSupplier != null) ? indexWriterSupplier.get() : super.createWriter(create); } + + @Override + public SequenceNumbersService seqNoService() { + return (sequenceNumbersServiceSupplier != null) ? sequenceNumbersServiceSupplier.get() : super.seqNoService(); + } }; if (config.getOpenMode() == EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG) { internalEngine.recoverFromTranslog(); @@ -2914,6 +2932,193 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); } + public void testSequenceNumberAdvancesToMaxSeqOnEngineOpenOnPrimary() throws BrokenBarrierException, InterruptedException, IOException { + engine.close(); + final int docs = randomIntBetween(1, 32); + InternalEngine initialEngine = null; + try { + final CountDownLatch latch = new CountDownLatch(1); + final CyclicBarrier barrier = new CyclicBarrier(2); + final AtomicBoolean skip = new AtomicBoolean(); + final AtomicLong expectedLocalCheckpoint = new AtomicLong(SequenceNumbersService.NO_OPS_PERFORMED); + final List threads = new ArrayList<>(); + final SequenceNumbersService seqNoService = + new SequenceNumbersService( + shardId, + defaultSettings, + SequenceNumbersService.NO_OPS_PERFORMED, + SequenceNumbersService.NO_OPS_PERFORMED, + SequenceNumbersService.UNASSIGNED_SEQ_NO) { + @Override + public long generateSeqNo() { + final long seqNo = super.generateSeqNo(); + if (skip.get()) { + try { + barrier.await(); + latch.await(); + } catch (BrokenBarrierException | InterruptedException e) { + throw new RuntimeException(e); + } + } else { + if (expectedLocalCheckpoint.get() + 1 == seqNo) { + expectedLocalCheckpoint.set(seqNo); + } + } + return seqNo; + } + }; + initialEngine = createEngine(defaultSettings, store, primaryTranslogDir, newMergePolicy(), null, () -> seqNoService); + final InternalEngine finalInitialEngine = initialEngine; + for (int i = 0; i < docs; i++) { + final String id = Integer.toString(i); + final Term uid = newUid(id); + final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); + + skip.set(randomBoolean()); + final Thread thread = new Thread(() -> finalInitialEngine.index(new Engine.Index(uid, doc))); + thread.start(); + if (skip.get()) { + threads.add(thread); + barrier.await(); + } else { + thread.join(); + } + } + + assertThat(initialEngine.seqNoService().getLocalCheckpoint(), equalTo(expectedLocalCheckpoint.get())); + assertThat(initialEngine.seqNoService().getMaxSeqNo(), equalTo((long) (docs - 1))); + initialEngine.flush(true, true); + + latch.countDown(); + for (final Thread thread : threads) { + thread.join(); + } + } finally { + IOUtils.close(initialEngine); + } + + try (final Engine recoveringEngine = + new InternalEngine(copy(initialEngine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG))) { + assertThat(recoveringEngine.seqNoService().getLocalCheckpoint(), greaterThanOrEqualTo((long) 
(docs - 1))); + } + } + + public void testSequenceNumberAdvancesToMaxSeqNoOnEngineOpenOnReplica() throws IOException { + final long v = Versions.MATCH_ANY; + final VersionType t = VersionType.INTERNAL; + final long ts = IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP; + final int docs = randomIntBetween(1, 32); + InternalEngine initialEngine = null; + try { + initialEngine = engine; + for (int i = 0; i < docs; i++) { + final String id = Integer.toString(i); + final Term uid = newUid(id); + final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); + // create a gap at sequence number 3 * i + 1 + initialEngine.index(new Engine.Index(uid, doc, 3 * i, 1, v, t, REPLICA, System.nanoTime(), ts, false)); + initialEngine.delete(new Engine.Delete("type", id, uid, 3 * i + 2, 1, v, t, REPLICA, System.nanoTime())); + } + + // bake the commit with the local checkpoint stuck at 0 and gaps all along the way up to the max sequence number + assertThat(initialEngine.seqNoService().getLocalCheckpoint(), equalTo((long) 0)); + assertThat(initialEngine.seqNoService().getMaxSeqNo(), equalTo((long) (3 * (docs - 1) + 2))); + initialEngine.flush(true, true); + + for (int i = 0; i < docs; i++) { + final String id = Integer.toString(i); + final Term uid = newUid(id); + final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); + initialEngine.index(new Engine.Index(uid, doc, 3 * i + 1, 1, v, t, REPLICA, System.nanoTime(), ts, false)); + } + } finally { + IOUtils.close(initialEngine); + } + + try (final Engine recoveringEngine = + new InternalEngine(copy(initialEngine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG))) { + assertThat(recoveringEngine.seqNoService().getLocalCheckpoint(), greaterThanOrEqualTo((long) (3 * (docs - 1) + 2 - 1))); + } + } + + public void testOutOfOrderSequenceNumbersWithVersionConflict() throws IOException { + final List operations = new ArrayList<>(); + + final int numberOfOperations = randomIntBetween(16, 32); + final Term uid = newUid("1"); + final Document document = testDocumentWithTextField(); + final AtomicLong sequenceNumber = new AtomicLong(); + final Engine.Operation.Origin origin = randomFrom(LOCAL_TRANSLOG_RECOVERY, PEER_RECOVERY, PRIMARY, REPLICA); + final LongSupplier sequenceNumberSupplier = + origin == PRIMARY ? 
() -> SequenceNumbersService.UNASSIGNED_SEQ_NO : sequenceNumber::getAndIncrement; + document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); + final ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); + for (int i = 0; i < numberOfOperations; i++) { + if (randomBoolean()) { + final Engine.Index index = new Engine.Index( + uid, + doc, + sequenceNumberSupplier.getAsLong(), + 1, + i, + VersionType.EXTERNAL, + origin, + System.nanoTime(), + IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, + false); + operations.add(index); + } else { + final Engine.Delete delete = new Engine.Delete( + "test", + "1", + uid, + sequenceNumberSupplier.getAsLong(), + 1, + i, + VersionType.EXTERNAL, + origin, + System.nanoTime()); + operations.add(delete); + } + } + + final boolean exists = operations.get(operations.size() - 1) instanceof Engine.Index; + Randomness.shuffle(operations); + + for (final Engine.Operation operation : operations) { + if (operation instanceof Engine.Index) { + engine.index((Engine.Index) operation); + } else { + engine.delete((Engine.Delete) operation); + } + } + + final long expectedLocalCheckpoint; + if (origin == PRIMARY) { + // we can only advance as far as the number of operations that did not conflict + int count = 0; + + // each time the version increments as we walk the list, that counts as a successful operation + long version = -1; + for (int i = 0; i < numberOfOperations; i++) { + if (operations.get(i).version() >= version) { + count++; + version = operations.get(i).version(); + } + } + + // sequence numbers start at zero, so the expected local checkpoint is the number of successful operations minus one + expectedLocalCheckpoint = count - 1; + } else { + expectedLocalCheckpoint = numberOfOperations - 1; + } + + assertThat(engine.seqNoService().getLocalCheckpoint(), equalTo(expectedLocalCheckpoint)); + try (final Engine.GetResult result = engine.get(new Engine.Get(true, uid))) { + assertThat(result.exists(), equalTo(exists)); + } + } + /** * Return a tuple representing the sequence ID for the given {@code Get} * operation. 
The first value in the tuple is the sequence number, the diff --git a/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java b/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java index bafe6350a51..93b20633cf1 100644 --- a/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java +++ b/core/src/test/java/org/elasticsearch/index/replication/RecoveryDuringReplicationTests.java @@ -109,13 +109,14 @@ public class RecoveryDuringReplicationTests extends ESIndexLevelReplicationTestC } @Override - public void finalizeRecovery() { + public void finalizeRecovery(long globalCheckpoint) { if (hasBlocked() == false) { // it may be that no ops have been transferred, block now blockIfNeeded(RecoveryState.Stage.TRANSLOG); } blockIfNeeded(RecoveryState.Stage.FINALIZE); - super.finalizeRecovery(); + super.finalizeRecovery(globalCheckpoint); } + } } diff --git a/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java b/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java index f4de4bdea0b..bbd1ad56bd0 100644 --- a/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java +++ b/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java @@ -23,14 +23,12 @@ import com.carrotsearch.hppc.IntHashSet; import com.carrotsearch.hppc.procedures.IntProcedure; import org.apache.lucene.index.IndexFileNames; import org.apache.lucene.util.English; -import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.admin.indices.stats.IndexShardStats; import org.elasticsearch.action.admin.indices.stats.IndexStats; import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse; import org.elasticsearch.action.admin.indices.stats.ShardStats; import org.elasticsearch.action.index.IndexRequestBuilder; -import org.elasticsearch.action.search.SearchPhaseExecutionException; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.client.Client; import org.elasticsearch.cluster.ClusterState; @@ -64,7 +62,6 @@ import org.elasticsearch.test.MockIndexEventListener; import org.elasticsearch.test.junit.annotations.TestLogging; import org.elasticsearch.test.transport.MockTransportService; import org.elasticsearch.transport.Transport; -import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; @@ -98,8 +95,6 @@ import static org.hamcrest.Matchers.startsWith; @ClusterScope(scope = Scope.TEST, numDataNodes = 0) @TestLogging("_root:DEBUG,org.elasticsearch.indices.recovery:TRACE,org.elasticsearch.index.shard.service:TRACE") -@LuceneTestCase.AwaitsFix(bugUrl = "primary relocation needs to transfer the global check point. 
otherwise the new primary sends a " + - "an unknown global checkpoint during sync, causing assertions to trigger") public class RelocationIT extends ESIntegTestCase { private final TimeValue ACCEPTABLE_RELOCATION_TIME = new TimeValue(5, TimeUnit.MINUTES); @@ -183,15 +178,13 @@ public class RelocationIT extends ESIntegTestCase { clusterHealthResponse = client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForNoRelocatingShards(true).setTimeout(ACCEPTABLE_RELOCATION_TIME).execute().actionGet(); assertThat(clusterHealthResponse.isTimedOut(), equalTo(false)); - clusterHealthResponse = client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForNoRelocatingShards(true).setTimeout(ACCEPTABLE_RELOCATION_TIME).execute().actionGet(); - assertThat(clusterHealthResponse.isTimedOut(), equalTo(false)); logger.info("--> verifying count again..."); client().admin().indices().prepareRefresh().execute().actionGet(); assertThat(client().prepareSearch("test").setSize(0).execute().actionGet().getHits().totalHits(), equalTo(20L)); } - @TestLogging("action.index:TRACE,action.bulk:TRACE,action.search:TRACE") + @TestLogging("org.elasticsearch.action.index:TRACE,org.elasticsearch.action.bulk:TRACE,org.elasticsearch.action.search:TRACE") public void testRelocationWhileIndexingRandom() throws Exception { int numberOfRelocations = scaledRandomIntBetween(1, rarely() ? 10 : 4); int numberOfReplicas = randomBoolean() ? 0 : 1; @@ -210,12 +203,12 @@ public class RelocationIT extends ESIntegTestCase { ).get(); - for (int i = 1; i < numberOfNodes; i++) { - logger.info("--> starting [node{}] ...", i + 1); - nodes[i] = internalCluster().startNode(); - if (i != numberOfNodes - 1) { + for (int i = 2; i <= numberOfNodes; i++) { + logger.info("--> starting [node{}] ...", i); + nodes[i - 1] = internalCluster().startNode(); + if (i != numberOfNodes) { ClusterHealthResponse healthResponse = client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID) - .setWaitForNodes(Integer.toString(i + 1)).setWaitForGreenStatus().execute().actionGet(); + .setWaitForNodes(Integer.toString(i)).setWaitForGreenStatus().execute().actionGet(); assertThat(healthResponse.isTimedOut(), equalTo(false)); } } @@ -246,8 +239,6 @@ public class RelocationIT extends ESIntegTestCase { } ClusterHealthResponse clusterHealthResponse = client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForNoRelocatingShards(true).setTimeout(ACCEPTABLE_RELOCATION_TIME).execute().actionGet(); assertThat(clusterHealthResponse.isTimedOut(), equalTo(false)); - clusterHealthResponse = client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForNoRelocatingShards(true).setTimeout(ACCEPTABLE_RELOCATION_TIME).execute().actionGet(); - assertThat(clusterHealthResponse.isTimedOut(), equalTo(false)); indexer.pauseIndexing(); logger.info("--> DONE relocate the shard from {} to {}", fromNode, toNode); } @@ -261,7 +252,6 @@ public class RelocationIT extends ESIntegTestCase { logger.info("--> searching the index"); boolean ranOnce = false; for (int i = 0; i < 10; i++) { - try { logger.info("--> START search test round {}", i + 1); SearchHits hits = client().prepareSearch("test").setQuery(matchAllQuery()).setSize((int) indexer.totalIndexedDocs()).storedFields().execute().actionGet().getHits(); ranOnce = true; @@ -283,10 +273,7 @@ public class RelocationIT extends ESIntegTestCase { } assertThat(hits.totalHits(), equalTo(indexer.totalIndexedDocs())); logger.info("--> 
DONE search test round {}", i + 1); - } catch (SearchPhaseExecutionException ex) { - // TODO: the first run fails with this failure, waiting for relocating nodes set to 0 is not enough? - logger.warn("Got exception while searching.", ex); - } + } if (!ranOnce) { fail(); @@ -294,6 +281,7 @@ } } + @TestLogging("org.elasticsearch.action.index:TRACE,org.elasticsearch.action.bulk:TRACE,org.elasticsearch.action.search:TRACE") public void testRelocationWhileRefreshing() throws Exception { int numberOfRelocations = scaledRandomIntBetween(1, rarely() ? 10 : 4); int numberOfReplicas = randomBoolean() ? 0 : 1; @@ -464,6 +452,7 @@ public class RelocationIT extends ESIntegTestCase { } } + @TestLogging("org.elasticsearch.action.index:TRACE,org.elasticsearch.action.bulk:TRACE,org.elasticsearch.action.search:TRACE") public void testIndexAndRelocateConcurrently() throws ExecutionException, InterruptedException { int halfNodes = randomIntBetween(1, 3); Settings[] nodeSettings = Stream.concat( diff --git a/test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java index 7ff288f3a6b..cb31ac6028b 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/ESIntegTestCase.java @@ -1323,7 +1323,7 @@ public abstract class ESIntegTestCase extends ESTestCase { /** * Indexes the given {@link IndexRequestBuilder} instances randomly. It shuffles the given builders and either - * indexes they in a blocking or async fashion. This is very useful to catch problems that relate to internal document + * indexes them in a blocking or async fashion. This is very useful to catch problems that relate to internal document * ids or index segment creations. Some features might have a bug when a given document is the first or the last in a * segment or if only one document is in a segment etc. This method prevents issues like this by randomizing the index * layout. @@ -1339,7 +1339,7 @@ public abstract class ESIntegTestCase extends ESTestCase { /** * Indexes the given {@link IndexRequestBuilder} instances randomly. It shuffles the given builders and either - * indexes they in a blocking or async fashion. This is very useful to catch problems that relate to internal document + * indexes them in a blocking or async fashion. This is very useful to catch problems that relate to internal document * ids or index segment creations. Some features might have a bug when a given document is the first or the last in a * segment or if only one document is in a segment etc. This method prevents issues like this by randomizing the index * layout. 
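The randomized indexing strategy this javadoc describes is easier to picture in code. Below is a minimal sketch of the idea rather than the actual ESIntegTestCase implementation: the class and method names are invented, and it assumes the 5.x IndexRequestBuilder API in which `get()` blocks on the response while `execute()` fires the request asynchronously.

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

import org.elasticsearch.action.index.IndexRequestBuilder;

final class RandomizedIndexingSketch {

    // Shuffle the builders and execute each one either blocking or async at
    // random, so which document ends up first or last in a segment (or alone
    // in a segment) varies from run to run.
    static void indexRandomly(List<IndexRequestBuilder> builders, Random random) {
        Collections.shuffle(builders, random); // randomize the index layout
        for (IndexRequestBuilder builder : builders) {
            if (random.nextBoolean()) {
                builder.get();     // blocking: wait for the index response
            } else {
                builder.execute(); // async: fire the request without waiting
            }
        }
    }
}
```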
From b78f7bc51d07bc73b8d2c41f45595b61bff9edcf Mon Sep 17 00:00:00 2001 From: Boaz Leskes Date: Sun, 18 Dec 2016 08:05:59 +0100 Subject: [PATCH 11/26] InternalEngine should use global checkpoint when committing the translog relates to #22212 --- .../org/elasticsearch/index/engine/InternalEngine.java | 2 +- .../index/engine/InternalEngineTests.java | 10 ++++++---- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java index 905b08f2bd3..2957b9ab064 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java +++ b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java @@ -188,7 +188,7 @@ public class InternalEngine extends Engine { seqNoService().markSeqNoAsCompleted(seqNoService().getLocalCheckpoint() + 1); } indexWriter = writer; - translog = openTranslog(engineConfig, writer, () -> seqNoService().getLocalCheckpoint()); + translog = openTranslog(engineConfig, writer, () -> seqNoService().getGlobalCheckpoint()); assert translog.getGeneration() != null; } catch (IOException | TranslogCorruptedException e) { throw new EngineCreationFailureException(shardId, "failed to create engine", e); diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index 22746aaf2a1..523efb6829c 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -154,13 +154,12 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; import java.util.function.LongSupplier; import java.util.function.Supplier; -import java.util.stream.Collectors; import static java.util.Collections.emptyMap; import static org.elasticsearch.index.engine.Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY; +import static org.elasticsearch.index.engine.Engine.Operation.Origin.PEER_RECOVERY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.PRIMARY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.REPLICA; -import static org.elasticsearch.index.engine.Engine.Operation.Origin.PEER_RECOVERY; import static org.hamcrest.CoreMatchers.instanceOf; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; @@ -1694,8 +1693,10 @@ public class InternalEngineTests extends ESTestCase { } } - replicaLocalCheckpoint = - rarely() ? 
replicaLocalCheckpoint : randomIntBetween(Math.toIntExact(replicaLocalCheckpoint), Math.toIntExact(primarySeqNo)); + if (randomInt(10) < 3) { + // only update rarely as we do this for every doc + replicaLocalCheckpoint = randomIntBetween(Math.toIntExact(replicaLocalCheckpoint), Math.toIntExact(primarySeqNo)); + } initialEngine.seqNoService().updateLocalCheckpointForShard("primary", initialEngine.seqNoService().getLocalCheckpoint()); initialEngine.seqNoService().updateLocalCheckpointForShard("replica", replicaLocalCheckpoint); @@ -1707,6 +1708,7 @@ public class InternalEngineTests extends ESTestCase { } } + logger.info("localcheckpoint {}, global {}", replicaLocalCheckpoint, primarySeqNo); initialEngine.seqNoService().updateGlobalCheckpointOnPrimary(); globalCheckpoint = initialEngine.seqNoService().getGlobalCheckpoint(); From ccfeac8dd5bed28f047b1ce9caf26ab3ae96cc28 Mon Sep 17 00:00:00 2001 From: Simon Willnauer Date: Sun, 18 Dec 2016 09:26:53 +0100 Subject: [PATCH 12/26] Remove `doHandshake` test-only settings from TcpTransport (#22241) In #22094 we introduced a test-only setting to simulate transport implementations that don't support handshakes. This commit implements the same logic without a setting. --- .../elasticsearch/transport/TcpTransport.java | 38 +++++----------- .../netty4/SimpleNetty4TransportTests.java | 22 +++++++-- .../AbstractSimpleTransportTestCase.java | 45 +++++++++++++------ .../transport/MockTcpTransportTests.java | 17 ++++++- 4 files changed, 77 insertions(+), 45 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/transport/TcpTransport.java b/core/src/main/java/org/elasticsearch/transport/TcpTransport.java index cd23fc9b074..2e8cb4f65ce 100644 --- a/core/src/main/java/org/elasticsearch/transport/TcpTransport.java +++ b/core/src/main/java/org/elasticsearch/transport/TcpTransport.java @@ -148,9 +148,6 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i Setting.byteSizeSetting("transport.tcp.receive_buffer_size", NetworkService.TcpSettings.TCP_RECEIVE_BUFFER_SIZE, Setting.Property.NodeScope); - // test-setting only - static final Setting CONNECTION_HANDSHAKE = Setting.boolSetting("transport.tcp.handshake", true); - private static final long NINETY_PER_HEAP_SIZE = (long) (JvmInfo.jvmInfo().getMem().getHeapMax().getBytes() * 0.9); private static final int PING_DATA_SIZE = -1; protected final boolean blockingClient; @@ -161,7 +158,6 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i protected final ThreadPool threadPool; private final BigArrays bigArrays; protected final NetworkService networkService; - private final boolean doHandshakes; protected volatile TransportServiceAdapter transportServiceAdapter; // node id to actual channel @@ -200,7 +196,6 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i this.transportName = transportName; this.blockingClient = TCP_BLOCKING_CLIENT.get(settings); defaultConnectionProfile = buildDefaultConnectionProfile(settings); - this.doHandshakes = CONNECTION_HANDSHAKE.get(settings); } static ConnectionProfile buildDefaultConnectionProfile(Settings settings) { @@ -463,21 +458,13 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i "failed to connect to [{}], cleaning dangling connections", node), e); throw e; } - if (doHandshakes) { // some tests need to disable this - Channel channel = nodeChannels.channel(TransportRequestOptions.Type.PING); - final TimeValue connectTimeout = connectionProfile.getConnectTimeout() == null ? 
- defaultConnectionProfile.getConnectTimeout(): - connectionProfile.getConnectTimeout(); - final TimeValue handshakeTimeout = connectionProfile.getHandshakeTimeout() == null ? - connectTimeout : connectionProfile.getHandshakeTimeout(); - Version version = executeHandshake(node, channel, handshakeTimeout); - if (version != null) { - // this is a BWC layer, if we talk to a pre 5.2 node then the handshake is not supported - // this will go away in master once it's all ported to 5.2 but for now we keep this to make - // the backport straight forward - nodeChannels = new NodeChannels(nodeChannels, version); - } - } + Channel channel = nodeChannels.channel(TransportRequestOptions.Type.PING); + final TimeValue connectTimeout = connectionProfile.getConnectTimeout() == null ? + defaultConnectionProfile.getConnectTimeout() : + connectionProfile.getConnectTimeout(); + final TimeValue handshakeTimeout = connectionProfile.getHandshakeTimeout() == null ? + connectTimeout : connectionProfile.getHandshakeTimeout(); + Version version = executeHandshake(node, channel, handshakeTimeout); // we acquire a connection lock, so no way there is an existing connection connectedNodes.put(node, nodeChannels); if (logger.isDebugEnabled()) { @@ -1130,7 +1117,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i * @param length the payload length in bytes * @see TcpHeader */ - private BytesReference buildHeader(long requestId, byte status, Version protocolVersion, int length) throws IOException { + final BytesReference buildHeader(long requestId, byte status, Version protocolVersion, int length) throws IOException { try (BytesStreamOutput headerOutput = new BytesStreamOutput(TcpHeader.HEADER_SIZE)) { headerOutput.setVersion(protocolVersion); TcpHeader.writeHeader(headerOutput, requestId, status, protocolVersion, length); @@ -1306,7 +1293,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i handleRequest(channel, profileName, streamIn, requestId, messageLengthBytes, version, remoteAddress, status); } else { final TransportResponseHandler handler; - if (TransportStatus.isHandshake(status) && doHandshakes) { + if (TransportStatus.isHandshake(status)) { handler = pendingHandshakes.remove(requestId); } else { TransportResponseHandler theHandler = transportServiceAdapter.onResponseReceived(requestId); @@ -1398,7 +1385,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i transportServiceAdapter.onRequestReceived(requestId, action); TransportChannel transportChannel = null; try { - if (TransportStatus.isHandshake(status) && doHandshakes) { + if (TransportStatus.isHandshake(status)) { final VersionHandshakeResponse response = new VersionHandshakeResponse(getCurrentVersion()); sendResponse(version, channel, response, requestId, HANDSHAKE_ACTION_NAME, TransportResponseOptions.EMPTY, TransportStatus.setHandshake((byte)0)); @@ -1509,8 +1496,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i } } - // pkg private for testing - final Version executeHandshake(DiscoveryNode node, Channel channel, TimeValue timeout) throws IOException, InterruptedException { + protected Version executeHandshake(DiscoveryNode node, Channel channel, TimeValue timeout) throws IOException, InterruptedException { numHandshakes.inc(); final long requestId = newRequestId(); final HandshakeResponseHandler handler = new HandshakeResponseHandler(channel); @@ -1520,7 +1506,7 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i boolean success = 
false; try { if (isOpen(channel) == false) { - // we have to protect ourself here since sendRequestToChannel won't barf if the channel is closed. + // we have to protect ourselves here since sendRequestToChannel won't barf if the channel is closed. // it's weird but to change it will cause a lot of impact on the exception handling code all over the codebase. // yet, if we don't check the state here we might have registered a pending handshake handler but the close // listener calling #onChannelClosed might have already run and we are waiting on the latch below until we time out. diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java index a7a674007ba..3875cea31ac 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/transport/netty4/SimpleNetty4TransportTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.transport.netty4; +import io.netty.channel.Channel; import org.elasticsearch.Version; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; @@ -26,6 +27,7 @@ import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.node.Node; @@ -38,6 +40,7 @@ import org.elasticsearch.transport.Transport; import org.elasticsearch.transport.TransportService; import org.elasticsearch.transport.TransportSettings; +import java.io.IOException; import java.net.InetAddress; import java.net.UnknownHostException; import java.util.Collections; @@ -49,10 +52,21 @@ import static org.hamcrest.Matchers.containsString; public class SimpleNetty4TransportTests extends AbstractSimpleTransportTestCase { public static MockTransportService nettyFromThreadPool(Settings settings, ThreadPool threadPool, final Version version, - ClusterSettings clusterSettings) { + ClusterSettings clusterSettings, boolean doHandshake) { NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); Transport transport = new Netty4Transport(settings, threadPool, new NetworkService(settings, Collections.emptyList()), BigArrays.NON_RECYCLING_INSTANCE, namedWriteableRegistry, new NoneCircuitBreakerService()) { + + @Override + protected Version executeHandshake(DiscoveryNode node, Channel channel, TimeValue timeout) throws IOException, + InterruptedException { + if (doHandshake) { + return super.executeHandshake(node, channel, timeout); + } else { + return version.minimumCompatibilityVersion(); + } + } + @Override protected Version getCurrentVersion() { return version; @@ -63,9 +77,9 @@ public class SimpleNetty4TransportTests extends AbstractSimpleTransportTestCase } @Override - protected MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings) { + protected MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings, boolean doHandshake) { settings = Settings.builder().put(settings).put(TransportSettings.PORT.getKey(), "0").build(); -
MockTransportService transportService = nettyFromThreadPool(settings, threadPool, version, clusterSettings); + MockTransportService transportService = nettyFromThreadPool(settings, threadPool, version, clusterSettings, doHandshake); transportService.start(); return transportService; } @@ -92,7 +106,7 @@ public class SimpleNetty4TransportTests extends AbstractSimpleTransportTestCase .build(); ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); BindTransportException bindTransportException = expectThrows(BindTransportException.class, () -> { - MockTransportService transportService = nettyFromThreadPool(settings, threadPool, Version.CURRENT, clusterSettings); + MockTransportService transportService = nettyFromThreadPool(settings, threadPool, Version.CURRENT, clusterSettings, true); try { transportService.start(); } finally { diff --git a/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java b/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java index 283ae288d29..540420f35e7 100644 --- a/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/transport/AbstractSimpleTransportTestCase.java @@ -29,15 +29,21 @@ import org.elasticsearch.Version; import org.elasticsearch.action.ActionListenerResponseHandler; import org.elasticsearch.action.support.PlainActionFuture; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.io.stream.InputStreamStreamInput; +import org.elasticsearch.common.io.stream.NamedWriteableRegistry; +import org.elasticsearch.common.io.stream.OutputStreamStreamOutput; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.ConcurrentCollections; +import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.node.Node; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.junit.annotations.TestLogging; @@ -48,13 +54,15 @@ import org.junit.After; import org.junit.Before; import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; import java.io.UncheckedIOException; import java.net.InetAddress; import java.net.InetSocketAddress; import java.net.ServerSocket; import java.net.Socket; -import java.nio.channels.ClosedChannelException; import java.util.ArrayList; +import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -70,7 +78,6 @@ import java.util.concurrent.atomic.AtomicReference; import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; -import static org.hamcrest.Matchers.anyOf; import static org.hamcrest.Matchers.empty; import static org.hamcrest.Matchers.endsWith; import static org.hamcrest.Matchers.equalTo; @@ -94,7 +101,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { protected volatile DiscoveryNode 
nodeB; protected volatile MockTransportService serviceB; - protected abstract MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings); + protected abstract MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings, boolean doHandshake); @Override @Before @@ -149,7 +156,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { } private MockTransportService buildService(final String name, final Version version, ClusterSettings clusterSettings, - Settings settings, boolean acceptRequests) { + Settings settings, boolean acceptRequests, boolean doHandshake) { MockTransportService service = build( Settings.builder() .put(settings) @@ -158,7 +165,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { .put(TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), "NOTHING") .build(), version, - clusterSettings); + clusterSettings, doHandshake); if (acceptRequests) { service.acceptIncomingRequests(); } @@ -166,7 +173,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { } private MockTransportService buildService(final String name, final Version version, ClusterSettings clusterSettings) { - return buildService(name, version, clusterSettings, Settings.EMPTY, true); + return buildService(name, version, clusterSettings, Settings.EMPTY, true, true); } @Override @@ -1463,7 +1470,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { public void testBlockingIncomingRequests() throws Exception { try (TransportService service = buildService("TS_TEST", version0, null, - Settings.builder().put(TcpTransport.CONNECTION_HANDSHAKE.getKey(), false).build(), false)) { + Settings.EMPTY, false, false)) { AtomicBoolean requestProcessed = new AtomicBoolean(false); service.registerRequestHandler("action", TestRequest::new, ThreadPool.Names.SAME, (request, channel) -> { @@ -1475,7 +1482,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { new DiscoveryNode("TS_TEST", "TS_TEST", service.boundAddress().publishAddress(), emptyMap(), emptySet(), version0); serviceA.close(); serviceA = buildService("TS_A", version0, null, - Settings.builder().put(TcpTransport.CONNECTION_HANDSHAKE.getKey(), false).build(), true); + Settings.EMPTY, true, false); serviceA.connectToNode(node); CountDownLatch latch = new CountDownLatch(1); @@ -1583,7 +1590,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { .put(TransportService.TRACE_LOG_EXCLUDE_SETTING.getKey(), "NOTHING") .build(), version0, - null); + null, true); DiscoveryNode nodeC = new DiscoveryNode("TS_C", "TS_C", serviceC.boundAddress().publishAddress(), emptyMap(), emptySet(), version0); serviceC.acceptIncomingRequests(); @@ -1786,7 +1793,7 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { TransportRequestOptions.Type.STATE); // connection with one connection and a large timeout -- should consume the one spot in the backlog queue try (TransportService service = buildService("TS_TPC", Version.CURRENT, null, - Settings.builder().put(TcpTransport.CONNECTION_HANDSHAKE.getKey(), false).build(), true)) { + Settings.EMPTY, true, false)) { service.connectToNode(first, builder.build()); builder.setConnectTimeout(TimeValue.timeValueMillis(1)); final ConnectionProfile profile = builder.build(); @@ -1805,11 +1812,23 @@ public abstract class AbstractSimpleTransportTestCase extends ESTestCase { public void testTcpHandshake() throws IOException, 
InterruptedException { assumeTrue("only tcp transport has a handshake method", serviceA.getOriginalTransport() instanceof TcpTransport); TcpTransport originalTransport = (TcpTransport) serviceA.getOriginalTransport(); - try (TransportService service = buildService("TS_TPC", Version.CURRENT, null, - Settings.builder().put(TcpTransport.CONNECTION_HANDSHAKE.getKey(), false).build(), true)) { + NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); + + try (MockTcpTransport transport = new MockTcpTransport(Settings.EMPTY, threadPool, BigArrays.NON_RECYCLING_INSTANCE, + new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(Settings.EMPTY, Collections.emptyList())){ + @Override + protected String handleRequest(MockChannel mockChannel, String profileName, StreamInput stream, long requestId, + int messageLengthBytes, Version version, InetSocketAddress remoteAddress, byte status) + throws IOException { + return super.handleRequest(mockChannel, profileName, stream, requestId, messageLengthBytes, version, remoteAddress, + (byte)(status & ~(1<<3))); // we flip the isHandshake bit back and act like the handler is not found + } + }) { + transport.transportServiceAdapter(serviceA.new Adapter()); + transport.start(); // this acts like a node that doesn't have support for handshakes DiscoveryNode node = - new DiscoveryNode("TS_TPC", "TS_TPC", service.boundAddress().publishAddress(), emptyMap(), emptySet(), version0); + new DiscoveryNode("TS_TPC", "TS_TPC", transport.boundAddress().publishAddress(), emptyMap(), emptySet(), version0); ConnectTransportException exception = expectThrows(ConnectTransportException.class, () -> serviceA.connectToNode(node)); assertTrue(exception.getCause() instanceof IllegalStateException); assertEquals("handshake failed", exception.getCause().getMessage()); diff --git a/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java b/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java index 7ed12d249a5..2cc84c4c0cd 100644 --- a/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java +++ b/test/framework/src/test/java/org/elasticsearch/transport/MockTcpTransportTests.java @@ -19,22 +19,35 @@ package org.elasticsearch.transport; import org.elasticsearch.Version; +import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.NamedWriteableRegistry; import org.elasticsearch.common.network.NetworkService; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.test.transport.MockTransportService; +import java.io.IOException; import java.util.Collections; public class MockTcpTransportTests extends AbstractSimpleTransportTestCase { @Override - protected MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings) { + protected MockTransportService build(Settings settings, Version version, ClusterSettings clusterSettings, boolean doHandshake) { NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(Collections.emptyList()); Transport transport = new MockTcpTransport(settings, threadPool, BigArrays.NON_RECYCLING_INSTANCE, - new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(settings, 
Collections.emptyList()), version); + new NoneCircuitBreakerService(), namedWriteableRegistry, new NetworkService(settings, Collections.emptyList()), version) { + @Override + protected Version executeHandshake(DiscoveryNode node, MockChannel mockChannel, TimeValue timeout) throws IOException, + InterruptedException { + if (doHandshake) { + return super.executeHandshake(node, mockChannel, timeout); + } else { + return version.minimumCompatibilityVersion(); + } + } + }; MockTransportService mockTransportService = new MockTransportService(Settings.EMPTY, transport, threadPool, TransportService.NOOP_TRANSPORT_INTERCEPTOR, clusterSettings); mockTransportService.start(); From 6327e35414c92fe911701b2d7163a330b7370e33 Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Mon, 19 Dec 2016 09:10:58 +0100 Subject: [PATCH 13/26] Change type of ingest doc meta-data field 'TIMESTAMP' to `Date` (#22234) With this commit we change the data type of the 'TIMESTAMP' meta-data field from a formatted date string to a plain `java.util.Date` instance. The main reason for this change is that our benchmarks have indicated that this contributes significantly to the time spent in the ingest pipeline. The overhead in terms of indexing throughput of the ingest pipeline is about 15% and breaks down roughly as follows: * 5% overhead caused by the conversion from `XContent` -> `Map` * 5% overhead caused by the timestamp formatting * 5% overhead caused by the conversion `Map` -> `XContent` Relates #22074 --- .../elasticsearch/ingest/IngestDocument.java | 10 +++------- .../ingest/IngestDocumentTests.java | 20 +++++++------------ docs/reference/migration/migrate_6_0.asciidoc | 3 +++ .../migration/migrate_6_0/ingest.asciidoc | 6 ++++++ 4 files changed, 19 insertions(+), 20 deletions(-) create mode 100644 docs/reference/migration/migrate_6_0/ingest.asciidoc diff --git a/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java b/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java index edb92b6e837..eaae1a3e881 100644 --- a/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java +++ b/core/src/main/java/org/elasticsearch/ingest/IngestDocument.java @@ -27,18 +27,14 @@ import org.elasticsearch.index.mapper.RoutingFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; import org.elasticsearch.index.mapper.TypeFieldMapper; -import java.text.DateFormat; -import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Arrays; import java.util.Base64; import java.util.Date; import java.util.HashMap; import java.util.List; -import java.util.Locale; import java.util.Map; import java.util.Objects; -import java.util.TimeZone; /** * Represents a single document being captured before indexing and holds the source and metadata (like id, type and index). 
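// Illustrative aside, not part of the patch: the hunk below replaces eager
// per-document timestamp formatting with a plain java.util.Date. Previously
// every pipeline execution paid for a SimpleDateFormat round trip:
//
//     DateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZZ", Locale.ROOT);
//     df.setTimeZone(TimeZone.getTimeZone("UTC"));
//     ingestMetadata.put(TIMESTAMP, df.format(new Date())); // format on every document
//
// whereas the new code simply stores the Date and leaves formatting to whoever
// eventually needs a string representation:
//
//     ingestMetadata.put(TIMESTAMP, new Date()); // store raw, format lazily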
@@ -68,9 +64,7 @@ public final class IngestDocument { } this.ingestMetadata = new HashMap<>(); - DateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZZ", Locale.ROOT); - df.setTimeZone(TimeZone.getTimeZone("UTC")); - this.ingestMetadata.put(TIMESTAMP, df.format(new Date())); + this.ingestMetadata.put(TIMESTAMP, new Date()); } /** @@ -595,6 +589,8 @@ public final class IngestDocument { value instanceof Long || value instanceof Float || value instanceof Double || value instanceof Boolean) { return value; + } else if (value instanceof Date) { + return ((Date) value).clone(); } else { throw new IllegalArgumentException("unexpected value type [" + value.getClass() + "]"); } diff --git a/core/src/test/java/org/elasticsearch/ingest/IngestDocumentTests.java b/core/src/test/java/org/elasticsearch/ingest/IngestDocumentTests.java index e16be95d2e6..1bd5676f474 100644 --- a/core/src/test/java/org/elasticsearch/ingest/IngestDocumentTests.java +++ b/core/src/test/java/org/elasticsearch/ingest/IngestDocumentTests.java @@ -22,21 +22,17 @@ package org.elasticsearch.ingest; import org.elasticsearch.test.ESTestCase; import org.junit.Before; -import java.text.DateFormat; -import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import java.util.Date; import java.util.HashMap; import java.util.List; -import java.util.Locale; import java.util.Map; import static org.elasticsearch.ingest.IngestDocumentMatcher.assertIngestDocument; import static org.hamcrest.Matchers.both; import static org.hamcrest.Matchers.containsString; -import static org.hamcrest.Matchers.endsWith; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThanOrEqualTo; import static org.hamcrest.Matchers.instanceOf; @@ -48,13 +44,14 @@ import static org.hamcrest.Matchers.sameInstance; public class IngestDocumentTests extends ESTestCase { + private static final Date BOGUS_TIMESTAMP = new Date(0L); private IngestDocument ingestDocument; @Before public void setIngestDocument() { Map document = new HashMap<>(); Map ingestMap = new HashMap<>(); - ingestMap.put("timestamp", "bogus_timestamp"); + ingestMap.put("timestamp", BOGUS_TIMESTAMP); document.put("_ingest", ingestMap); document.put("foo", "bar"); document.put("int", 123); @@ -86,9 +83,9 @@ public class IngestDocumentTests extends ESTestCase { assertThat(ingestDocument.getFieldValue("_index", String.class), equalTo("index")); assertThat(ingestDocument.getFieldValue("_type", String.class), equalTo("type")); assertThat(ingestDocument.getFieldValue("_id", String.class), equalTo("id")); - assertThat(ingestDocument.getFieldValue("_ingest.timestamp", String.class), - both(notNullValue()).and(not(equalTo("bogus_timestamp")))); - assertThat(ingestDocument.getFieldValue("_source._ingest.timestamp", String.class), equalTo("bogus_timestamp")); + assertThat(ingestDocument.getFieldValue("_ingest.timestamp", Date.class), + both(notNullValue()).and(not(equalTo(BOGUS_TIMESTAMP)))); + assertThat(ingestDocument.getFieldValue("_source._ingest.timestamp", Date.class), equalTo(BOGUS_TIMESTAMP)); } public void testGetSourceObject() { @@ -972,11 +969,8 @@ public class IngestDocumentTests extends ESTestCase { long before = System.currentTimeMillis(); IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random()); long after = System.currentTimeMillis(); - String timestampString = (String) ingestDocument.getIngestMetadata().get("timestamp"); - assertThat(timestampString, notNullValue()); - 
assertThat(timestampString, endsWith("+0000")); - DateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZZ", Locale.ROOT); - Date timestamp = df.parse(timestampString); + Date timestamp = (Date) ingestDocument.getIngestMetadata().get(IngestDocument.TIMESTAMP); + assertThat(timestamp, notNullValue()); assertThat(timestamp.getTime(), greaterThanOrEqualTo(before)); assertThat(timestamp.getTime(), lessThanOrEqualTo(after)); } diff --git a/docs/reference/migration/migrate_6_0.asciidoc b/docs/reference/migration/migrate_6_0.asciidoc index f1712d29cf9..abc476a7d1b 100644 --- a/docs/reference/migration/migrate_6_0.asciidoc +++ b/docs/reference/migration/migrate_6_0.asciidoc @@ -35,6 +35,7 @@ way to reindex old indices is to use the `reindex` API. * <> * <> * <> +* <> include::migrate_6_0/cat.asciidoc[] @@ -57,3 +58,5 @@ include::migrate_6_0/plugins.asciidoc[] include::migrate_6_0/indices.asciidoc[] include::migrate_6_0/scripting.asciidoc[] + +include::migrate_6_0/ingest.asciidoc[] diff --git a/docs/reference/migration/migrate_6_0/ingest.asciidoc b/docs/reference/migration/migrate_6_0/ingest.asciidoc new file mode 100644 index 00000000000..db2caabe43a --- /dev/null +++ b/docs/reference/migration/migrate_6_0/ingest.asciidoc @@ -0,0 +1,6 @@ +[[breaking_60_ingest_changes]] +=== Ingest changes + +==== Timestamp meta-data field type has changed + +The type of the "timestamp" meta-data field has changed from `java.lang.String` to `java.util.Date`. \ No newline at end of file From 3ce7b119d28e7d1be7284bd804606a65c2d1a383 Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Mon, 19 Dec 2016 09:29:47 +0100 Subject: [PATCH 14/26] Enable strict duplicate checks for all XContent types (#22225) With this commit we enable the Jackson feature 'STRICT_DUPLICATE_DETECTION' by default for all XContent types (not only JSON). We have also changed the name of the system property to disable this feature from `es.json.strict_duplicate_detection` to the now more appropriate name `es.xcontent.strict_duplicate_detection`. 
Relates elastic/elasticsearch#19614 Relates elastic/elasticsearch#22073 --- .../common/xcontent/XContent.java | 22 +++++++++++++++++ .../common/xcontent/cbor/CborXContent.java | 2 ++ .../common/xcontent/json/JsonXContent.java | 24 +------------------ .../common/xcontent/smile/SmileXContent.java | 2 ++ .../common/xcontent/yaml/YamlXContent.java | 2 ++ .../loader/JsonSettingsLoaderTests.java | 6 ++--- .../loader/YamlSettingsLoaderTests.java | 4 ++++ .../common/xcontent/BaseXContentTestCase.java | 18 ++++++++++++++ .../ConstructingObjectParserTests.java | 4 ++-- .../xcontent/json/JsonXContentTests.java | 10 -------- .../index/query/BoolQueryBuilderTests.java | 5 ++-- .../query/ConstantScoreQueryBuilderTests.java | 5 ++-- .../FunctionScoreQueryBuilderTests.java | 5 ++-- .../aggregations/AggregatorParsingTests.java | 9 +++---- .../phrase/DirectCandidateGeneratorTests.java | 3 ++- .../migration/migrate_6_0/rest.asciidoc | 6 ++--- ...tClientYamlTestFragmentParserTestCase.java | 9 ++++--- .../yaml/parser/DoSectionParserTests.java | 7 ++++++ ...entYamlSuiteRestApiParserFailingTests.java | 9 +++---- 19 files changed, 93 insertions(+), 59 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java index c73f5f19d25..72210f09d9b 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/XContent.java @@ -19,6 +19,7 @@ package org.elasticsearch.common.xcontent; +import org.elasticsearch.common.Booleans; import org.elasticsearch.common.bytes.BytesReference; import java.io.IOException; @@ -33,6 +34,27 @@ import java.util.Set; */ public interface XContent { + /* + * NOTE: This comment is only meant for maintainers of the Elasticsearch code base and is intentionally not a Javadoc comment as it + * describes an undocumented system property. + * + * + * Determines whether the XContent parser will always check for duplicate keys. This behavior is enabled by default but + * can be disabled by setting the otherwise undocumented system property "es.xcontent.strict_duplicate_detection" to "false". + * + * Before we enabled this mode, we had custom duplicate checks in various parts of the code base. As the user can still disable this + * mode and fall back to the legacy duplicate checks, we still need to keep the custom duplicate checks around and we also need to keep + * the tests around. + * + * If this fallback via system property is removed one day in the future you can remove all tests that call this method and also remove + * the corresponding custom duplicate check code. + * + */ + static boolean isStrictDuplicateDetectionEnabled() { + // Don't allow duplicate keys in any XContent type by default but let the user opt out + return Booleans.parseBooleanExact(System.getProperty("es.xcontent.strict_duplicate_detection", "true")); + } + /** * The type this content handles and produces. 
*/ diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java index 4224b5328a7..d79173cfc2b 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java @@ -21,6 +21,7 @@ package org.elasticsearch.common.xcontent.cbor; import com.fasterxml.jackson.core.JsonEncoding; import com.fasterxml.jackson.core.JsonGenerator; +import com.fasterxml.jackson.core.JsonParser; import com.fasterxml.jackson.dataformat.cbor.CBORFactory; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.bytes.BytesReference; @@ -54,6 +55,7 @@ public class CborXContent implements XContent { cborFactory.configure(CBORFactory.Feature.FAIL_ON_SYMBOL_HASH_OVERFLOW, false); // this trips on many mappings now... // Do not automatically close unclosed objects/arrays in com.fasterxml.jackson.dataformat.cbor.CBORGenerator#close() method cborFactory.configure(JsonGenerator.Feature.AUTO_CLOSE_JSON_CONTENT, false); + cborFactory.configure(JsonParser.Feature.STRICT_DUPLICATE_DETECTION, XContent.isStrictDuplicateDetectionEnabled()); cborXContent = new CborXContent(); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java index 4657a554099..1b0b351e6ef 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/json/JsonXContent.java @@ -23,7 +23,6 @@ import com.fasterxml.jackson.core.JsonEncoding; import com.fasterxml.jackson.core.JsonFactory; import com.fasterxml.jackson.core.JsonGenerator; import com.fasterxml.jackson.core.JsonParser; -import org.elasticsearch.common.Booleans; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.FastStringReader; import org.elasticsearch.common.xcontent.XContent; @@ -50,27 +49,6 @@ public class JsonXContent implements XContent { public static final JsonXContent jsonXContent; - /* - * NOTE: This comment is only meant for maintainers of the Elasticsearch code base and is intentionally not a Javadoc comment as it - * describes an undocumented system property. - * - * - * Determines whether the JSON parser will always check for duplicate keys in JSON content. This behavior is enabled by default but - * can be disabled by setting the otherwise undocumented system property "es.json.strict_duplicate_detection" to "false". - * - * Before we've enabled this mode, we had custom duplicate checks in various parts of the code base. As the user can still disable this - * mode and fall back to the legacy duplicate checks, we still need to keep the custom duplicate checks around and we also need to keep - * the tests around. - * - * If this fallback via system property is removed one day in the future you can remove all tests that call this method and also remove - * the corresponding custom duplicate check code. 
- * - */ - public static boolean isStrictDuplicateDetectionEnabled() { - // Don't allow duplicate keys in JSON content by default but let the user opt out - return Booleans.parseBooleanExact(System.getProperty("es.json.strict_duplicate_detection", "true")); - } - static { jsonFactory = new JsonFactory(); jsonFactory.configure(JsonGenerator.Feature.QUOTE_FIELD_NAMES, true); @@ -78,7 +56,7 @@ public class JsonXContent implements XContent { jsonFactory.configure(JsonFactory.Feature.FAIL_ON_SYMBOL_HASH_OVERFLOW, false); // this trips on many mappings now... // Do not automatically close unclosed objects/arrays in com.fasterxml.jackson.core.json.UTF8JsonGenerator#close() method jsonFactory.configure(JsonGenerator.Feature.AUTO_CLOSE_JSON_CONTENT, false); - jsonFactory.configure(JsonParser.Feature.STRICT_DUPLICATE_DETECTION, isStrictDuplicateDetectionEnabled()); + jsonFactory.configure(JsonParser.Feature.STRICT_DUPLICATE_DETECTION, XContent.isStrictDuplicateDetectionEnabled()); jsonXContent = new JsonXContent(); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java index 94ac9b94356..643326cd82f 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/smile/SmileXContent.java @@ -21,6 +21,7 @@ package org.elasticsearch.common.xcontent.smile; import com.fasterxml.jackson.core.JsonEncoding; import com.fasterxml.jackson.core.JsonGenerator; +import com.fasterxml.jackson.core.JsonParser; import com.fasterxml.jackson.dataformat.smile.SmileFactory; import com.fasterxml.jackson.dataformat.smile.SmileGenerator; import org.elasticsearch.common.bytes.BytesReference; @@ -55,6 +56,7 @@ public class SmileXContent implements XContent { smileFactory.configure(SmileFactory.Feature.FAIL_ON_SYMBOL_HASH_OVERFLOW, false); // this trips on many mappings now... 
// Do not automatically close unclosed objects/arrays in com.fasterxml.jackson.dataformat.smile.SmileGenerator#close() method smileFactory.configure(JsonGenerator.Feature.AUTO_CLOSE_JSON_CONTENT, false); + smileFactory.configure(JsonParser.Feature.STRICT_DUPLICATE_DETECTION, XContent.isStrictDuplicateDetectionEnabled()); smileXContent = new SmileXContent(); } diff --git a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java index 54da03118d7..7413f05f583 100644 --- a/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java +++ b/core/src/main/java/org/elasticsearch/common/xcontent/yaml/YamlXContent.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.xcontent.yaml; import com.fasterxml.jackson.core.JsonEncoding; +import com.fasterxml.jackson.core.JsonParser; import com.fasterxml.jackson.dataformat.yaml.YAMLFactory; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.bytes.BytesReference; @@ -50,6 +51,7 @@ public class YamlXContent implements XContent { static { yamlFactory = new YAMLFactory(); + yamlFactory.configure(JsonParser.Feature.STRICT_DUPLICATE_DETECTION, XContent.isStrictDuplicateDetectionEnabled()); yamlXContent = new YamlXContent(); } diff --git a/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java b/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java index 70439ef8bf9..d917c0d12c0 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/loader/JsonSettingsLoaderTests.java @@ -22,7 +22,7 @@ package org.elasticsearch.common.settings.loader; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsException; -import org.elasticsearch.common.xcontent.json.JsonXContent; +import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.test.ESTestCase; import static org.hamcrest.CoreMatchers.containsString; @@ -49,8 +49,8 @@ public class JsonSettingsLoaderTests extends ESTestCase { } public void testDuplicateKeysThrowsException() { - assumeFalse("Test only makes sense if JSON parser doesn't have strict duplicate checks enabled", - JsonXContent.isStrictDuplicateDetectionEnabled()); + assumeFalse("Test only makes sense if XContent parser doesn't have strict duplicate checks enabled", + XContent.isStrictDuplicateDetectionEnabled()); final String json = "{\"foo\":\"bar\",\"foo\":\"baz\"}"; final SettingsException e = expectThrows(SettingsException.class, () -> Settings.builder().loadFromSource(json).build()); assertEquals(e.getCause().getClass(), ElasticsearchParseException.class); diff --git a/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java b/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java index 7c956de8f9a..eeb2df7e1d6 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/loader/YamlSettingsLoaderTests.java @@ -22,6 +22,7 @@ package org.elasticsearch.common.settings.loader; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsException; +import 
org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.test.ESTestCase; import java.nio.charset.StandardCharsets; @@ -68,6 +69,9 @@ public class YamlSettingsLoaderTests extends ESTestCase { } public void testDuplicateKeysThrowsException() { + assumeFalse("Test only makes sense if XContent parser doesn't have strict duplicate checks enabled", + XContent.isStrictDuplicateDetectionEnabled()); + String yaml = "foo: bar\nfoo: baz"; SettingsException e = expectThrows(SettingsException.class, () -> { Settings.builder().loadFromSource(yaml); diff --git a/core/src/test/java/org/elasticsearch/common/xcontent/BaseXContentTestCase.java b/core/src/test/java/org/elasticsearch/common/xcontent/BaseXContentTestCase.java index 57e9e0d5216..1c0a92dbd74 100644 --- a/core/src/test/java/org/elasticsearch/common/xcontent/BaseXContentTestCase.java +++ b/core/src/test/java/org/elasticsearch/common/xcontent/BaseXContentTestCase.java @@ -22,6 +22,7 @@ package org.elasticsearch.common.xcontent; import com.fasterxml.jackson.core.JsonGenerationException; import com.fasterxml.jackson.core.JsonGenerator; +import com.fasterxml.jackson.core.JsonParseException; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.Constants; import org.elasticsearch.cluster.metadata.IndexMetaData; @@ -66,6 +67,7 @@ import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.instanceOf; import static org.hamcrest.Matchers.notNullValue; +import static org.hamcrest.Matchers.startsWith; public abstract class BaseXContentTestCase extends ESTestCase { @@ -984,6 +986,22 @@ public abstract class BaseXContentTestCase extends ESTestCase { assertThat(e.getMessage(), containsString("Object has already been built and is self-referencing itself")); } + public void testChecksForDuplicates() throws Exception { + assumeTrue("Test only makes sense if XContent parser has strict duplicate checks enabled", + XContent.isStrictDuplicateDetectionEnabled()); + + BytesReference bytes = builder() + .startObject() + .field("key", 1) + .field("key", 2) + .endObject() + .bytes(); + + JsonParseException pex = expectThrows(JsonParseException.class, () -> createParser(xcontentType().xContent(), bytes).map()); + assertThat(pex.getMessage(), startsWith("Duplicate field 'key'")); + } + + private static void expectUnclosedException(ThrowingRunnable runnable) { IllegalStateException e = expectThrows(IllegalStateException.class, runnable); assertThat(e.getMessage(), containsString("Failed to close the XContentBuilder")); diff --git a/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java b/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java index 387efdb5e50..6cae59391c5 100644 --- a/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java +++ b/core/src/test/java/org/elasticsearch/common/xcontent/ConstructingObjectParserTests.java @@ -169,8 +169,8 @@ public class ConstructingObjectParserTests extends ESTestCase { } public void testRepeatedConstructorParam() throws IOException { - assumeFalse("Test only makes sense if JSON parser doesn't have strict duplicate checks enabled", - JsonXContent.isStrictDuplicateDetectionEnabled()); + assumeFalse("Test only makes sense if XContent parser doesn't have strict duplicate checks enabled", + XContent.isStrictDuplicateDetectionEnabled()); XContentParser parser = createParser(JsonXContent.jsonXContent, "{\n" + " \"vegetable\": 1,\n" diff 
--git a/core/src/test/java/org/elasticsearch/common/xcontent/json/JsonXContentTests.java b/core/src/test/java/org/elasticsearch/common/xcontent/json/JsonXContentTests.java index b4549830432..4a79ddb4ec6 100644 --- a/core/src/test/java/org/elasticsearch/common/xcontent/json/JsonXContentTests.java +++ b/core/src/test/java/org/elasticsearch/common/xcontent/json/JsonXContentTests.java @@ -22,7 +22,6 @@ package org.elasticsearch.common.xcontent.json; import com.fasterxml.jackson.core.JsonFactory; import com.fasterxml.jackson.core.JsonGenerator; -import com.fasterxml.jackson.core.JsonParseException; import org.elasticsearch.common.xcontent.BaseXContentTestCase; import org.elasticsearch.common.xcontent.XContentType; @@ -40,13 +39,4 @@ public class JsonXContentTests extends BaseXContentTestCase { JsonGenerator generator = new JsonFactory().createGenerator(os); doTestBigInteger(generator, os); } - - public void testChecksForDuplicates() throws Exception { - assumeTrue("Test only makes sense if JSON parser doesn't have strict duplicate checks enabled", - JsonXContent.isStrictDuplicateDetectionEnabled()); - - JsonParseException pex = expectThrows(JsonParseException.class, - () -> XContentType.JSON.xContent().createParser("{ \"key\": 1, \"key\": 2 }").map()); - assertEquals("Duplicate field 'key'", pex.getMessage()); - } } diff --git a/core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java b/core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java index 8e04ec7f553..13854ffc1e2 100644 --- a/core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java +++ b/core/src/test/java/org/elasticsearch/index/query/BoolQueryBuilderTests.java @@ -26,6 +26,7 @@ import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.ParsingException; +import org.elasticsearch.common.xcontent.XContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; @@ -340,8 +341,8 @@ public class BoolQueryBuilderTests extends AbstractQueryTestCase Date: Mon, 19 Dec 2016 09:54:27 +0100 Subject: [PATCH 15/26] The `_all` default mapper is not completely configured. (#22236) In some cases, it might happen that the `_all` field gets a field type that is not totally configured, and in particular lacks analyzers. This is due to the fact that `AllFieldMapper.TypeParser.getDefault` uses `Defaults.FIELD_TYPE` as a default field type, which does not have any analyzers configured since it does not know about the default analyzers. 
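The fix changes `MetadataFieldMapper.TypeParser.getDefault` to take the parser context instead of raw index settings, so that a missing field type can be built through the regular `parse` path, where the default analyzers get wired in. The `AllFieldMapper` variant from the diff below illustrates the pattern (comments added here for illustration):

```java
@Override
public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) {
    final Settings indexSettings = context.mapperService().getIndexSettings().getSettings();
    if (fieldType != null) {
        // an _all field type already exists on this index: reuse it as-is
        return new AllFieldMapper(indexSettings, fieldType);
    } else {
        // first type on this index: build a fully configured default through the
        // regular parse path, which sets the default index and search analyzers
        return parse(NAME, Collections.emptyMap(), context)
            .build(new BuilderContext(indexSettings, new ContentPath(1)));
    }
}
```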
--- .../index/mapper/AllFieldMapper.java | 15 +++++++++++---- .../index/mapper/DocumentMapper.java | 3 ++- .../index/mapper/FieldNamesFieldMapper.java | 15 +++++++++++---- .../elasticsearch/index/mapper/IdFieldMapper.java | 4 +++- .../index/mapper/IndexFieldMapper.java | 4 ++-- .../index/mapper/MetadataFieldMapper.java | 5 ++--- .../index/mapper/ParentFieldMapper.java | 5 +++-- .../index/mapper/RoutingFieldMapper.java | 15 +++++++++++---- .../index/mapper/SeqNoFieldMapper.java | 4 +++- .../index/mapper/SourceFieldMapper.java | 6 +++--- .../index/mapper/TypeFieldMapper.java | 5 +++-- .../index/mapper/UidFieldMapper.java | 3 ++- .../index/mapper/VersionFieldMapper.java | 3 ++- .../index/mapper/AllFieldMapperTests.java | 11 +++++++++++ .../index/mapper/ExternalMetadataMapper.java | 4 +++- .../index/mapper/FieldNamesFieldMapperTests.java | 3 ++- .../elasticsearch/indices/IndicesModuleTests.java | 4 +--- .../index/mapper/size/SizeFieldMapper.java | 3 ++- 18 files changed, 77 insertions(+), 35 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java index 49b8fb085d6..a0f7ad57d04 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/AllFieldMapper.java @@ -34,6 +34,7 @@ import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.similarity.SimilarityService; import java.io.IOException; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -100,7 +101,7 @@ public class AllFieldMapper extends MetadataFieldMapper { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); builder.fieldType().setIndexAnalyzer(parserContext.getIndexAnalyzers().getDefaultIndexAnalyzer()); @@ -141,8 +142,14 @@ public class AllFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { - return new AllFieldMapper(indexSettings, fieldType); + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + if (fieldType != null) { + return new AllFieldMapper(indexSettings, fieldType); + } else { + return parse(NAME, Collections.emptyMap(), context) + .build(new BuilderContext(indexSettings, new ContentPath(1))); + } } } @@ -179,7 +186,7 @@ public class AllFieldMapper extends MetadataFieldMapper { private EnabledAttributeMapper enabledState; private AllFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? 
Defaults.FIELD_TYPE.clone() : existing.clone(), Defaults.ENABLED, indexSettings); + this(existing.clone(), Defaults.ENABLED, indexSettings); } private AllFieldMapper(MappedFieldType fieldType, EnabledAttributeMapper enabled, Settings indexSettings) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java index fbe82a70bd7..fdc45530b99 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java @@ -74,7 +74,8 @@ public class DocumentMapper implements ToXContent { final MetadataFieldMapper metadataMapper; if (existingMetadataMapper == null) { final TypeParser parser = entry.getValue(); - metadataMapper = parser.getDefault(indexSettings, mapperService.fullName(name), builder.name()); + metadataMapper = parser.getDefault(mapperService.fullName(name), + mapperService.documentMapperParser().parserContext(builder.name())); } else { metadataMapper = existingMetadataMapper; } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java index 1b4a97a4660..764586562d2 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/FieldNamesFieldMapper.java @@ -30,6 +30,7 @@ import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; import java.util.ArrayList; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -98,7 +99,7 @@ public class FieldNamesFieldMapper extends MetadataFieldMapper { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { @@ -114,8 +115,14 @@ public class FieldNamesFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { - return new FieldNamesFieldMapper(indexSettings, fieldType); + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + if (fieldType != null) { + return new FieldNamesFieldMapper(indexSettings, fieldType); + } else { + return parse(NAME, Collections.emptyMap(), context) + .build(new BuilderContext(indexSettings, new ContentPath(1))); + } } } @@ -183,7 +190,7 @@ public class FieldNamesFieldMapper extends MetadataFieldMapper { } private FieldNamesFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? 
Defaults.FIELD_TYPE.clone() : existing.clone(), indexSettings); + this(existing.clone(), indexSettings); } private FieldNamesFieldMapper(MappedFieldType fieldType, Settings indexSettings) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java index 1b208421a8e..4f74278014b 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IdFieldMapper.java @@ -37,6 +37,7 @@ import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.iterable.Iterables; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.mapper.Mapper.TypeParser.ParserContext; import org.elasticsearch.index.query.QueryShardContext; import java.io.IOException; @@ -79,7 +80,8 @@ public class IdFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new IdFieldMapper(indexSettings, fieldType); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java index 72a7244976d..164f18075a4 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/IndexFieldMapper.java @@ -19,7 +19,6 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; import org.apache.lucene.search.Query; @@ -85,7 +84,8 @@ public class IndexFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new IndexFieldMapper(indexSettings, fieldType); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java index 07a4b3b9a51..ec84631e041 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MetadataFieldMapper.java @@ -39,14 +39,13 @@ public abstract class MetadataFieldMapper extends FieldMapper { * Get the default {@link MetadataFieldMapper} to use, if nothing had to be parsed. 
* @param fieldType null if this is the first root mapper on this index, the existing * fieldType for this index otherwise - * @param indexSettings the index-level settings * @param fieldType the existing field type for this meta mapper on the current index * or null if this is the first type being introduced - * @param typeName the name of the type that this mapper will be used on + * @param parserContext context that may be useful to build the field like analyzers */ // TODO: remove the fieldType parameter which is only used for bw compat with pre-2.0 // since settings could be modified - MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName); + MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext parserContext); } public abstract static class Builder extends FieldMapper.Builder { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java index 677da0e5f4e..d3f9eb22b72 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/ParentFieldMapper.java @@ -18,7 +18,6 @@ */ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.SortedDocValuesField; import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexOptions; @@ -131,7 +130,9 @@ public class ParentFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + final String typeName = context.type(); KeywordFieldMapper parentJoinField = createParentJoinFieldMapper(typeName, new BuilderContext(indexSettings, new ContentPath(0))); MappedFieldType childJoinFieldType = new ParentFieldType(Defaults.FIELD_TYPE, typeName); childJoinFieldType.setName(ParentFieldMapper.NAME); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java index fe2575b9345..8640bfaa35b 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/RoutingFieldMapper.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -78,7 +79,7 @@ public class RoutingFieldMapper extends MetadataFieldMapper { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(parserContext.mapperService().fullName(NAME)); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { Map.Entry entry = iterator.next(); @@ -93,8 +94,14 @@ public class RoutingFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, 
MappedFieldType fieldType, String typeName) { - return new RoutingFieldMapper(indexSettings, fieldType); + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); + if (fieldType != null) { + return new RoutingFieldMapper(indexSettings, fieldType); + } else { + return parse(NAME, Collections.emptyMap(), context) + .build(new BuilderContext(indexSettings, new ContentPath(1))); + } } } @@ -121,7 +128,7 @@ public class RoutingFieldMapper extends MetadataFieldMapper { private boolean required; private RoutingFieldMapper(Settings indexSettings, MappedFieldType existing) { - this(existing == null ? Defaults.FIELD_TYPE.clone() : existing.clone(), Defaults.REQUIRED, indexSettings); + this(existing.clone(), Defaults.REQUIRED, indexSettings); } private RoutingFieldMapper(MappedFieldType fieldType, boolean required, Settings indexSettings) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java index 5820519af7f..e38a2aab840 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/SeqNoFieldMapper.java @@ -48,6 +48,7 @@ import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.MetadataFieldMapper; import org.elasticsearch.index.mapper.ParseContext; +import org.elasticsearch.index.mapper.Mapper.TypeParser.ParserContext; import org.elasticsearch.index.mapper.ParseContext.Document; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.QueryShardException; @@ -136,7 +137,8 @@ public class SeqNoFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new SeqNoFieldMapper(indexSettings); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java index b52d1262796..efddec90669 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/SourceFieldMapper.java @@ -19,7 +19,6 @@ package org.elasticsearch.index.mapper; -import org.apache.lucene.document.Field; import org.apache.lucene.document.StoredField; import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.IndexableField; @@ -109,7 +108,7 @@ public class SourceFieldMapper extends MetadataFieldMapper { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { Builder builder = new Builder(); for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { @@ -144,7 +143,8 @@ public class SourceFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String 
typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new SourceFieldMapper(indexSettings); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java index 551208c797e..67775283d5a 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/TypeFieldMapper.java @@ -76,12 +76,13 @@ public class TypeFieldMapper extends MetadataFieldMapper { public static class TypeParser implements MetadataFieldMapper.TypeParser { @Override - public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { + public MetadataFieldMapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { throw new MapperParsingException(NAME + " is not configurable"); } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new TypeFieldMapper(indexSettings, fieldType); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java index c0515b18bcc..bb98f44ed1c 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/UidFieldMapper.java @@ -69,7 +69,8 @@ public class UidFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new UidFieldMapper(indexSettings, fieldType); } } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java index fb686d7781b..ce3b2c4b8e8 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/VersionFieldMapper.java @@ -62,7 +62,8 @@ public class VersionFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new VersionFieldMapper(indexSettings); } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/AllFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/AllFieldMapperTests.java index 091aa6003c9..d7def1db8cf 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/AllFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/AllFieldMapperTests.java @@ -30,6 +30,7 @@ import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.io.stream.BytesStreamOutput; import 
org.elasticsearch.common.lucene.all.AllTermQuery; import org.elasticsearch.common.lucene.all.AllTokenStream; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; @@ -476,4 +477,14 @@ public class AllFieldMapperTests extends ESSingleNodeTestCase { e.getMessage().contains("Field [_all] is a metadata field and cannot be added inside a document")); } } + + public void testAllDefaults() { + // We use to have a bug with the default mapping having null analyzers because + // it was not fully constructed and was in particular lacking analyzers + IndexService index = createIndex("index", Settings.EMPTY, "type"); + AllFieldMapper all = index.mapperService().documentMapper("type").allFieldMapper(); + assertNotNull(all.fieldType().indexAnalyzer()); + assertNotNull(all.fieldType().searchAnalyzer()); + assertNotNull(all.fieldType().searchQuoteAnalyzer()); + } } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/ExternalMetadataMapper.java b/core/src/test/java/org/elasticsearch/index/mapper/ExternalMetadataMapper.java index 234e4fa312b..9f815a7fdea 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/ExternalMetadataMapper.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/ExternalMetadataMapper.java @@ -31,6 +31,7 @@ import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.MetadataFieldMapper; import org.elasticsearch.index.mapper.ParseContext; +import org.elasticsearch.index.mapper.Mapper.TypeParser.ParserContext; import java.io.IOException; import java.util.Collections; @@ -104,7 +105,8 @@ public class ExternalMetadataMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new ExternalMetadataMapper(indexSettings); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java index 8292970d38c..7de66511f59 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/FieldNamesFieldMapperTests.java @@ -166,7 +166,8 @@ public class FieldNamesFieldMapperTests extends ESSingleNodeTestCase { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new DummyMetadataFieldMapper(indexSettings); } diff --git a/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java b/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java index 04b60b80c0c..298bb57c499 100644 --- a/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java +++ b/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java @@ -25,8 +25,6 @@ import java.util.List; import java.util.Map; import java.util.stream.Collectors; -import 
org.elasticsearch.common.io.stream.NamedWriteableRegistry; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.mapper.FieldNamesFieldMapper; import org.elasticsearch.index.mapper.IdFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; @@ -56,7 +54,7 @@ public class IndicesModuleTests extends ESTestCase { return null; } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { return null; } } diff --git a/plugins/mapper-size/src/main/java/org/elasticsearch/index/mapper/size/SizeFieldMapper.java b/plugins/mapper-size/src/main/java/org/elasticsearch/index/mapper/size/SizeFieldMapper.java index a12d5be1fde..c2a7e79ff2b 100644 --- a/plugins/mapper-size/src/main/java/org/elasticsearch/index/mapper/size/SizeFieldMapper.java +++ b/plugins/mapper-size/src/main/java/org/elasticsearch/index/mapper/size/SizeFieldMapper.java @@ -111,7 +111,8 @@ public class SizeFieldMapper extends MetadataFieldMapper { } @Override - public MetadataFieldMapper getDefault(Settings indexSettings, MappedFieldType fieldType, String typeName) { + public MetadataFieldMapper getDefault(MappedFieldType fieldType, ParserContext context) { + final Settings indexSettings = context.mapperService().getIndexSettings().getSettings(); return new SizeFieldMapper(indexSettings, fieldType); } } From 1ed2e18ded46a4743e43c1b70b18ca359422e3a1 Mon Sep 17 00:00:00 2001 From: Adrien Grand Date: Mon, 19 Dec 2016 09:55:13 +0100 Subject: [PATCH 16/26] Fix MapperService.allEnabled(). (#22227) It returns whether the last merged mapping has `_all` enabled rather than whether any of the types has `_all` enabled. --- .../index/mapper/MapperService.java | 6 +++-- .../index/mapper/MapperServiceTests.java | 27 +++++++++++++++++++ 2 files changed, 31 insertions(+), 2 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java index d848ce15331..1e3f96fbe2c 100755 --- a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java @@ -153,7 +153,7 @@ public class MapperService extends AbstractIndexComponent implements Closeable { } /** - * Returns true if the "_all" field is enabled for the type + * Returns true if the "_all" field is enabled on any type. 
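+ * Note that the flag is cumulative: since types cannot be removed and an existing _all field cannot be disabled, it stays true once any type has enabled it.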
*/ public boolean allEnabled() { return this.allEnabled; @@ -377,7 +377,9 @@ public class MapperService extends AbstractIndexComponent implements Closeable { this.hasNested = hasNested; this.fullPathObjectMappers = fullPathObjectMappers; this.parentTypes = parentTypes; - this.allEnabled = mapper.allFieldMapper().enabled(); + // this is only correct because types cannot be removed and we do not + // allow to disable an existing _all field + this.allEnabled |= mapper.allFieldMapper().enabled(); assert assertSerialization(newMapper); assert assertMappersShareSameFieldType(); diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java b/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java index 87afdedf89d..b32339b2357 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java @@ -215,4 +215,31 @@ public class MapperServiceTests extends ESSingleNodeTestCase { indexService.mapperService().merge("type3", normsDisabledMapping, MergeReason.MAPPING_UPDATE, true); assertNotSame(indexService.mapperService().documentMapper("type1"), documentMapper); } + + public void testAllEnabled() throws Exception { + IndexService indexService = createIndex("test"); + assertFalse(indexService.mapperService().allEnabled()); + + CompressedXContent enabledAll = new CompressedXContent(XContentFactory.jsonBuilder().startObject() + .startObject("_all") + .field("enabled", true) + .endObject().endObject().bytes()); + + CompressedXContent disabledAll = new CompressedXContent(XContentFactory.jsonBuilder().startObject() + .startObject("_all") + .field("enabled", false) + .endObject().endObject().bytes()); + + indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, enabledAll, + MergeReason.MAPPING_UPDATE, random().nextBoolean()); + assertFalse(indexService.mapperService().allEnabled()); // _default_ does not count + + indexService.mapperService().merge("some_type", enabledAll, + MergeReason.MAPPING_UPDATE, random().nextBoolean()); + assertTrue(indexService.mapperService().allEnabled()); + + indexService.mapperService().merge("other_type", disabledAll, + MergeReason.MAPPING_UPDATE, random().nextBoolean()); + assertTrue(indexService.mapperService().allEnabled()); // this returns true if any of the types has _all enabled + } } From e38f06cdc658efb74bb0e004047a67b96870a34c Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Mon, 19 Dec 2016 10:02:11 +0100 Subject: [PATCH 17/26] Update Gradle shadow plugin for microbenchmarks to 1.2.4 --- benchmarks/build.gradle | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/benchmarks/build.gradle b/benchmarks/build.gradle index 3b8b92328e1..7546e6d7e1c 100644 --- a/benchmarks/build.gradle +++ b/benchmarks/build.gradle @@ -24,7 +24,7 @@ buildscript { } } dependencies { - classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3' + classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.4' } } From b2aaeb56f3d00c934cf85cab27ef649350c6e8ad Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Mon, 19 Dec 2016 10:02:28 +0100 Subject: [PATCH 18/26] Update JMH to 1.17.3 --- benchmarks/build.gradle | 5 ++--- buildSrc/version.properties | 2 +- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/benchmarks/build.gradle b/benchmarks/build.gradle index 7546e6d7e1c..36732215d43 100644 --- a/benchmarks/build.gradle +++ b/benchmarks/build.gradle @@ -44,9 +44,8 @@ task test(type: Test, overwrite: true) 
dependencies { compile("org.elasticsearch:elasticsearch:${version}") { - // JMH ships with the conflicting version 4.6 (JMH will not update this dependency as it is Java 6 compatible and joptsimple is one - // of the most recent compatible version). This prevents us from using jopt-simple in benchmarks (which should be ok) but allows us - // to invoke the JMH uberjar as usual. + // JMH ships with the conflicting version 4.6. This prevents us from using jopt-simple in benchmarks (which should be ok) but allows + // us to invoke the JMH uberjar as usual. exclude group: 'net.sf.jopt-simple', module: 'jopt-simple' } compile "org.openjdk.jmh:jmh-core:$versions.jmh" diff --git a/buildSrc/version.properties b/buildSrc/version.properties index 146aafceb7f..44835f7227c 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -21,4 +21,4 @@ commonscodec = 1.10 hamcrest = 1.3 securemock = 1.2 # benchmark dependencies -jmh = 1.15 +jmh = 1.17.3 From 655a95a2bb8ddd13989f88c443297c6574ea6400 Mon Sep 17 00:00:00 2001 From: Daniel Mitterdorfer Date: Mon, 19 Dec 2016 10:06:12 +0100 Subject: [PATCH 19/26] Cache results of geoip lookups (#22231) With this commit, we introduce a cache to the geoip ingest processor. The cache is enabled by default and caches the 1000 most recent items. The cache size is controlled by the setting `ingest.geoip.cache_size`. Closes #22074 --- docs/plugins/ingest-geoip.asciidoc | 11 ++++ .../ingest/geoip/GeoIpCache.java | 46 +++++++++++++++++ .../ingest/geoip/IngestGeoIpPlugin.java | 27 ++++++++-- .../ingest/geoip/GeoIpCacheTests.java | 51 +++++++++++++++++++ .../geoip/GeoIpProcessorFactoryTests.java | 6 ++- 5 files changed, 137 insertions(+), 4 deletions(-) create mode 100644 plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/GeoIpCache.java create mode 100644 plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpCacheTests.java diff --git a/docs/plugins/ingest-geoip.asciidoc b/docs/plugins/ingest-geoip.asciidoc index 0481ad40ab6..95e7a0442a4 100644 --- a/docs/plugins/ingest-geoip.asciidoc +++ b/docs/plugins/ingest-geoip.asciidoc @@ -203,3 +203,14 @@ Which returns: } -------------------------------------------------- // TESTRESPONSE + +[[ingest-geoip-settings]] +===== Node Settings + +The geoip processor supports the following setting: + +`ingest.geoip.cache_size`:: + + The maximum number of results that should be cached. Defaults to `1000`. + +Note that these settings are node settings and apply to all geoip processors, i.e. there is one cache for all defined geoip processors. \ No newline at end of file diff --git a/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/GeoIpCache.java b/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/GeoIpCache.java new file mode 100644 index 00000000000..83a3374b504 --- /dev/null +++ b/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/GeoIpCache.java @@ -0,0 +1,46 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.ingest.geoip; + +import com.fasterxml.jackson.databind.JsonNode; +import com.maxmind.db.NodeCache; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.common.cache.Cache; +import org.elasticsearch.common.cache.CacheBuilder; + +import java.io.IOException; +import java.util.concurrent.ExecutionException; + +final class GeoIpCache implements NodeCache { + private final Cache cache; + + GeoIpCache(long maxSize) { + this.cache = CacheBuilder.builder().setMaximumWeight(maxSize).build(); + } + + @Override + public JsonNode get(int key, Loader loader) throws IOException { + try { + return cache.computeIfAbsent(key, loader::load); + } catch (ExecutionException e) { + Throwable cause = e.getCause() != null ? e.getCause() : e; + throw new ElasticsearchException(cause); + } + } +} diff --git a/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IngestGeoIpPlugin.java b/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IngestGeoIpPlugin.java index 6d5af71aa5b..4e5cc5c0237 100644 --- a/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IngestGeoIpPlugin.java +++ b/plugins/ingest-geoip/src/main/java/org/elasticsearch/ingest/geoip/IngestGeoIpPlugin.java @@ -26,38 +26,57 @@ import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.PathMatcher; import java.nio.file.StandardOpenOption; +import java.util.Arrays; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; +import java.util.List; import java.util.Map; import java.util.stream.Stream; import java.util.zip.GZIPInputStream; +import com.maxmind.db.NoCache; +import com.maxmind.db.NodeCache; import com.maxmind.geoip2.DatabaseReader; import org.apache.lucene.util.IOUtils; +import org.elasticsearch.common.settings.Setting; import org.elasticsearch.ingest.Processor; import org.elasticsearch.plugins.IngestPlugin; import org.elasticsearch.plugins.Plugin; public class IngestGeoIpPlugin extends Plugin implements IngestPlugin, Closeable { + public static final Setting CACHE_SIZE = + Setting.longSetting("ingest.geoip.cache_size", 1000, 0, Setting.Property.NodeScope); private Map databaseReaders; + @Override + public List> getSettings() { + return Arrays.asList(CACHE_SIZE); + } + @Override public Map getProcessors(Processor.Parameters parameters) { if (databaseReaders != null) { throw new IllegalStateException("getProcessors called twice for geoip plugin!!"); } Path geoIpConfigDirectory = parameters.env.configFile().resolve("ingest-geoip"); + NodeCache cache; + long cacheSize = CACHE_SIZE.get(parameters.env.settings()); + if (cacheSize > 0) { + cache = new GeoIpCache(cacheSize); + } else { + cache = NoCache.getInstance(); + } try { - databaseReaders = loadDatabaseReaders(geoIpConfigDirectory); + databaseReaders = loadDatabaseReaders(geoIpConfigDirectory, cache); } catch (IOException e) { throw new RuntimeException(e); } return Collections.singletonMap(GeoIpProcessor.TYPE, new GeoIpProcessor.Factory(databaseReaders)); } - static Map loadDatabaseReaders(Path geoIpConfigDirectory) throws IOException { + 
static Map loadDatabaseReaders(Path geoIpConfigDirectory, NodeCache cache) throws IOException { if (Files.exists(geoIpConfigDirectory) == false && Files.isDirectory(geoIpConfigDirectory)) { throw new IllegalStateException("the geoip directory [" + geoIpConfigDirectory + "] containing databases doesn't exist"); } @@ -71,7 +90,8 @@ public class IngestGeoIpPlugin extends Plugin implements IngestPlugin, Closeable Path databasePath = iterator.next(); if (Files.isRegularFile(databasePath) && pathMatcher.matches(databasePath)) { try (InputStream inputStream = new GZIPInputStream(Files.newInputStream(databasePath, StandardOpenOption.READ))) { - databaseReaders.put(databasePath.getFileName().toString(), new DatabaseReader.Builder(inputStream).build()); + databaseReaders.put(databasePath.getFileName().toString(), + new DatabaseReader.Builder(inputStream).withCache(cache).build()); } } } @@ -85,4 +105,5 @@ public class IngestGeoIpPlugin extends Plugin implements IngestPlugin, Closeable IOUtils.close(databaseReaders.values()); } } + } diff --git a/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpCacheTests.java b/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpCacheTests.java new file mode 100644 index 00000000000..71cab99115f --- /dev/null +++ b/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpCacheTests.java @@ -0,0 +1,51 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.ingest.geoip; + +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.node.IntNode; +import com.maxmind.db.NodeCache; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.test.ESTestCase; + +public class GeoIpCacheTests extends ESTestCase { + public void testCachesAndEvictsResults() throws Exception { + GeoIpCache cache = new GeoIpCache(1); + final NodeCache.Loader loader = key -> new IntNode(key); + + JsonNode jsonNode1 = cache.get(1, loader); + assertSame(jsonNode1, cache.get(1, loader)); + + // evict old key by adding another value + cache.get(2, loader); + + assertNotSame(jsonNode1, cache.get(1, loader)); + } + + public void testThrowsElasticsearchException() throws Exception { + GeoIpCache cache = new GeoIpCache(1); + NodeCache.Loader loader = (int key) -> { + throw new IllegalArgumentException("Illegal key"); + }; + ElasticsearchException ex = expectThrows(ElasticsearchException.class, () -> cache.get(1, loader)); + assertTrue("Expected cause to be of type IllegalArgumentException but was [" + ex.getCause().getClass() + "]", + ex.getCause() instanceof IllegalArgumentException); + assertEquals("Illegal key", ex.getCause().getMessage()); + } +} diff --git a/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java b/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java index ec4db09cd96..162137b5f3c 100644 --- a/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java +++ b/plugins/ingest-geoip/src/test/java/org/elasticsearch/ingest/geoip/GeoIpProcessorFactoryTests.java @@ -20,6 +20,8 @@ package org.elasticsearch.ingest.geoip; import com.carrotsearch.randomizedtesting.generators.RandomPicks; +import com.maxmind.db.NoCache; +import com.maxmind.db.NodeCache; import com.maxmind.geoip2.DatabaseReader; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.Randomness; @@ -57,7 +59,9 @@ public class GeoIpProcessorFactoryTests extends ESTestCase { geoIpConfigDir.resolve("GeoLite2-City.mmdb.gz")); Files.copy(new ByteArrayInputStream(StreamsUtils.copyToBytesFromClasspath("/GeoLite2-Country.mmdb.gz")), geoIpConfigDir.resolve("GeoLite2-Country.mmdb.gz")); - databaseReaders = IngestGeoIpPlugin.loadDatabaseReaders(geoIpConfigDir); + + NodeCache cache = randomFrom(NoCache.getInstance(), new GeoIpCache(randomPositiveLong())); + databaseReaders = IngestGeoIpPlugin.loadDatabaseReaders(geoIpConfigDir, cache); } @AfterClass From f96769f97b6caa6d394f04805062cf4273ad5747 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Mon, 19 Dec 2016 10:08:58 +0100 Subject: [PATCH 20/26] Update painless-syntax.asciidoc Fix asciidoc syntax --- docs/reference/modules/scripting/painless-syntax.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/reference/modules/scripting/painless-syntax.asciidoc b/docs/reference/modules/scripting/painless-syntax.asciidoc index 10d6d501412..e3a6ed24bc0 100644 --- a/docs/reference/modules/scripting/painless-syntax.asciidoc +++ b/docs/reference/modules/scripting/painless-syntax.asciidoc @@ -185,9 +185,9 @@ doesn't have a `foo.keyword` field but is the length of that field if it does. Lastly, `?:` is lazy so the right hand side is not evaluated at all if the left hand side isn't null. 
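+For example, assuming a variable `x` that may be null, ++def y = x ?: "default"++ evaluates the right hand side only when `x` is null.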
-NOTE: Unlike Groovy, Painless' `?:` operator only coalesces `null`, not `false` +NOTE: Unlike Groovy, Painless' ++?:++ operator only coalesces `null`, not `false` or http://groovy-lang.org/semantics.html#Groovy-Truth[falsy] values. Strictly -speaking Painless' `?:` is more like Kotlin's `?:` than Groovy's `?:`. +speaking Painless' ++?:++ is more like Kotlin's ++?:++ than Groovy's ++?:++. NOTE: The result of `?.` and `?:` can't be assigned to primitives. So `int[] someArray = null; int l = someArray?.length` and From ce5c094cda7b7638e586e797adc066411cb49534 Mon Sep 17 00:00:00 2001 From: Simon Willnauer Date: Mon, 19 Dec 2016 10:48:38 +0100 Subject: [PATCH 21/26] Speed up filter and prefix settings operations (#22249) Today, if a settings object has many keys, i.e. if somebody specifies a gazillion synonyms in-line (arrays are keys ending with ordinals), operations like `Settings#getByPrefix` have a linear runtime. This can cause index creations to be very slow and produce lots of garbage at the same time. Moreover, `Settings#getByPrefix` is called quite frequently by group settings etc., which can put heavy load on the system. While it's not recommended to have synonym lists with 25k entries in-line, these use cases should not have such a large impact on the cluster / node. This change introduces a view-like map that filters based on the prefix and references the actual source map instead of copying all values over and over again. A benchmark that adds a single key with 25k random synonyms between 2 and 5 chars takes 16 seconds to get the synonym prefix 200 times, while the filtered view takes 4 ms for the 200 iterations. This relates to https://discuss.elastic.co/t/200-cpu-elasticsearch-5-index-creation-very-slow-with-a-huge-synonyms-list/69052 --- .../settings/AbstractScopedSettings.java | 6 +- .../common/settings/Setting.java | 4 +- .../common/settings/Settings.java | 156 ++++++++++++--- .../common/settings/SettingsTests.java | 183 ++++++++++++++++++ 4 files changed, 318 insertions(+), 31 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java b/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java index 3622623987b..89a56f03ecc 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/AbstractScopedSettings.java @@ -239,11 +239,9 @@ public abstract class AbstractScopedSettings extends AbstractComponent { */ public final void validate(Settings settings) { List<RuntimeException> exceptions = new ArrayList<>(); - // we want them sorted for deterministic error messages - SortedMap<String, String> sortedSettings = new TreeMap<>(settings.getAsMap()); - for (Map.Entry<String, String> entry : sortedSettings.entrySet()) { + for (String key : settings.getAsMap().keySet()) { // settings iterate in deterministic fashion try { - validate(entry.getKey(), settings); + validate(key, settings); } catch (RuntimeException ex) { exceptions.add(ex); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/Setting.java b/core/src/main/java/org/elasticsearch/common/settings/Setting.java index 22c74afee7c..5d9adbc34c1 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/Setting.java +++ b/core/src/main/java/org/elasticsearch/common/settings/Setting.java @@ -817,7 +817,9 @@ public class Setting extends ToXContentToBytes { @Override public void apply(Settings value, Settings current, Settings previous) { - logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), 
getRaw(current)); + if (logger.isInfoEnabled()) { // getRaw can create quite some objects + logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current)); + } consumer.accept(value); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/Settings.java b/core/src/main/java/org/elasticsearch/common/settings/Settings.java index 819edc246ac..67bf8081479 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/Settings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/Settings.java @@ -42,6 +42,8 @@ import java.io.InputStreamReader; import java.nio.charset.StandardCharsets; import java.nio.file.Files; import java.nio.file.Path; +import java.util.AbstractMap; +import java.util.AbstractSet; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; @@ -52,16 +54,15 @@ import java.util.Iterator; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; +import java.util.NoSuchElementException; import java.util.Objects; import java.util.Set; -import java.util.SortedMap; import java.util.TreeMap; import java.util.concurrent.TimeUnit; import java.util.function.Function; import java.util.function.Predicate; import java.util.regex.Matcher; import java.util.regex.Pattern; -import java.util.stream.Collectors; import static org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue; import static org.elasticsearch.common.unit.SizeValue.parseSizeValue; @@ -75,11 +76,10 @@ public final class Settings implements ToXContent { public static final Settings EMPTY = new Builder().build(); private static final Pattern ARRAY_PATTERN = Pattern.compile("(.*)\\.\\d+$"); - private SortedMap settings; + private Map settings; Settings(Map settings) { - // we use a sorted map for consistent serialization when using getAsMap() - this.settings = Collections.unmodifiableSortedMap(new TreeMap<>(settings)); + this.settings = Collections.unmodifiableMap(settings); } /** @@ -87,7 +87,8 @@ public final class Settings implements ToXContent { * @return an unmodifiable map of settings */ public Map getAsMap() { - return Collections.unmodifiableMap(this.settings); + // settings is always unmodifiable + return this.settings; } /** @@ -186,30 +187,14 @@ public final class Settings implements ToXContent { * A settings that are filtered (and key is removed) with the specified prefix. */ public Settings getByPrefix(String prefix) { - Builder builder = new Builder(); - for (Map.Entry entry : getAsMap().entrySet()) { - if (entry.getKey().startsWith(prefix)) { - if (entry.getKey().length() < prefix.length()) { - // ignore this. one - continue; - } - builder.put(entry.getKey().substring(prefix.length()), entry.getValue()); - } - } - return builder.build(); + return new Settings(new FilteredMap(this.settings, (k) -> k.startsWith(prefix), prefix)); } /** * Returns a new settings object that contains all setting of the current one filtered by the given settings key predicate. 
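+ * The result is a filtered view onto this settings object's underlying map; matching entries are referenced rather than copied.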
*/ public Settings filter(Predicate predicate) { - Builder builder = new Builder(); - for (Map.Entry entry : getAsMap().entrySet()) { - if (predicate.test(entry.getKey())) { - builder.put(entry.getKey(), entry.getValue()); - } - } - return builder.build(); + return new Settings(new FilteredMap(this.settings, predicate, null)); } /** @@ -443,6 +428,7 @@ public final class Settings implements ToXContent { } return getGroupsInternal(settingPrefix, ignoreNonGrouped); } + private Map getGroupsInternal(String settingPrefix, boolean ignoreNonGrouped) throws SettingsException { // we don't really care that it might happen twice Map> map = new LinkedHashMap<>(); @@ -602,7 +588,8 @@ public final class Settings implements ToXContent { public static final Settings EMPTY_SETTINGS = new Builder().build(); - private final Map map = new LinkedHashMap<>(); + // we use a sorted map for consistent serialization when using getAsMap() + private final Map map = new TreeMap<>(); private Builder() { @@ -1032,7 +1019,124 @@ public final class Settings implements ToXContent { * set on this builder. */ public Settings build() { - return new Settings(Collections.unmodifiableMap(map)); + return new Settings(map); + } + } + + // TODO We could use an FST internally to make things even faster and more compact + private static final class FilteredMap extends AbstractMap { + private final Map delegate; + private final Predicate filter; + private final String prefix; + // we cache that size since we have to iterate the entire set + // this is safe to do since this map is only used with unmodifiable maps + private int size = -1; + @Override + public Set> entrySet() { + Set> delegateSet = delegate.entrySet(); + AbstractSet> filterSet = new AbstractSet>() { + + @Override + public Iterator> iterator() { + Iterator> iter = delegateSet.iterator(); + + return new Iterator>() { + private int numIterated; + private Entry currentElement; + @Override + public boolean hasNext() { + if (currentElement != null) { + return true; // protect against calling hasNext twice + } else { + if (numIterated == size) { // early terminate + assert size != -1 : "size was never set: " + numIterated + " vs. " + size; + return false; + } + while (iter.hasNext()) { + if (filter.test((currentElement = iter.next()).getKey())) { + numIterated++; + return true; + } + } + // we didn't find anything + currentElement = null; + return false; + } + } + + @Override + public Entry next() { + if (currentElement == null && hasNext() == false) { // protect against no #hasNext call or not respecting it + + throw new NoSuchElementException("make sure to call hasNext first"); + } + final Entry current = this.currentElement; + this.currentElement = null; + if (prefix == null) { + return current; + } + return new Entry() { + @Override + public String getKey() { + return current.getKey().substring(prefix.length()); + } + + @Override + public String getValue() { + return current.getValue(); + } + + @Override + public String setValue(String value) { + throw new UnsupportedOperationException(); + } + }; + } + }; + } + + @Override + public int size() { + return FilteredMap.this.size(); + } + }; + return filterSet; + } + + private FilteredMap(Map delegate, Predicate filter, String prefix) { + this.delegate = delegate; + this.filter = filter; + this.prefix = prefix; + } + + @Override + public String get(Object key) { + if (key instanceof String) { + final String theKey = prefix == null ? 
(String)key : prefix + key; + if (filter.test(theKey)) { + return delegate.get(theKey); + } + } + return null; + } + + @Override + public boolean containsKey(Object key) { + if (key instanceof String) { + final String theKey = prefix == null ? (String) key : prefix + key; + if (filter.test(theKey)) { + return delegate.containsKey(theKey); + } + } + return false; + } + + @Override + public int size() { + if (size == -1) { + size = Math.toIntExact(delegate.keySet().stream().filter((e) -> filter.test(e)).count()); + } + return size; } } } diff --git a/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java b/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java index 346c5bc60de..62fa9ec82c4 100644 --- a/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java +++ b/core/src/test/java/org/elasticsearch/common/settings/SettingsTests.java @@ -20,12 +20,17 @@ package org.elasticsearch.common.settings; import org.elasticsearch.common.settings.loader.YamlSettingsLoader; +import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.test.ESTestCase; import org.hamcrest.Matchers; import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.NoSuchElementException; import java.util.Set; import static org.hamcrest.Matchers.allOf; @@ -131,15 +136,49 @@ public class SettingsTests extends ESTestCase { public void testGetAsSettings() { Settings settings = Settings.builder() + .put("bar", "hello world") .put("foo", "abc") .put("foo.bar", "def") .put("foo.baz", "ghi").build(); Settings fooSettings = settings.getAsSettings("foo"); + assertFalse(fooSettings.isEmpty()); + assertEquals(2, fooSettings.getAsMap().size()); assertThat(fooSettings.get("bar"), equalTo("def")); assertThat(fooSettings.get("baz"), equalTo("ghi")); } + public void testMultLevelGetPrefix() { + Settings settings = Settings.builder() + .put("1.2.3", "hello world") + .put("1.2.3.4", "abc") + .put("2.3.4", "def") + .put("3.4", "ghi").build(); + + Settings firstLevelSettings = settings.getByPrefix("1."); + assertFalse(firstLevelSettings.isEmpty()); + assertEquals(2, firstLevelSettings.getAsMap().size()); + assertThat(firstLevelSettings.get("2.3.4"), equalTo("abc")); + assertThat(firstLevelSettings.get("2.3"), equalTo("hello world")); + + Settings secondLevelSetting = firstLevelSettings.getByPrefix("2."); + assertFalse(secondLevelSetting.isEmpty()); + assertEquals(2, secondLevelSetting.getAsMap().size()); + assertNull(secondLevelSetting.get("2.3.4")); + assertNull(secondLevelSetting.get("1.2.3.4")); + assertNull(secondLevelSetting.get("1.2.3")); + assertThat(secondLevelSetting.get("3.4"), equalTo("abc")); + assertThat(secondLevelSetting.get("3"), equalTo("hello world")); + + Settings thirdLevelSetting = secondLevelSetting.getByPrefix("3."); + assertFalse(thirdLevelSetting.isEmpty()); + assertEquals(1, thirdLevelSetting.getAsMap().size()); + assertNull(thirdLevelSetting.get("2.3.4")); + assertNull(thirdLevelSetting.get("3.4")); + assertNull(thirdLevelSetting.get("1.2.3")); + assertThat(thirdLevelSetting.get("4"), equalTo("abc")); + } + public void testNames() { Settings settings = Settings.builder() .put("bar", "baz") @@ -298,4 +337,148 @@ public class SettingsTests extends ESTestCase { assertThat(settings.getAsMap().size(), equalTo(1)); assertThat(settings.get("foo.test"), equalTo("test")); } + + public void testFilteredMap() { + Settings.Builder builder = 
Settings.builder(); + builder.put("a", "a1"); + builder.put("a.b", "ab1"); + builder.put("a.b.c", "ab2"); + builder.put("a.c", "ac1"); + builder.put("a.b.c.d", "ab3"); + + + Map fiteredMap = builder.build().filter((k) -> k.startsWith("a.b")).getAsMap(); + assertEquals(3, fiteredMap.size()); + int numKeys = 0; + for (String k : fiteredMap.keySet()) { + numKeys++; + assertTrue(k.startsWith("a.b")); + } + + assertEquals(3, numKeys); + int numValues = 0; + + for (String v : fiteredMap.values()) { + numValues++; + assertTrue(v.startsWith("ab")); + } + assertEquals(3, numValues); + assertFalse(fiteredMap.containsKey("a.c")); + assertFalse(fiteredMap.containsKey("a")); + assertTrue(fiteredMap.containsKey("a.b")); + assertTrue(fiteredMap.containsKey("a.b.c")); + assertTrue(fiteredMap.containsKey("a.b.c.d")); + expectThrows(UnsupportedOperationException.class, () -> + fiteredMap.remove("a.b")); + assertEquals("ab1", fiteredMap.get("a.b")); + assertEquals("ab2", fiteredMap.get("a.b.c")); + assertEquals("ab3", fiteredMap.get("a.b.c.d")); + + Iterator iterator = fiteredMap.keySet().iterator(); + for (int i = 0; i < 10; i++) { + assertTrue(iterator.hasNext()); + } + assertEquals("a.b", iterator.next()); + if (randomBoolean()) { + assertTrue(iterator.hasNext()); + } + assertEquals("a.b.c", iterator.next()); + if (randomBoolean()) { + assertTrue(iterator.hasNext()); + } + assertEquals("a.b.c.d", iterator.next()); + assertFalse(iterator.hasNext()); + expectThrows(NoSuchElementException.class, () -> iterator.next()); + + } + + public void testPrefixMap() { + Settings.Builder builder = Settings.builder(); + builder.put("a", "a1"); + builder.put("a.b", "ab1"); + builder.put("a.b.c", "ab2"); + builder.put("a.c", "ac1"); + builder.put("a.b.c.d", "ab3"); + + Map prefixMap = builder.build().getByPrefix("a.").getAsMap(); + assertEquals(4, prefixMap.size()); + int numKeys = 0; + for (String k : prefixMap.keySet()) { + numKeys++; + assertTrue(k, k.startsWith("b") || k.startsWith("c")); + } + + assertEquals(4, numKeys); + int numValues = 0; + + for (String v : prefixMap.values()) { + numValues++; + assertTrue(v, v.startsWith("ab") || v.startsWith("ac")); + } + assertEquals(4, numValues); + assertFalse(prefixMap.containsKey("a")); + assertTrue(prefixMap.containsKey("c")); + assertTrue(prefixMap.containsKey("b")); + assertTrue(prefixMap.containsKey("b.c")); + assertTrue(prefixMap.containsKey("b.c.d")); + expectThrows(UnsupportedOperationException.class, () -> + prefixMap.remove("a.b")); + assertEquals("ab1", prefixMap.get("b")); + assertEquals("ab2", prefixMap.get("b.c")); + assertEquals("ab3", prefixMap.get("b.c.d")); + Iterator prefixIterator = prefixMap.keySet().iterator(); + for (int i = 0; i < 10; i++) { + assertTrue(prefixIterator.hasNext()); + } + assertEquals("b", prefixIterator.next()); + if (randomBoolean()) { + assertTrue(prefixIterator.hasNext()); + } + assertEquals("b.c", prefixIterator.next()); + if (randomBoolean()) { + assertTrue(prefixIterator.hasNext()); + } + assertEquals("b.c.d", prefixIterator.next()); + if (randomBoolean()) { + assertTrue(prefixIterator.hasNext()); + } + assertEquals("c", prefixIterator.next()); + assertFalse(prefixIterator.hasNext()); + expectThrows(NoSuchElementException.class, () -> prefixIterator.next()); + } + + public void testEmptyFilterMap() { + Settings.Builder builder = Settings.builder(); + builder.put("a", "a1"); + builder.put("a.b", "ab1"); + builder.put("a.b.c", "ab2"); + builder.put("a.c", "ac1"); + builder.put("a.b.c.d", "ab3"); + + Map fiteredMap = 
builder.build().filter((k) -> false).getAsMap();
+        assertEquals(0, fiteredMap.size());
+        for (String k : fiteredMap.keySet()) {
+            fail("no element");
+
+        }
+        for (String v : fiteredMap.values()) {
+            fail("no element");
+        }
+        assertFalse(fiteredMap.containsKey("a.c"));
+        assertFalse(fiteredMap.containsKey("a"));
+        assertFalse(fiteredMap.containsKey("a.b"));
+        assertFalse(fiteredMap.containsKey("a.b.c"));
+        assertFalse(fiteredMap.containsKey("a.b.c.d"));
+        expectThrows(UnsupportedOperationException.class, () ->
+            fiteredMap.remove("a.b"));
+        assertNull(fiteredMap.get("a.b"));
+        assertNull(fiteredMap.get("a.b.c"));
+        assertNull(fiteredMap.get("a.b.c.d"));
+
+        Iterator<String> iterator = fiteredMap.keySet().iterator();
+        for (int i = 0; i < 10; i++) {
+            assertFalse(iterator.hasNext());
+        }
+        expectThrows(NoSuchElementException.class, () -> iterator.next());
+    }
 }

From b857b316b658f93a626cf91b0f6e044f5d3445a6 Mon Sep 17 00:00:00 2001
From: Dimitris Athanasiou
Date: Mon, 19 Dec 2016 09:52:07 +0000
Subject: [PATCH 22/26] Allow setting aggs after parsing them elsewhere (#22238)

This commit exposes public getters for the aggregations in
AggregatorFactories.Builder. The reason is that it allows parsing the
aggregation object elsewhere (e.g. in a plugin) and then getting the
aggregation builders in order to set them in a SearchSourceBuilder, as
sketched below. The alternative would have been to expose a setter for
the AggregatorFactories.Builder object. But that would make the API a
bit trappy.
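For illustration, a minimal sketch of the intended usage (the `parsed` builder
and the avg aggregation are placeholders, not part of this change; the actual
parsing step is elided):

```java
// Builder obtained by parsing aggregations elsewhere (e.g. in a plugin).
AggregatorFactories.Builder parsed = new AggregatorFactories.Builder()
        .addAggregator(AggregationBuilders.avg("avg_price").field("price"));

// The new public getters return unmodifiable views of the parsed builders,
// which can then be set on a SearchSourceBuilder one by one.
SearchSourceBuilder source = new SearchSourceBuilder();
for (AggregationBuilder agg : parsed.getAggregatorFactories()) {
    source.aggregation(agg);
}
for (PipelineAggregationBuilder pipelineAgg : parsed.getPipelineAggregatorFactories()) {
    source.aggregation(pipelineAgg);
}
```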
---
 .../aggregations/AggregatorFactories.java | 11 ++---
 .../AggregatorFactoriesTests.java | 45 +++++++++++++++++++
 2 files changed, 51 insertions(+), 5 deletions(-)
 create mode 100644 core/src/test/java/org/elasticsearch/search/aggregations/AggregatorFactoriesTests.java

diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java
index 200232930a9..b0f52ffb130 100644
--- a/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java
+++ b/core/src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java
@@ -32,6 +32,7 @@ import org.elasticsearch.search.profile.aggregation.ProfilingAggregator;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.LinkedList;
@@ -255,7 +256,7 @@ public class AggregatorFactories {
             } else {
                 // Check the non-pipeline sub-aggregator
                 // factories
-                AggregationBuilder[] subBuilders = aggBuilder.factoriesBuilder.getAggregatorFactories();
+                List<AggregationBuilder> subBuilders = aggBuilder.factoriesBuilder.aggregationBuilders;
                 boolean foundSubBuilder = false;
                 for (AggregationBuilder subBuilder : subBuilders) {
                     if (aggName.equals(subBuilder.name)) {
@@ -297,12 +298,12 @@ public class AggregatorFactories {
             }
         }
 
-        AggregationBuilder[] getAggregatorFactories() {
-            return this.aggregationBuilders.toArray(new AggregationBuilder[this.aggregationBuilders.size()]);
+        public List<AggregationBuilder> getAggregatorFactories() {
+            return Collections.unmodifiableList(aggregationBuilders);
         }
 
-        List<PipelineAggregationBuilder> getPipelineAggregatorFactories() {
-            return this.pipelineAggregatorBuilders;
+        public List<PipelineAggregationBuilder> getPipelineAggregatorFactories() {
+            return Collections.unmodifiableList(pipelineAggregatorBuilders);
         }
 
         public int count() {

diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/AggregatorFactoriesTests.java b/core/src/test/java/org/elasticsearch/search/aggregations/AggregatorFactoriesTests.java
new file mode 100644
index 00000000000..1822b1e22e9
--- /dev/null
+++ b/core/src/test/java/org/elasticsearch/search/aggregations/AggregatorFactoriesTests.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.search.aggregations;
+
+import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders;
+import org.elasticsearch.test.ESTestCase;
+
+import java.util.List;
+
+import static org.hamcrest.Matchers.equalTo;
+
+public class AggregatorFactoriesTests extends ESTestCase {
+
+    public void testGetAggregatorFactories_returnsUnmodifiableList() {
+        AggregatorFactories.Builder builder = new AggregatorFactories.Builder().addAggregator(AggregationBuilders.avg("foo"));
+        List<AggregationBuilder> aggregatorFactories = builder.getAggregatorFactories();
+        assertThat(aggregatorFactories.size(), equalTo(1));
+        expectThrows(UnsupportedOperationException.class, () -> aggregatorFactories.add(AggregationBuilders.avg("bar")));
+    }
+
+    public void testGetPipelineAggregatorFactories_returnsUnmodifiableList() {
+        AggregatorFactories.Builder builder = new AggregatorFactories.Builder().addPipelineAggregator(
+                PipelineAggregatorBuilders.avgBucket("foo", "path1"));
+        List<PipelineAggregationBuilder> pipelineAggregatorFactories = builder.getPipelineAggregatorFactories();
+        assertThat(pipelineAggregatorFactories.size(), equalTo(1));
+        expectThrows(UnsupportedOperationException.class,
+                () -> pipelineAggregatorFactories.add(PipelineAggregatorBuilders.avgBucket("bar", "path2")));
+    }
+}

From b58bbb9e48a4a86cd5cd2fb6aaab8ddb4e58901e Mon Sep 17 00:00:00 2001
From: Boaz Leskes
Date: Mon, 19 Dec 2016 13:08:24 +0100
Subject: [PATCH 23/26] Add BWC layer to seq no infra and enable BWC tests (#22185)

Sequence number BWC logic consists of two elements:

1) Wire level BWC using stream versions.
2) A change to the global checkpoint maintenance semantics.

For the sequence number infra to work with mixed version clusters, we have to
consider the situation where the primary is on an old node and the replicas are
on new ones (i.e., the replicas will receive operations without a seq#) and also
the reverse (i.e., the primary sends operations to a replica, but the replica
can't process the seq# and respond with a local checkpoint). A new primary with
an old replica is rare because we do not allow a replica to recover from a new
primary. However, it can occur if the old primary failed and a new replica was
promoted, or during primary relocation, where the source primary is treated as a
replica until the master starts the target.

1) Old Primary & New Replica - this case is easy as it is taken care of by the
wire level BWC. All incoming requests will have their seq# set to
`UNASSIGNED_SEQ_NO`, which doesn't confuse the local checkpoint logic (keeping
it at `NO_OPS_PERFORMED`).
2) New Primary & Old Replica - this one is trickier, as the global checkpoint
service currently takes all in-sync replicas into consideration for the global
checkpoint calculation. In order to deal with old replicas, we change the
semantics to say all *new node* in-sync replicas. That means the replicas on old
nodes don't count for the global checkpointing. In this state the seq# infra is
not fully operational (you can't search on it, because copies may miss it), but
it is maintained on shards that can support it. The old replicas will have to go
through a file based recovery at some point and will get the seq# information at
that point. There is still an edge case where a new primary fails and an old
replica takes over. I'll discuss this one with @ywelsch, as I prefer to avoid it
completely.

This PR also re-enables the BWC tests, which had been disabled. As such, it had
to fix any BWC issue that had crept in. Most notably an issue with the removal
of the `timestamp` field in #21670.

The commit also includes a fix for the default value of the seq number field in
replicated write requests (it was 0 but should be -2), which surfaced some other
minor bugs that are fixed as well.

Last, I added some debugging tools, like saner node names, and made replication
requests implement `toString`. The wire-level element of the BWC layer boils
down to the pattern sketched below.
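A minimal sketch of that version-gated serialization (illustrative; this is the
shape used by `DocWriteResponse` and `ReplicatedWriteRequest` in the diff below):

```java
@Override
public void readFrom(StreamInput in) throws IOException {
    super.readFrom(in);
    if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) {
        seqNo = in.readZLong();
    } else {
        // an old node never assigned a sequence number to this operation
        seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO;
    }
}

@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) {
        out.writeZLong(seqNo);
    }
    // nothing is written for older nodes; they do not know about seq#
}
```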
---
 .gitignore | 5 +-
 .../gradle/test/ClusterFormationTasks.groovy | 1 +
 .../action/DocWriteResponse.java | 11 +-
 .../indices/flush/ShardFlushRequest.java | 2 +-
 .../admin/indices/stats/ShardStats.java | 9 +-
 .../action/bulk/TransportShardBulkAction.java | 13 +-
 .../action/delete/TransportDeleteAction.java | 11 +-
 .../action/index/IndexRequest.java | 5 +-
 .../action/index/TransportIndexAction.java | 17 +-
 .../replication/BasicReplicationRequest.java | 5 +
 .../replication/ReplicatedWriteRequest.java | 25 ++
 .../replication/ReplicationOperation.java | 2 +-
 .../replication/ReplicationRequest.java | 24 +-
 .../TransportReplicationAction.java | 41 ++-
 .../cluster/metadata/MappingMetaData.java | 2 +-
 .../seqno/GlobalCheckpointSyncAction.java | 31 +-
 .../cluster/IndicesClusterStateService.java | 11 +-
 .../ReplicationOperationTests.java | 12 +-
 .../TransportReplicationActionTests.java | 5 +
 .../TransportWriteActionTests.java | 5 +
 .../ESIndexLevelReplicationTestCase.java | 2 +-
 qa/backwards-5.0/build.gradle | 6 +-
 .../elasticsearch/backwards/IndexingIT.java | 332 ++++++++++++++++++
 qa/rolling-upgrade/build.gradle | 2 +-
 .../test/cat.shards/10_basic.yaml | 4 +
 .../test/rest/ESRestTestCase.java | 8 +-
 .../test/rest/yaml/ObjectPath.java | 17 +-
 27 files changed, 519 insertions(+), 89 deletions(-)
 create mode 100644 qa/backwards-5.0/src/test/java/org/elasticsearch/backwards/IndexingIT.java

diff --git a/.gitignore b/.gitignore
index b4ec8795057..5d7dbbefdc8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,11 +38,14 @@ dependency-reduced-pom.xml
 
 # osx stuff
 .DS_Store
 
+# default folders in which create_bwc_index.py expects to find old es versions
+/backwards
+/dev-tools/backwards
+
 # needed in case docs build is run...maybe we can configure doc build to generate files under build?
 html_docs
 
 # random old stuff that we should look at the necessity of...
/tmp/ -backwards/ eclipse-build diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy index 74cae08298b..4c6771ccda7 100644 --- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy +++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy @@ -268,6 +268,7 @@ class ClusterFormationTasks { static Task configureWriteConfigTask(String name, Project project, Task setup, NodeInfo node, NodeInfo seedNode) { Map esConfig = [ 'cluster.name' : node.clusterName, + 'node.name' : "node-" + node.nodeNum, 'pidfile' : node.pidFile, 'path.repo' : "${node.sharedDir}/repo", 'path.shared_data' : "${node.sharedDir}/", diff --git a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java index 7a12ab8ace2..aef99494d92 100644 --- a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java @@ -18,6 +18,7 @@ */ package org.elasticsearch.action; +import org.elasticsearch.Version; import org.elasticsearch.action.support.WriteRequest; import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; import org.elasticsearch.action.support.WriteResponse; @@ -214,7 +215,11 @@ public abstract class DocWriteResponse extends ReplicationResponse implements Wr type = in.readString(); id = in.readString(); version = in.readZLong(); - seqNo = in.readZLong(); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + seqNo = in.readZLong(); + } else { + seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + } forcedRefresh = in.readBoolean(); result = Result.readFrom(in); } @@ -226,7 +231,9 @@ public abstract class DocWriteResponse extends ReplicationResponse implements Wr out.writeString(type); out.writeString(id); out.writeZLong(version); - out.writeZLong(seqNo); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + out.writeZLong(seqNo); + } out.writeBoolean(forcedRefresh); result.writeTo(out); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/flush/ShardFlushRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/flush/ShardFlushRequest.java index 83eaf11ca3a..ac32b16eb57 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/flush/ShardFlushRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/flush/ShardFlushRequest.java @@ -58,6 +58,6 @@ public class ShardFlushRequest extends ReplicationRequest { @Override public String toString() { - return "flush {" + super.toString() + "}"; + return "flush {" + shardId + "}"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java index 877db0579a0..150b7c6a52b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.stats; +import org.elasticsearch.Version; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; @@ -103,7 +104,9 @@ public class ShardStats implements Streamable, Writeable, ToXContent { statePath = in.readString(); 
dataPath = in.readString(); isCustomDataPath = in.readBoolean(); - seqNoStats = in.readOptionalWriteable(SeqNoStats::new); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + seqNoStats = in.readOptionalWriteable(SeqNoStats::new); + } } @Override @@ -114,7 +117,9 @@ public class ShardStats implements Streamable, Writeable, ToXContent { out.writeString(statePath); out.writeString(dataPath); out.writeBoolean(isCustomDataPath); - out.writeOptionalWriteable(seqNoStats); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + out.writeOptionalWriteable(seqNoStats); + } } @Override diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index cef89e1ce78..86024e4dcd5 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -50,7 +50,6 @@ import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.mapper.MapperParsingException; -import org.elasticsearch.index.seqno.GlobalCheckpointSyncAction; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardClosedException; @@ -151,7 +150,7 @@ public class TransportShardBulkAction extends TransportWriteAction implement out.writeOptionalString(routing); out.writeOptionalString(parent); if (out.getVersion().before(Version.V_6_0_0_alpha1_UNRELEASED)) { - out.writeOptionalString(null); + // Serialize a fake timestamp. 5.x expect this value to be set by the #process method so we can't use null. + // On the other hand, indices created on 5.x do not index the timestamp field. Therefore passing a 0 (or any value) for + // the transport layer OK as it will be ignored. + out.writeOptionalString("0"); out.writeOptionalWriteable(null); } out.writeBytesReference(source); diff --git a/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java b/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java index 70220679752..9ed9f7f7cd1 100644 --- a/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java @@ -165,19 +165,22 @@ public class TransportIndexAction extends TransportWriteAction> extends ReplicationRequest implements WriteRequest { private RefreshPolicy refreshPolicy = RefreshPolicy.NONE; + private long seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + /** * Constructor for deserialization. */ @@ -62,11 +66,32 @@ public abstract class ReplicatedWriteRequest public void readFrom(StreamInput in) throws IOException { super.readFrom(in); refreshPolicy = RefreshPolicy.readFrom(in); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + seqNo = in.readZLong(); + } else { + seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); refreshPolicy.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + out.writeZLong(seqNo); + } + } + + /** + * Returns the sequence number for this operation. The sequence number is assigned while the operation + * is performed on the primary shard. 
+ */ + public long getSeqNo() { + return seqNo; + } + + /** sets the sequence number for this operation. should only be called on the primary shard */ + public void setSeqNo(long seqNo) { + this.seqNo = seqNo; } } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java index 47284789850..25dcc29a5c3 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java @@ -283,7 +283,7 @@ public class ReplicationOperation< } private void decPendingAndFinishIfNeeded() { - assert pendingActions.get() > 0; + assert pendingActions.get() > 0 : "pending action count goes below 0 for request [" + request + "]"; if (pendingActions.decrementAndGet() == 0) { finish(); } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java index d520b3d4e70..091f96c408f 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationRequest.java @@ -55,7 +55,6 @@ public abstract class ReplicationRequest(request, primary.allocationId().getId())); @@ -950,6 +951,8 @@ public abstract class TransportReplicationAction< public PrimaryResult perform(Request request) throws Exception { PrimaryResult result = shardOperationOnPrimary(request, indexShard); if (result.replicaRequest() != null) { + assert result.finalFailure == null : "a replica request [" + result.replicaRequest() + + "] with a primary failure [" + result.finalFailure + "]"; result.replicaRequest().primaryTerm(indexShard.getPrimaryTerm()); } return result; @@ -983,16 +986,25 @@ public abstract class TransportReplicationAction< @Override public void readFrom(StreamInput in) throws IOException { - super.readFrom(in); - localCheckpoint = in.readZLong(); - allocationId = in.readString(); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + super.readFrom(in); + localCheckpoint = in.readZLong(); + allocationId = in.readString(); + } else { + // 5.x used to read empty responses, which don't really read anything off the stream, so just do nothing. + } } @Override public void writeTo(StreamOutput out) throws IOException { - super.writeTo(out); - out.writeZLong(localCheckpoint); - out.writeString(allocationId); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + super.writeTo(out); + out.writeZLong(localCheckpoint); + out.writeString(allocationId); + } else { + // we use to write empty responses + Empty.INSTANCE.writeTo(out); + } } @Override @@ -1016,10 +1028,9 @@ public abstract class TransportReplicationAction< listener.onFailure(new NoNodeAvailableException("unknown node [" + nodeId + "]")); return; } - transportService.sendRequest(node, transportReplicaAction, - new ConcreteShardRequest<>(request, replica.allocationId().getId()), transportOptions, - // Eclipse can't handle when this is <> so we specify the type here. 
- new ActionListenerResponseHandler(listener, ReplicaResponse::new)); + final ConcreteShardRequest concreteShardRequest = + new ConcreteShardRequest<>(request, replica.allocationId().getId()); + sendReplicaRequest(concreteShardRequest, node, listener); } @Override @@ -1060,6 +1071,14 @@ public abstract class TransportReplicationAction< } } + /** sends the given replica request to the supplied nodes */ + protected void sendReplicaRequest(ConcreteShardRequest concreteShardRequest, DiscoveryNode node, + ActionListener listener) { + transportService.sendRequest(node, transportReplicaAction, concreteShardRequest, transportOptions, + // Eclipse can't handle when this is <> so we specify the type here. + new ActionListenerResponseHandler(listener, ReplicaResponse::new)); + } + /** a wrapper class to encapsulate a request when being sent to a specific allocation id **/ public static final class ConcreteShardRequest extends TransportRequest { diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java index 3ea61385f1c..0f9db99326d 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java @@ -204,7 +204,7 @@ public class MappingMetaData extends AbstractDiffable { // timestamp out.writeBoolean(false); // enabled out.writeString(DateFieldMapper.DEFAULT_DATE_TIME_FORMATTER.format()); - out.writeOptionalString(null); + out.writeOptionalString("now"); // 5.x default out.writeOptionalBoolean(null); } out.writeBoolean(hasParentField()); diff --git a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java index 8e877298313..6e13573794d 100644 --- a/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java +++ b/core/src/main/java/org/elasticsearch/index/seqno/GlobalCheckpointSyncAction.java @@ -20,20 +20,21 @@ package org.elasticsearch.index.seqno; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.replication.ReplicationOperation; import org.elasticsearch.action.support.replication.ReplicationRequest; import org.elasticsearch.action.support.replication.ReplicationResponse; import org.elasticsearch.action.support.replication.TransportReplicationAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; @@ -41,8 +42,6 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; import java.io.IOException; -import java.io.UncheckedIOException; -import 
java.io.UnsupportedEncodingException; public class GlobalCheckpointSyncAction extends TransportReplicationAction { @@ -65,6 +64,17 @@ public class GlobalCheckpointSyncAction extends TransportReplicationAction concreteShardRequest, DiscoveryNode node, + ActionListener listener) { + if (node.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + super.sendReplicaRequest(concreteShardRequest, node, listener); + } else { + listener.onResponse( + new ReplicaResponse(concreteShardRequest.getTargetAllocationID(), SequenceNumbersService.UNASSIGNED_SEQ_NO)); + } + } + @Override protected PrimaryResult shardOperationOnPrimary(PrimaryRequest request, IndexShard indexShard) throws Exception { long checkpoint = indexShard.getGlobalCheckpoint(); @@ -105,6 +115,11 @@ public class GlobalCheckpointSyncAction extends TransportReplicationAction { @@ -134,6 +149,14 @@ public class GlobalCheckpointSyncAction extends TransportReplicationAction params = new HashMap<>(); + params.put("wait_for_status", "green"); + params.put("wait_for_no_relocating_shards", "true"); + assertOK(client().performRequest("GET", "_cluster/health", params)); + } + + private void createIndex(String name, Settings settings) throws IOException { + assertOK(client().performRequest("PUT", name, Collections.emptyMap(), + new StringEntity("{ \"settings\": " + Strings.toString(settings, true) + " }"))); + } + + private void updateIndexSetting(String name, Settings.Builder settings) throws IOException { + updateIndexSetting(name, settings.build()); + } + private void updateIndexSetting(String name, Settings settings) throws IOException { + assertOK(client().performRequest("PUT", name + "/_settings", Collections.emptyMap(), + new StringEntity(Strings.toString(settings, true)))); + } + + protected int indexDocs(String index, final int idStart, final int numDocs) throws IOException { + for (int i = 0; i < numDocs; i++) { + final int id = idStart + i; + assertOK(client().performRequest("PUT", index + "/test/" + id, emptyMap(), + new StringEntity("{\"test\": \"test_" + id + "\"}"))); + } + return numDocs; + } + + public void testSeqNoCheckpoints() throws Exception { + Nodes nodes = buildNodeAndVersions(); + logger.info("cluster discovered: {}", nodes.toString()); + final String bwcNames = nodes.getBWCNodes().stream().map(Node::getNodeName).collect(Collectors.joining(",")); + Settings.Builder settings = Settings.builder() + .put(IndexMetaData.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1) + .put(IndexMetaData.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 2) + .put("index.routing.allocation.include._name", bwcNames); + + final boolean checkGlobalCheckpoints = nodes.getMaster().getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED); + logger.info("master version is [{}], global checkpoints will be [{}]", nodes.getMaster().getVersion(), + checkGlobalCheckpoints ? 
"checked" : "not be checked"); + if (checkGlobalCheckpoints) { + settings.put(IndexSettings.INDEX_SEQ_NO_CHECKPOINT_SYNC_INTERVAL.getKey(), "100ms"); + } + final String index = "test"; + createIndex(index, settings.build()); + try (RestClient newNodeClient = buildClient(restClientSettings(), + nodes.getNewNodes().stream().map(Node::getPublishAddress).toArray(HttpHost[]::new))) { + int numDocs = indexDocs(index, 0, randomInt(5)); + assertSeqNoOnShards(nodes, checkGlobalCheckpoints, 0, newNodeClient); + + logger.info("allowing shards on all nodes"); + updateIndexSetting(index, Settings.builder().putNull("index.routing.allocation.include._name")); + ensureGreen(); + logger.info("indexing some more docs"); + numDocs += indexDocs(index, numDocs, randomInt(5)); + assertSeqNoOnShards(nodes, checkGlobalCheckpoints, 0, newNodeClient); + logger.info("moving primary to new node"); + Shard primary = buildShards(nodes, newNodeClient).stream().filter(Shard::isPrimary).findFirst().get(); + updateIndexSetting(index, Settings.builder().put("index.routing.allocation.exclude._name", primary.getNode().getNodeName())); + ensureGreen(); + logger.info("indexing some more docs"); + int numDocsOnNewPrimary = indexDocs(index, numDocs, randomInt(5)); + numDocs += numDocsOnNewPrimary; + assertSeqNoOnShards(nodes, checkGlobalCheckpoints, numDocsOnNewPrimary, newNodeClient); + } + } + + private void assertSeqNoOnShards(Nodes nodes, boolean checkGlobalCheckpoints, int numDocs, RestClient client) throws Exception { + assertBusy(() -> { + try { + List shards = buildShards(nodes, client); + Shard primaryShard = shards.stream().filter(Shard::isPrimary).findFirst().get(); + assertNotNull("failed to find primary shard", primaryShard); + final long expectedGlobalCkp; + final long expectMaxSeqNo; + logger.info("primary resolved to node {}", primaryShard.getNode()); + if (primaryShard.getNode().getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + expectMaxSeqNo = numDocs - 1; + expectedGlobalCkp = numDocs - 1; + } else { + expectedGlobalCkp = SequenceNumbersService.UNASSIGNED_SEQ_NO; + expectMaxSeqNo = SequenceNumbersService.NO_OPS_PERFORMED; + } + for (Shard shard : shards) { + if (shard.getNode().getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + final SeqNoStats seqNoStats = shard.getSeqNoStats(); + logger.info("stats for {}, primary [{}]: [{}]", shard.getNode(), shard.isPrimary(), seqNoStats); + assertThat("max_seq no on " + shard.getNode() + " is wrong", seqNoStats.getMaxSeqNo(), equalTo(expectMaxSeqNo)); + assertThat("localCheckpoint no on " + shard.getNode() + " is wrong", + seqNoStats.getLocalCheckpoint(), equalTo(expectMaxSeqNo)); + if (checkGlobalCheckpoints) { + assertThat("globalCheckpoint no on " + shard.getNode() + " is wrong", + seqNoStats.getGlobalCheckpoint(), equalTo(expectedGlobalCkp)); + } + } else { + logger.info("skipping seq no test on {}", shard.getNode()); + } + } + } catch (IOException e) { + throw new AssertionError("unexpected io exception", e); + } + }); + } + + private List buildShards(Nodes nodes, RestClient client) throws IOException { + Response response = client.performRequest("GET", "test/_stats", singletonMap("level", "shards")); + List shardStats = objectPath(response).evaluate("indices.test.shards.0"); + ArrayList shards = new ArrayList<>(); + for (Object shard : shardStats) { + final String nodeId = ObjectPath.evaluate(shard, "routing.node"); + final Boolean primary = ObjectPath.evaluate(shard, "routing.primary"); + final Node node = nodes.getSafe(nodeId); + final 
SeqNoStats seqNoStats; + if (node.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + Integer maxSeqNo = ObjectPath.evaluate(shard, "seq_no.max"); + Integer localCheckpoint = ObjectPath.evaluate(shard, "seq_no.local_checkpoint"); + Integer globalCheckpoint = ObjectPath.evaluate(shard, "seq_no.global_checkpoint"); + seqNoStats = new SeqNoStats(maxSeqNo, localCheckpoint, globalCheckpoint); + } else { + seqNoStats = null; + } + shards.add(new Shard(node, primary, seqNoStats)); + } + return shards; + } + + private Nodes buildNodeAndVersions() throws IOException { + Response response = client().performRequest("GET", "_nodes"); + ObjectPath objectPath = objectPath(response); + Map nodesAsMap = objectPath.evaluate("nodes"); + Nodes nodes = new Nodes(); + for (String id : nodesAsMap.keySet()) { + nodes.add(new Node( + id, + objectPath.evaluate("nodes." + id + ".name"), + Version.fromString(objectPath.evaluate("nodes." + id + ".version")), + HttpHost.create(objectPath.evaluate("nodes." + id + ".http.publish_address")))); + } + response = client().performRequest("GET", "_cluster/state"); + nodes.setMasterNodeId(objectPath(response).evaluate("master_node")); + return nodes; + } + + final class Nodes extends HashMap { + + private String masterNodeId = null; + + public Node getMaster() { + return get(masterNodeId); + } + + public void setMasterNodeId(String id) { + if (get(id) == null) { + throw new IllegalArgumentException("node with id [" + id + "] not found. got:" + toString()); + } + masterNodeId = id; + } + + public void add(Node node) { + put(node.getId(), node); + } + + public List getNewNodes() { + Version bwcVersion = getBWCVersion(); + return values().stream().filter(n -> n.getVersion().after(bwcVersion)).collect(Collectors.toList()); + } + + public List getBWCNodes() { + Version bwcVersion = getBWCVersion(); + return values().stream().filter(n -> n.getVersion().equals(bwcVersion)).collect(Collectors.toList()); + } + + public Version getBWCVersion() { + if (isEmpty()) { + throw new IllegalStateException("no nodes available"); + } + return Version.fromId(values().stream().map(node -> node.getVersion().id).min(Integer::compareTo).get()); + } + + public Node getSafe(String id) { + Node node = get(id); + if (node == null) { + throw new IllegalArgumentException("node with id [" + id + "] not found"); + } + return node; + } + + @Override + public String toString() { + return "Nodes{" + + "masterNodeId='" + masterNodeId + "'\n" + + values().stream().map(Node::toString).collect(Collectors.joining("\n")) + + '}'; + } + } + + final class Node { + private final String id; + private final String nodeName; + private final Version version; + private final HttpHost publishAddress; + + Node(String id, String nodeName, Version version, HttpHost publishAddress) { + this.id = id; + this.nodeName = nodeName; + this.version = version; + this.publishAddress = publishAddress; + } + + public String getId() { + return id; + } + + public String getNodeName() { + return nodeName; + } + + public HttpHost getPublishAddress() { + return publishAddress; + } + + public Version getVersion() { + return version; + } + + @Override + public String toString() { + return "Node{" + + "id='" + id + '\'' + + ", nodeName='" + nodeName + '\'' + + ", version=" + version + + '}'; + } + } + + final class Shard { + private final Node node; + private final boolean Primary; + private final SeqNoStats seqNoStats; + + Shard(Node node, boolean primary, SeqNoStats seqNoStats) { + this.node = node; + Primary = primary; + 
this.seqNoStats = seqNoStats; + } + + public Node getNode() { + return node; + } + + public boolean isPrimary() { + return Primary; + } + + public SeqNoStats getSeqNoStats() { + return seqNoStats; + } + + @Override + public String toString() { + return "Shard{" + + "node=" + node + + ", Primary=" + Primary + + ", seqNoStats=" + seqNoStats + + '}'; + } + } +} diff --git a/qa/rolling-upgrade/build.gradle b/qa/rolling-upgrade/build.gradle index e17e2454108..182e6a9f7d9 100644 --- a/qa/rolling-upgrade/build.gradle +++ b/qa/rolling-upgrade/build.gradle @@ -25,7 +25,7 @@ task oldClusterTest(type: RestIntegTestTask) { mustRunAfter(precommit) cluster { distribution = 'zip' - bwcVersion = '6.0.0-alpha1-SNAPSHOT' // TODO: either randomize, or make this settable with sysprop + bwcVersion = '5.2.0-SNAPSHOT' // TODO: either randomize, or make this settable with sysprop numBwcNodes = 2 numNodes = 2 clusterName = 'rolling-upgrade' diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/cat.shards/10_basic.yaml b/rest-api-spec/src/main/resources/rest-api-spec/test/cat.shards/10_basic.yaml index 8c7cd83b0e0..4a37734d284 100755 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/cat.shards/10_basic.yaml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/cat.shards/10_basic.yaml @@ -1,5 +1,9 @@ --- "Help": + - skip: + version: " - 5.99.99" + reason: seq no stats were added in 6.0.0 + - do: cat.shards: help: true diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java index 0fc8cb4506b..975e6e2f866 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/ESRestTestCase.java @@ -111,8 +111,8 @@ public abstract class ESRestTestCase extends ESTestCase { } clusterHosts = unmodifiableList(hosts); logger.info("initializing REST clients against {}", clusterHosts); - client = buildClient(restClientSettings()); - adminClient = buildClient(restAdminSettings()); + client = buildClient(restClientSettings(), clusterHosts.toArray(new HttpHost[clusterHosts.size()])); + adminClient = buildClient(restAdminSettings(), clusterHosts.toArray(new HttpHost[clusterHosts.size()])); } assert client != null; assert adminClient != null; @@ -272,8 +272,8 @@ public abstract class ESRestTestCase extends ESTestCase { return "http"; } - private RestClient buildClient(Settings settings) throws IOException { - RestClientBuilder builder = RestClient.builder(clusterHosts.toArray(new HttpHost[clusterHosts.size()])); + protected RestClient buildClient(Settings settings, HttpHost[] hosts) throws IOException { + RestClientBuilder builder = RestClient.builder(hosts); String keystorePath = settings.get(TRUSTSTORE_PATH); if (keystorePath != null) { final String keystorePass = settings.get(TRUSTSTORE_PASSWORD); diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java index 6311944fdcb..265fd7b3e85 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ObjectPath.java @@ -46,17 +46,28 @@ public class ObjectPath { this.object = object; } + + /** + * A utility method that creates an {@link ObjectPath} via {@link #ObjectPath(Object)} returns + * the result of calling {@link #evaluate(String)} on it. 
+     */
+    public static <T> T evaluate(Object object, String path) throws IOException {
+        return new ObjectPath(object).evaluate(path, Stash.EMPTY);
+    }
+
+
     /**
      * Returns the object corresponding to the provided path if present, null otherwise
      */
-    public Object evaluate(String path) throws IOException {
+    public <T> T evaluate(String path) throws IOException {
         return evaluate(path, Stash.EMPTY);
     }
 
     /**
      * Returns the object corresponding to the provided path if present, null otherwise
      */
-    public Object evaluate(String path, Stash stash) throws IOException {
+    @SuppressWarnings("unchecked")
+    public <T> T evaluate(String path, Stash stash) throws IOException {
         String[] parts = parsePath(path);
         Object object = this.object;
         for (String part : parts) {
@@ -65,7 +76,7 @@ public class ObjectPath {
                 return null;
             }
         }
-        return object;
+        return (T) object;
     }
 
     @SuppressWarnings("unchecked")

From b2e93d28707521667a4fd9dd9cea2e73fa604c8e Mon Sep 17 00:00:00 2001
From: Adrien Grand
Date: Mon, 19 Dec 2016 14:21:21 +0100
Subject: [PATCH 24/26] Be explicit about the fact that backslashes need to be escaped. (#22257)

Relates #22255
---
 .../query-dsl/query-string-query.asciidoc | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/docs/reference/query-dsl/query-string-query.asciidoc b/docs/reference/query-dsl/query-string-query.asciidoc
index 2dcfcde1ca0..cc8ac5068c4 100644
--- a/docs/reference/query-dsl/query-string-query.asciidoc
+++ b/docs/reference/query-dsl/query-string-query.asciidoc
@@ -197,7 +197,24 @@ GET /_search
 
 Another option is to provide the wildcard fields search in the query
 string itself (properly escaping the `*` sign), for example:
-`city.\*:something`.
+`city.\*:something`:
+
+[source,js]
+--------------------------------------------------
+GET /_search
+{
+    "query": {
+        "query_string" : {
+            "query" : "city.\\*:(this AND that OR thus)",
+            "use_dis_max" : true
+        }
+    }
+}
+--------------------------------------------------
+// CONSOLE
+
+NOTE: Since `\` (backslash) is a special character in JSON strings, it needs to
+be escaped, hence the two backslashes in the above `query_string`.
 
 When running the `query_string` query against multiple fields, the following
 additional parameters are allowed:

From 1cabf66bd50004255a4fc727ce3437fa80f3f87d Mon Sep 17 00:00:00 2001
From: Yannick Welsch
Date: Mon, 19 Dec 2016 14:36:58 +0100
Subject: [PATCH 25/26] Use correct block levels for TRA subclasses (#22224)

Subclasses of TransportReplicationAction can currently choose the block levels
for which the request will be blocked.

- Refresh/Flush were using the block level METADATA_WRITE although they don't
  operate at the cluster meta data level (but rather on shard level meta data,
  which is not represented in the block levels). Their level has been changed
  to null so that they can operate freely in the presence of blocks, as
  sketched below.
- GlobalCheckpointSyncAction was using WRITE although it does not make any
  changes to the actual documents of a shard. The level has been changed to
  null so that it can operate freely in the presence of blocks.

The commit also adds a check for closed indices in TRA so that the right
exception is thrown if refresh/flush/checkpoint syncing is attempted on a
closed index (before it was throwing an IndexNotFoundException, now it's
throwing IndexClosedException).
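Concretely, a subclass opts out by returning null from the block level hooks. A
schematic sketch (the class and request names are placeholders; constructors and
the remaining abstract methods are omitted):

```java
// Returning null from these hooks means "do not check blocks at this level",
// so the action can proceed even while a block is in place.
public class MyReplicationAction extends TransportReplicationAction<MyRequest, MyRequest, ReplicationResponse> {

    @Override
    protected ClusterBlockLevel globalBlockLevel() {
        return null; // no global block check for this action
    }

    @Override
    protected ClusterBlockLevel indexBlockLevel() {
        return null; // no index-level block check either
    }
}
```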
---
 .../flush/TransportShardFlushAction.java | 10 ----
 .../refresh/TransportShardRefreshAction.java | 10 ----
 .../TransportReplicationAction.java | 37 +++++++++-----
 .../replication/TransportWriteAction.java | 11 +++++
 .../admin/indices/flush/FlushBlocksIT.java | 25 +---------
 .../indices/refresh/RefreshBlocksIT.java | 24 +---------
 .../TransportReplicationActionTests.java | 48 +++++++++++++++++++
 7 files changed, 86 insertions(+), 79 deletions(-)

diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java
index 1ec7186393f..b04bb86a63c 100644
--- a/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/indices/flush/TransportShardFlushAction.java
@@ -64,16 +64,6 @@ public class TransportShardFlushAction extends TransportReplicationAction listener = new PlainActionFuture<>();
         ReplicationTask task = maybeTask();
+        Action action = new Action(Settings.EMPTY, "testActionWithBlocks", transportService, clusterService, shardStateAction, threadPool) {
+            @Override
+            protected ClusterBlockLevel globalBlockLevel() {
+                return ClusterBlockLevel.WRITE;
+            }
+        };
 
         ClusterBlocks.Builder block = ClusterBlocks.builder()
             .addGlobalBlock(new ClusterBlock(1, "non retryable", false, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL));
@@ -216,6 +225,17 @@ public class TransportReplicationActionTests extends ESTestCase {
         assertListenerThrows("primary phase should fail operation when moving from a retryable block to a non-retryable one", listener, ClusterBlockException.class);
         assertIndexShardUninitialized();
+
+        action = new Action(Settings.EMPTY, "testActionWithNoBlocks", transportService, clusterService, shardStateAction, threadPool) {
+            @Override
+            protected ClusterBlockLevel globalBlockLevel() {
+                return null;
+            }
+        };
+        listener = new PlainActionFuture<>();
+        reroutePhase = action.new ReroutePhase(task, new Request().timeout("5ms"), listener);
+        reroutePhase.run();
+        assertListenerThrows("should fail with an IndexNotFoundException when no blocks checked", listener, IndexNotFoundException.class);
     }
 
     public void assertIndexShardUninitialized() {
@@ -337,6 +357,34 @@
     }
 
+    public void testClosedIndexOnReroute() throws InterruptedException {
+        final String index = "test";
+        // no replicas in order to skip the replication part
+        setState(clusterService,
+            new ClusterStateChanges().closeIndices(state(index, true, ShardRoutingState.UNASSIGNED), new CloseIndexRequest(index)));
+        logger.debug("--> using initial state:\n{}", clusterService.state());
+        Request request = new Request(new ShardId("test", "_na_", 0)).timeout("1ms");
+        PlainActionFuture listener = new PlainActionFuture<>();
+        ReplicationTask task = maybeTask();
+
+        ClusterBlockLevel indexBlockLevel = randomBoolean() ? ClusterBlockLevel.WRITE : null;
+        Action action = new Action(Settings.EMPTY, "testActionWithBlocks", transportService, clusterService, shardStateAction, threadPool) {
+            @Override
+            protected ClusterBlockLevel indexBlockLevel() {
+                return indexBlockLevel;
+            }
+        };
+        Action.ReroutePhase reroutePhase = action.new ReroutePhase(task, request, listener);
+        reroutePhase.run();
+        if (indexBlockLevel == ClusterBlockLevel.WRITE) {
+            assertListenerThrows("must throw block exception", listener, ClusterBlockException.class);
+        } else {
+            assertListenerThrows("must throw index closed exception", listener, IndexClosedException.class);
+        }
+        assertPhase(task, "failed");
+        assertFalse(request.isRetrySet.get());
+    }
+
     public void testStalePrimaryShardOnReroute() throws InterruptedException {
         final String index = "test";
         final ShardId shardId = new ShardId(index, "_na_", 0);

From 63af03a1042a6ae1ed333aaabcd9cfc3a9fc3fec Mon Sep 17 00:00:00 2001
From: Yannick Welsch
Date: Mon, 19 Dec 2016 14:39:50 +0100
Subject: [PATCH 26/26] Atomic mapping updates across types (#22220)

This commit makes mapping updates atomic when multiple types in an index are
updated. Mappings for an index are now applied in a single atomic operation,
which also makes it possible to optimize some of the cross-type updates and
checks, as sketched below.
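Roughly, the per-type merge loop turns into a single call per index
(illustrative sketch; `mapperService`, `indexMetaData` and `updateAllTypes`
stand in for the surrounding state, and error handling is elided):

```java
// Before: one merge() call per type; after: all types of an index are merged
// in one operation, so a failure leaves no partially applied mappings.
Map<String, Map<String, Object>> mappings = new HashMap<>();
for (ObjectCursor<MappingMetaData> cursor : indexMetaData.getMappings().values()) {
    MappingMetaData mappingMetaData = cursor.value;
    mappings.put(mappingMetaData.type(), MapperService.parseMapping(mappingMetaData.source().string()));
}
// user-initiated updates go through MAPPING_UPDATE ...
mapperService.merge(mappings, MapperService.MergeReason.MAPPING_UPDATE, updateAllTypes);
// ... while replaying existing mappings can be handed the index metadata directly
mapperService.merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true);
```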
---
 .../metadata/MetaDataCreateIndexService.java | 7 +-
 .../metadata/MetaDataIndexAliasesService.java | 8 +-
 .../MetaDataIndexTemplateService.java | 3 +-
 .../metadata/MetaDataIndexUpgradeService.java | 5 +-
 .../metadata/MetaDataMappingService.java | 13 +-
 .../index/mapper/MapperService.java | 359 +++++++++++-------
 .../index/shard/LocalShardSnapshot.java | 7 +-
 .../index/shard/StoreRecovery.java | 7 +-
 .../elasticsearch/indices/IndicesService.java | 6 +-
 .../admin/indices/create/CreateIndexIT.java | 12 +-
 .../gateway/GatewayIndexStateIT.java | 3 +-
 .../mapper/GeoShapeFieldMapperTests.java | 10 -
 .../index/mapper/MapperServiceTests.java | 8 +-
 .../index/mapper/UpdateMappingTests.java | 8 -
 .../search/child/ChildQuerySearchIT.java | 11 +-
 .../PercolatorFieldMapperTests.java | 9 +-
 .../index/shard/IndexShardTestCase.java | 4 +-
 17 files changed, 256 insertions(+), 224 deletions(-)

diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java
index 422492e396b..9d81939995a 100644
--- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java
+++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java
@@ -67,6 +67,7 @@ import org.elasticsearch.index.IndexService;
 import org.elasticsearch.index.mapper.DocumentMapper;
 import org.elasticsearch.index.mapper.MapperParsingException;
 import org.elasticsearch.index.mapper.MapperService;
+import org.elasticsearch.index.mapper.MapperService.MergeReason;
 import org.elasticsearch.index.query.QueryShardContext;
 import org.elasticsearch.indices.IndexCreationException;
 import org.elasticsearch.indices.IndicesService;
@@ -356,10 +357,10 @@ public class MetaDataCreateIndexService extends AbstractComponent {
                 // now add the mappings
                 MapperService mapperService = indexService.mapperService();
                 try {
-                    mapperService.merge(mappings, request.updateAllTypes());
-                } catch (MapperParsingException mpe) {
+                    mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, request.updateAllTypes());
+                } catch (Exception e) {
                     removalExtraInfo = "failed on parsing default mapping/mappings on index creation";
-                    throw mpe;
+                    throw e;
                 }
 
                 // the context is only used for validation so it's fine to pass fake values for the shard id and the current

diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java
index c1de936d9c7..f1584ee325c 100644
--- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java
+++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexAliasesService.java
@@ -141,15 +141,11 @@ public class MetaDataIndexAliasesService extends AbstractComponent {
                     // temporarily create the index and add mappings so we can parse the filter
                     try {
                         indexService = indicesService.createIndex(index, emptyList(), shardId -> {});
+                        indicesToClose.add(index.getIndex());
                     } catch (IOException e) {
                         throw new ElasticsearchException("Failed to create temporary index for parsing the alias", e);
                     }
-                    for (ObjectCursor<MappingMetaData> cursor : index.getMappings().values()) {
-                        MappingMetaData mappingMetaData = cursor.value;
-                        indexService.mapperService().merge(mappingMetaData.type(), mappingMetaData.source(),
-                            MapperService.MergeReason.MAPPING_RECOVERY, false);
-                    }
-                    indicesToClose.add(index.getIndex());
+                    indexService.mapperService().merge(index, MapperService.MergeReason.MAPPING_RECOVERY, false);
                 }
                 indices.put(action.getIndex(), indexService);
             }

diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
index 020a1d75231..2e11f1e7f45 100644
--- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
+++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexTemplateService.java
@@ -39,6 +39,7 @@ import org.elasticsearch.index.Index;
 import org.elasticsearch.index.IndexService;
 import org.elasticsearch.index.mapper.MapperParsingException;
 import org.elasticsearch.index.mapper.MapperService;
+import org.elasticsearch.index.mapper.MapperService.MergeReason;
 import org.elasticsearch.indices.IndexTemplateMissingException;
 import org.elasticsearch.indices.IndicesService;
 import org.elasticsearch.indices.InvalidIndexTemplateException;
@@ -222,7 +223,7 @@ public class MetaDataIndexTemplateService extends AbstractComponent {
                 mappingsForValidation.put(entry.getKey(), MapperService.parseMapping(entry.getValue()));
             }
 
-            dummyIndexService.mapperService().merge(mappingsForValidation, false);
+            dummyIndexService.mapperService().merge(mappingsForValidation, MergeReason.MAPPING_UPDATE, false);
         } finally {
             if (createdIndex != null) {

diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java
index 2a8b80b9e68..e299874990b 100644
--- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java
+++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataIndexUpgradeService.java
@@ -147,10 +147,7 @@ public class MetaDataIndexUpgradeService extends AbstractComponent {
             };
             try (IndexAnalyzers fakeIndexAnalzyers = new IndexAnalyzers(indexSettings, fakeDefault, fakeDefault, fakeDefault, analyzerMap)) {
                 MapperService mapperService = new MapperService(indexSettings, fakeIndexAnalzyers, similarityService, mapperRegistry, () -> null);
-                for (ObjectCursor<MappingMetaData> cursor : indexMetaData.getMappings().values()) {
-                    MappingMetaData mappingMetaData = cursor.value;
-
mapperService.merge(mappingMetaData.type(), mappingMetaData.source(), MapperService.MergeReason.MAPPING_RECOVERY, false); - } + mapperService.merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, false); } } catch (Exception ex) { // Wrap the inner exception so we have the index name in the exception message diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java index 4e9b114ff13..8defc5c7c47 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java @@ -43,6 +43,7 @@ import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.mapper.MapperService.MergeReason; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.InvalidTypeNameException; @@ -146,10 +147,7 @@ public class MetaDataMappingService extends AbstractComponent { // we need to create the index here, and add the current mapping to it, so we can merge indexService = indicesService.createIndex(indexMetaData, Collections.emptyList(), shardId -> {}); removeIndex = true; - for (ObjectCursor<MappingMetaData> metaData : indexMetaData.getMappings().values()) { - // don't apply the default mapping, it has been applied when the mapping was created - indexService.mapperService().merge(metaData.value.type(), metaData.value.source(), MapperService.MergeReason.MAPPING_RECOVERY, true); - } + indexService.mapperService().merge(indexMetaData, MergeReason.MAPPING_RECOVERY, true); } IndexMetaData.Builder builder = IndexMetaData.builder(indexMetaData); @@ -226,10 +224,7 @@ public class MetaDataMappingService extends AbstractComponent { MapperService mapperService = indicesService.createIndexMapperService(indexMetaData); indexMapperServices.put(index, mapperService); // add mappings for all types, we need them for cross-type validation - for (ObjectCursor<MappingMetaData> mapping : indexMetaData.getMappings().values()) { - mapperService.merge(mapping.value.type(), mapping.value.source(), - MapperService.MergeReason.MAPPING_RECOVERY, request.updateAllTypes()); - } + mapperService.merge(indexMetaData, MergeReason.MAPPING_RECOVERY, request.updateAllTypes()); } } currentState = applyRequest(currentState, request, indexMapperServices); @@ -313,7 +308,7 @@ if (existingMapper != null) { existingSource = existingMapper.mappingSource(); } - DocumentMapper mergedMapper = mapperService.merge(mappingType, mappingUpdateSource, MapperService.MergeReason.MAPPING_UPDATE, request.updateAllTypes()); + DocumentMapper mergedMapper = mapperService.merge(mappingType, mappingUpdateSource, MergeReason.MAPPING_UPDATE, request.updateAllTypes()); CompressedXContent updatedSource = mergedMapper.mappingSource(); if (existingSource != null) { diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java index 1e3f96fbe2c..74e97120285 100755 --- a/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MapperService.java @@ -28,6 +28,7 @@ import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.Version; import
org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.MappingMetaData; +import org.elasticsearch.common.Nullable; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Setting; @@ -51,6 +52,7 @@ import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Set; @@ -61,7 +63,6 @@ import java.util.stream.Collectors; import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; import static java.util.Collections.unmodifiableMap; -import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder; public class MapperService extends AbstractIndexComponent implements Closeable { @@ -191,153 +192,235 @@ public class MapperService extends AbstractIndexComponent implements Closeable { } } + /** + * Update mapping by only merging the metadata that is different between received and stored entries + */ public boolean updateMapping(IndexMetaData indexMetaData) throws IOException { assert indexMetaData.getIndex().equals(index()) : "index mismatch: expected " + index() + " but was " + indexMetaData.getIndex(); // go over and add the relevant mappings (or update them) + final Set<String> existingMappers = new HashSet<>(mappers.keySet()); + final Map<String, DocumentMapper> updatedEntries; + try { + // only update entries if needed + updatedEntries = internalMerge(indexMetaData, MergeReason.MAPPING_RECOVERY, true, true); + } catch (Exception e) { + logger.warn((org.apache.logging.log4j.util.Supplier<?>) () -> new ParameterizedMessage("[{}] failed to apply mappings", index()), e); + throw e; + } + boolean requireRefresh = false; - for (ObjectCursor<MappingMetaData> cursor : indexMetaData.getMappings().values()) { - MappingMetaData mappingMd = cursor.value; - String mappingType = mappingMd.type(); - CompressedXContent mappingSource = mappingMd.source(); + + for (DocumentMapper documentMapper : updatedEntries.values()) { + String mappingType = documentMapper.type(); + CompressedXContent incomingMappingSource = indexMetaData.mapping(mappingType).source(); + + String op = existingMappers.contains(mappingType) ? "updated" : "added"; + if (logger.isDebugEnabled() && incomingMappingSource.compressed().length < 512) { + logger.debug("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, incomingMappingSource.string()); + } else if (logger.isTraceEnabled()) { + logger.trace("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, incomingMappingSource.string()); + } else { + logger.debug("[{}] {} mapping [{}] (source suppressed due to length, use TRACE level if needed)", index(), op, mappingType); + } + // refresh mapping can happen when the parsing/merging of the mapping from the metadata doesn't result in the same // mapping, in this case, we send to the master to refresh its own version of the mappings (to conform with the // merge version of it, which it does when refreshing the mappings), and warn log it.
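To make the control flow above concrete: `updateMapping` now funnels the incoming `IndexMetaData` through `internalMerge(..., onlyUpdateIfNeeded = true)` and only iterates the entries that actually changed, returning a flag that tells the caller whether the master's copy of the mapping diverged. A hedged sketch of a caller; `sendRefreshMapping` is a hypothetical helper standing in for the real notification path back to the master:

```java
// Apply the mappings carried by new cluster state; updateMapping returns true
// when re-parsing/merging produced a mapping source different from the one the
// master sent, i.e. the master's stored mapping should be refreshed.
boolean requireRefresh = mapperService.updateMapping(newIndexMetaData);
if (requireRefresh) {
    // hypothetical helper: ask the master to refresh its stored mapping so it
    // matches the merged form on this node
    sendRefreshMapping(newIndexMetaData.getIndex().getName());
}
```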
- try { - DocumentMapper existingMapper = documentMapper(mappingType); + if (documentMapper(mappingType).mappingSource().equals(incomingMappingSource) == false) { + logger.debug("[{}] parsed mapping [{}], and got different sources\noriginal:\n{}\nparsed:\n{}", index(), mappingType, + incomingMappingSource, documentMapper(mappingType).mappingSource()); - if (existingMapper == null || mappingSource.equals(existingMapper.mappingSource()) == false) { - String op = existingMapper == null ? "adding" : "updating"; - if (logger.isDebugEnabled() && mappingSource.compressed().length < 512) { - logger.debug("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, mappingSource.string()); - } else if (logger.isTraceEnabled()) { - logger.trace("[{}] {} mapping [{}], source [{}]", index(), op, mappingType, mappingSource.string()); - } else { - logger.debug("[{}] {} mapping [{}] (source suppressed due to length, use TRACE level if needed)", index(), op, - mappingType); - } - merge(mappingType, mappingSource, MergeReason.MAPPING_RECOVERY, true); - if (!documentMapper(mappingType).mappingSource().equals(mappingSource)) { - logger.debug("[{}] parsed mapping [{}], and got different sources\noriginal:\n{}\nparsed:\n{}", index(), - mappingType, mappingSource, documentMapper(mappingType).mappingSource()); - requireRefresh = true; - } - } - } catch (Exception e) { - logger.warn( - (org.apache.logging.log4j.util.Supplier<?>) - () -> new ParameterizedMessage("[{}] failed to add mapping [{}], source [{}]", index(), mappingType, mappingSource), - e); - throw e; + requireRefresh = true; } } + return requireRefresh; } - //TODO: make this atomic - public void merge(Map<String, Map<String, Object>> mappings, boolean updateAllTypes) throws MapperParsingException { - // first, add the default mapping - if (mappings.containsKey(DEFAULT_MAPPING)) { - try { - this.merge(DEFAULT_MAPPING, new CompressedXContent(XContentFactory.jsonBuilder().map(mappings.get(DEFAULT_MAPPING)).string()), MergeReason.MAPPING_UPDATE, updateAllTypes); - } catch (Exception e) { - throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, DEFAULT_MAPPING, e.getMessage()); - } - } + public void merge(Map<String, Map<String, Object>> mappings, MergeReason reason, boolean updateAllTypes) { + Map<String, CompressedXContent> mappingSourcesCompressed = new LinkedHashMap<>(mappings.size()); for (Map.Entry<String, Map<String, Object>> entry : mappings.entrySet()) { - if (entry.getKey().equals(DEFAULT_MAPPING)) { - continue; - } try { - // apply the default here, its the first time we parse it - this.merge(entry.getKey(), new CompressedXContent(XContentFactory.jsonBuilder().map(entry.getValue()).string()), MergeReason.MAPPING_UPDATE, updateAllTypes); + mappingSourcesCompressed.put(entry.getKey(), new CompressedXContent(XContentFactory.jsonBuilder().map(entry.getValue()).string())); } catch (Exception e) { throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, entry.getKey(), e.getMessage()); } } + + internalMerge(mappingSourcesCompressed, reason, updateAllTypes); + } + + public void merge(IndexMetaData indexMetaData, MergeReason reason, boolean updateAllTypes) { + internalMerge(indexMetaData, reason, updateAllTypes, false); } public DocumentMapper merge(String type, CompressedXContent mappingSource, MergeReason reason, boolean updateAllTypes) { - if (DEFAULT_MAPPING.equals(type)) { + return internalMerge(Collections.singletonMap(type, mappingSource), reason, updateAllTypes).get(type); + } + + private synchronized Map<String, DocumentMapper> internalMerge(IndexMetaData indexMetaData, MergeReason reason, boolean updateAllTypes, + boolean onlyUpdateIfNeeded) { +
Map<String, CompressedXContent> map = new LinkedHashMap<>(); + for (ObjectCursor<MappingMetaData> cursor : indexMetaData.getMappings().values()) { + MappingMetaData mappingMetaData = cursor.value; + if (onlyUpdateIfNeeded) { + DocumentMapper existingMapper = documentMapper(mappingMetaData.type()); + if (existingMapper == null || mappingMetaData.source().equals(existingMapper.mappingSource()) == false) { + map.put(mappingMetaData.type(), mappingMetaData.source()); + } + } else { + map.put(mappingMetaData.type(), mappingMetaData.source()); + } + } + return internalMerge(map, reason, updateAllTypes); + } + + private synchronized Map<String, DocumentMapper> internalMerge(Map<String, CompressedXContent> mappings, MergeReason reason, boolean updateAllTypes) { + DocumentMapper defaultMapper = null; + String defaultMappingSource = null; + + if (mappings.containsKey(DEFAULT_MAPPING)) { // verify we can parse it // NOTE: never apply the default here - DocumentMapper mapper = documentParser.parse(type, mappingSource); - // still add it as a document mapper so we have it registered and, for example, persisted back into - // the cluster meta data if needed, or checked for existence - synchronized (this) { - mappers = newMapBuilder(mappers).put(type, mapper).map(); + try { + defaultMapper = documentParser.parse(DEFAULT_MAPPING, mappings.get(DEFAULT_MAPPING)); + } catch (Exception e) { + throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, DEFAULT_MAPPING, e.getMessage()); } try { - defaultMappingSource = mappingSource.string(); + defaultMappingSource = mappings.get(DEFAULT_MAPPING).string(); } catch (IOException e) { throw new ElasticsearchGenerationException("failed to un-compress", e); } - return mapper; + } + + final String defaultMappingSourceOrLastStored; + if (defaultMappingSource != null) { + defaultMappingSourceOrLastStored = defaultMappingSource; } else { - synchronized (this) { - final boolean applyDefault = - // the default was already applied if we are recovering - reason != MergeReason.MAPPING_RECOVERY - // only apply the default mapping if we don't have the type yet - && mappers.containsKey(type) == false; - DocumentMapper mergeWith = parse(type, mappingSource, applyDefault); - return merge(mergeWith, reason, updateAllTypes); + defaultMappingSourceOrLastStored = this.defaultMappingSource; + } + + List<DocumentMapper> documentMappers = new ArrayList<>(); + for (Map.Entry<String, CompressedXContent> entry : mappings.entrySet()) { + String type = entry.getKey(); + if (type.equals(DEFAULT_MAPPING)) { + continue; + } + + final boolean applyDefault = + // the default was already applied if we are recovering + reason != MergeReason.MAPPING_RECOVERY + // only apply the default mapping if we don't have the type yet + && mappers.containsKey(type) == false; + + try { + DocumentMapper documentMapper = documentParser.parse(type, entry.getValue(), applyDefault ?
defaultMappingSourceOrLastStored : null); + documentMappers.add(documentMapper); + } catch (Exception e) { + throw new MapperParsingException("Failed to parse mapping [{}]: {}", e, entry.getKey(), e.getMessage()); } } + + return internalMerge(defaultMapper, defaultMappingSource, documentMappers, reason, updateAllTypes); } - private synchronized DocumentMapper merge(DocumentMapper mapper, MergeReason reason, boolean updateAllTypes) { - if (mapper.type().length() == 0) { - throw new InvalidTypeNameException("mapping type name is empty"); - } - if (mapper.type().length() > 255) { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] is too long; limit is length 255 but was [" + mapper.type().length() + "]"); - } - if (mapper.type().charAt(0) == '_') { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] can't start with '_'"); - } - if (mapper.type().contains("#")) { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include '#' in it"); - } - if (mapper.type().contains(",")) { - throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include ',' in it"); - } - if (mapper.type().equals(mapper.parentFieldMapper().type())) { - throw new IllegalArgumentException("The [_parent.type] option can't point to the same type"); - } - if (typeNameStartsWithIllegalDot(mapper)) { - throw new IllegalArgumentException("mapping type name [" + mapper.type() + "] must not start with a '.'"); - } - - // 1. compute the merged DocumentMapper - DocumentMapper oldMapper = mappers.get(mapper.type()); - DocumentMapper newMapper; - if (oldMapper != null) { - newMapper = oldMapper.merge(mapper.mapping(), updateAllTypes); - } else { - newMapper = mapper; - } - - // 2. check basic sanity of the new mapping - List<ObjectMapper> objectMappers = new ArrayList<>(); - List<FieldMapper> fieldMappers = new ArrayList<>(); - Collections.addAll(fieldMappers, newMapper.mapping().metadataMappers); - MapperUtils.collect(newMapper.mapping().root(), objectMappers, fieldMappers); - checkFieldUniqueness(newMapper.type(), objectMappers, fieldMappers); - checkObjectsCompatibility(objectMappers, updateAllTypes); - - // 3.
update lookup data-structures - // this will in particular make sure that the merged fields are compatible with other types - FieldTypeLookup fieldTypes = this.fieldTypes.copyAndAddAll(newMapper.type(), fieldMappers, updateAllTypes); - + private synchronized Map<String, DocumentMapper> internalMerge(@Nullable DocumentMapper defaultMapper, @Nullable String defaultMappingSource, + List<DocumentMapper> documentMappers, MergeReason reason, boolean updateAllTypes) { boolean hasNested = this.hasNested; - Map<String, ObjectMapper> fullPathObjectMappers = new HashMap<>(this.fullPathObjectMappers); - for (ObjectMapper objectMapper : objectMappers) { - fullPathObjectMappers.put(objectMapper.fullPath(), objectMapper); - if (objectMapper.nested().isNested()) { - hasNested = true; - } + boolean allEnabled = this.allEnabled; + Map<String, ObjectMapper> fullPathObjectMappers = this.fullPathObjectMappers; + FieldTypeLookup fieldTypes = this.fieldTypes; + Set<String> parentTypes = this.parentTypes; + Map<String, DocumentMapper> mappers = new HashMap<>(this.mappers); + + Map<String, DocumentMapper> results = new LinkedHashMap<>(documentMappers.size() + 1); + + if (defaultMapper != null) { + assert defaultMapper.type().equals(DEFAULT_MAPPING); + mappers.put(DEFAULT_MAPPING, defaultMapper); + results.put(DEFAULT_MAPPING, defaultMapper); + } + + for (DocumentMapper mapper : documentMappers) { + // check naming + if (mapper.type().length() == 0) { + throw new InvalidTypeNameException("mapping type name is empty"); + } + if (mapper.type().length() > 255) { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] is too long; limit is length 255 but was [" + mapper.type().length() + "]"); + } + if (mapper.type().charAt(0) == '_') { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] can't start with '_'"); + } + if (mapper.type().contains("#")) { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include '#' in it"); + } + if (mapper.type().contains(",")) { + throw new InvalidTypeNameException("mapping type name [" + mapper.type() + "] should not include ',' in it"); + } + if (mapper.type().equals(mapper.parentFieldMapper().type())) { + throw new IllegalArgumentException("The [_parent.type] option can't point to the same type"); + } + if (typeNameStartsWithIllegalDot(mapper)) { + throw new IllegalArgumentException("mapping type name [" + mapper.type() + "] must not start with a '.'"); + } + + // compute the merged DocumentMapper + DocumentMapper oldMapper = mappers.get(mapper.type()); + DocumentMapper newMapper; + if (oldMapper != null) { + newMapper = oldMapper.merge(mapper.mapping(), updateAllTypes); + } else { + newMapper = mapper; + } + + // check basic sanity of the new mapping + List<ObjectMapper> objectMappers = new ArrayList<>(); + List<FieldMapper> fieldMappers = new ArrayList<>(); + Collections.addAll(fieldMappers, newMapper.mapping().metadataMappers); + MapperUtils.collect(newMapper.mapping().root(), objectMappers, fieldMappers); + checkFieldUniqueness(newMapper.type(), objectMappers, fieldMappers, fullPathObjectMappers, fieldTypes); + checkObjectsCompatibility(objectMappers, updateAllTypes, fullPathObjectMappers); + + // update lookup data-structures + // this will in particular make sure that the merged fields are compatible with other types + fieldTypes = fieldTypes.copyAndAddAll(newMapper.type(), fieldMappers, updateAllTypes); + + for (ObjectMapper objectMapper : objectMappers) { + if (fullPathObjectMappers == this.fullPathObjectMappers) { + fullPathObjectMappers = new HashMap<>(this.fullPathObjectMappers); + } + fullPathObjectMappers.put(objectMapper.fullPath(),
objectMapper); + + if (objectMapper.nested().isNested()) { + hasNested = true; + } + } + + if (reason == MergeReason.MAPPING_UPDATE) { + // this check will only be performed on the master node when there is + // a call to the update mapping API. For all other cases like + // the master node restoring mappings from disk or data nodes + // deserializing cluster state that was sent by the master node, + // this check will be skipped. + checkTotalFieldsLimit(objectMappers.size() + fieldMappers.size()); + } + + if (oldMapper == null && newMapper.parentFieldMapper().active()) { + if (parentTypes == this.parentTypes) { + parentTypes = new HashSet<>(this.parentTypes); + } + parentTypes.add(mapper.parentFieldMapper().type()); + } + + // this is only correct because types cannot be removed and we do not + // allow to disable an existing _all field + allEnabled |= mapper.allFieldMapper().enabled(); + + results.put(newMapper.type(), newMapper); + mappers.put(newMapper.type(), newMapper); } - fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers); if (reason == MergeReason.MAPPING_UPDATE) { // this check will only be performed on the master node when there is @@ -346,45 +429,46 @@ public class MapperService extends AbstractIndexComponent implements Closeable { // deserializing cluster state that was sent by the master node, // this check will be skipped. checkNestedFieldsLimit(fullPathObjectMappers); - checkTotalFieldsLimit(objectMappers.size() + fieldMappers.size()); checkDepthLimit(fullPathObjectMappers.keySet()); } - Set<String> parentTypes = this.parentTypes; - if (oldMapper == null && newMapper.parentFieldMapper().active()) { - parentTypes = new HashSet<>(parentTypes.size() + 1); - parentTypes.addAll(this.parentTypes); - parentTypes.add(mapper.parentFieldMapper().type()); - parentTypes = Collections.unmodifiableSet(parentTypes); - } - - Map<String, DocumentMapper> mappers = new HashMap<>(this.mappers); - mappers.put(newMapper.type(), newMapper); for (Map.Entry<String, DocumentMapper> entry : mappers.entrySet()) { if (entry.getKey().equals(DEFAULT_MAPPING)) { continue; } - DocumentMapper m = entry.getValue(); + DocumentMapper documentMapper = entry.getValue(); // apply changes to the field types back - m = m.updateFieldType(fieldTypes.fullNameToFieldType); - entry.setValue(m); + DocumentMapper updatedDocumentMapper = documentMapper.updateFieldType(fieldTypes.fullNameToFieldType); + if (updatedDocumentMapper != documentMapper) { + // update both mappers and result + entry.setValue(updatedDocumentMapper); + if (results.containsKey(updatedDocumentMapper.type())) { + results.put(updatedDocumentMapper.type(), updatedDocumentMapper); + } + } } - mappers = Collections.unmodifiableMap(mappers); - // 4.
commit the change + // make structures immutable + mappers = Collections.unmodifiableMap(mappers); + results = Collections.unmodifiableMap(results); + parentTypes = Collections.unmodifiableSet(parentTypes); + fullPathObjectMappers = Collections.unmodifiableMap(fullPathObjectMappers); + + // commit the change + if (defaultMappingSource != null) { + this.defaultMappingSource = defaultMappingSource; + } this.mappers = mappers; this.fieldTypes = fieldTypes; this.hasNested = hasNested; this.fullPathObjectMappers = fullPathObjectMappers; this.parentTypes = parentTypes; - // this is only correct because types cannot be removed and we do not - // allow to disable an existing _all field - this.allEnabled |= mapper.allFieldMapper().enabled(); + this.allEnabled = allEnabled; - assert assertSerialization(newMapper); assert assertMappersShareSameFieldType(); + assert results.values().stream().allMatch(this::assertSerialization); - return newMapper; + return results; } private boolean assertMappersShareSameFieldType() { @@ -421,8 +505,8 @@ public class MapperService extends AbstractIndexComponent implements Closeable { return true; } - private void checkFieldUniqueness(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers) { - assert Thread.holdsLock(this); + private static void checkFieldUniqueness(String type, Collection<ObjectMapper> objectMappers, Collection<FieldMapper> fieldMappers, + Map<String, ObjectMapper> fullPathObjectMappers, FieldTypeLookup fieldTypes) { // first check within mapping final Set<String> objectFullNames = new HashSet<>(); @@ -459,9 +543,8 @@ public class MapperService extends AbstractIndexComponent implements Closeable { } } - private void checkObjectsCompatibility(Collection<ObjectMapper> objectMappers, boolean updateAllTypes) { - assert Thread.holdsLock(this); - + private static void checkObjectsCompatibility(Collection<ObjectMapper> objectMappers, boolean updateAllTypes, + Map<String, ObjectMapper> fullPathObjectMappers) { for (ObjectMapper newObjectMapper : objectMappers) { ObjectMapper existingObjectMapper = fullPathObjectMappers.get(newObjectMapper.fullPath()); if (existingObjectMapper != null) { diff --git a/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java b/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java index d576f4d2ab5..cc45c97bf39 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java +++ b/core/src/main/java/org/elasticsearch/index/shard/LocalShardSnapshot.java @@ -26,8 +26,7 @@ import org.apache.lucene.store.IOContext; import org.apache.lucene.store.IndexOutput; import org.apache.lucene.store.Lock; import org.apache.lucene.store.NoLockFactory; -import org.elasticsearch.cluster.metadata.MappingMetaData; -import org.elasticsearch.common.collect.ImmutableOpenMap; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.index.Index; import org.elasticsearch.index.store.Store; @@ -123,8 +122,8 @@ final class LocalShardSnapshot implements Closeable { } } - ImmutableOpenMap<String, MappingMetaData> getMappings() { - return shard.indexSettings.getIndexMetaData().getMappings(); + IndexMetaData getIndexMetaData() { + return shard.indexSettings.getIndexMetaData(); } @Override diff --git a/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java b/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java index 2e28b4051e5..04c2113dea3 100644 --- a/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java +++ b/core/src/main/java/org/elasticsearch/index/shard/StoreRecovery.java @@ -104,12 +104,11 @@ final class StoreRecovery { if (indices.size() > 1) { throw
new IllegalArgumentException("can't add shards from more than one index"); } - for (ObjectObjectCursor mapping : shards.get(0).getMappings()) { + IndexMetaData indexMetaData = shards.get(0).getIndexMetaData(); + for (ObjectObjectCursor mapping : indexMetaData.getMappings()) { mappingUpdateConsumer.accept(mapping.key, mapping.value); } - for (ObjectObjectCursor mapping : shards.get(0).getMappings()) { - indexShard.mapperService().merge(mapping.key,mapping.value.source(), MapperService.MergeReason.MAPPING_RECOVERY, true); - } + indexShard.mapperService().merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true); return executeRecovery(indexShard, () -> { logger.debug("starting recovery from local shards {}", shards); try { diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesService.java b/core/src/main/java/org/elasticsearch/indices/IndicesService.java index a3361f6b2ed..f4c586abc21 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesService.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesService.java @@ -485,11 +485,7 @@ public class IndicesService extends AbstractLifecycleComponent final IndexService service = createIndexService("metadata verification", metaData, indicesQueryCache, indicesFieldDataCache, emptyList(), s -> {}); closeables.add(() -> service.close("metadata verification", false)); - for (ObjectCursor typeMapping : metaData.getMappings().values()) { - // don't apply the default mapping, it has been applied when the mapping was created - service.mapperService().merge(typeMapping.value.type(), typeMapping.value.source(), - MapperService.MergeReason.MAPPING_RECOVERY, true); - } + service.mapperService().merge(metaData, MapperService.MergeReason.MAPPING_RECOVERY, true); if (metaData.equals(metaDataUpdate) == false) { service.updateMetaData(metaDataUpdate); } diff --git a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java index 9d2e56f25bb..0219078fd31 100644 --- a/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java +++ b/core/src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexIT.java @@ -39,7 +39,6 @@ import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.IndexNotFoundException; -import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.query.RangeQueryBuilder; import org.elasticsearch.index.query.TermsQueryBuilder; import org.elasticsearch.test.ESIntegTestCase; @@ -277,15 +276,8 @@ public class CreateIndexIT extends ESIntegTestCase { .startObject("text") .field("type", "text") .endObject().endObject().endObject()); - try { - b.get(); - } catch (MapperParsingException e) { - StringBuilder messages = new StringBuilder(); - for (Exception rootCause: e.guessRootCauses()) { - messages.append(rootCause.getMessage()); - } - assertThat(messages.toString(), containsString("mapper [text] is used by multiple types")); - } + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> b.get()); + assertThat(e.getMessage(), containsString("mapper [text] is used by multiple types")); } public void testRestartIndexCreationAfterFullClusterRestart() throws Exception { diff --git a/core/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java 
b/core/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java index 22f06b9098d..c8607e0af31 100644 --- a/core/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java +++ b/core/src/test/java/org/elasticsearch/gateway/GatewayIndexStateIT.java @@ -58,6 +58,7 @@ import static org.elasticsearch.action.support.WriteRequest.RefreshPolicy.IMMEDI import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount; +import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.nullValue; @@ -491,7 +492,7 @@ public class GatewayIndexStateIT extends ESIntegTestCase { assertEquals(ex.getMessage(), "Failed to verify index " + metaData.getIndex()); assertNotNull(ex.getCause()); assertEquals(MapperParsingException.class, ex.getCause().getClass()); - assertEquals(ex.getCause().getMessage(), "analyzer [test] not found for field [field1]"); + assertThat(ex.getCause().getMessage(), containsString("analyzer [test] not found for field [field1]")); } public void testArchiveBrokenClusterSettings() throws Exception { diff --git a/core/src/test/java/org/elasticsearch/index/mapper/GeoShapeFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/GeoShapeFieldMapperTests.java index 572188d7a5d..5972a8ecee8 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/GeoShapeFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/GeoShapeFieldMapperTests.java @@ -22,27 +22,17 @@ import org.apache.lucene.spatial.prefix.PrefixTreeStrategy; import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy; import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree; import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree; -import org.elasticsearch.Version; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.geo.GeoUtils; import org.elasticsearch.common.geo.builders.ShapeBuilder; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentFactory; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.DocumentMapperParser; -import org.elasticsearch.index.mapper.FieldMapper; -import org.elasticsearch.index.mapper.GeoShapeFieldMapper; -import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESSingleNodeTestCase; import org.elasticsearch.test.InternalSettingsPlugin; -import org.elasticsearch.test.VersionUtils; import java.io.IOException; import java.util.Collection; -import static com.carrotsearch.randomizedtesting.RandomizedTest.getRandom; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.instanceOf; diff --git a/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java b/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java index b32339b2357..42e88015169 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/MapperServiceTests.java @@ -49,7 +49,7 @@ public class MapperServiceTests extends ESSingleNodeTestCase { String index = "test-index"; String type = ".test-type"; 
String field = "field"; - MapperParsingException e = expectThrows(MapperParsingException.class, () -> { + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> { client().admin().indices().prepareCreate(index) .addMapping(type, field, "type=text") .execute().actionGet(); @@ -62,7 +62,7 @@ public class MapperServiceTests extends ESSingleNodeTestCase { String field = "field"; String type = new String(new char[256]).replace("\0", "a"); - MapperParsingException e = expectThrows(MapperParsingException.class, () -> { + MapperException e = expectThrows(MapperException.class, () -> { client().admin().indices().prepareCreate(index) .addMapping(type, field, "type=text") .execute().actionGet(); @@ -175,14 +175,14 @@ public class MapperServiceTests extends ESSingleNodeTestCase { mappings.put(MapperService.DEFAULT_MAPPING, MapperService.parseMapping("{}")); MapperException e = expectThrows(MapperParsingException.class, - () -> mapperService.merge(mappings, false)); + () -> mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, false)); assertThat(e.getMessage(), startsWith("Failed to parse mapping [" + MapperService.DEFAULT_MAPPING + "]: ")); mappings.clear(); mappings.put("type1", MapperService.parseMapping("{}")); e = expectThrows( MapperParsingException.class, - () -> mapperService.merge(mappings, false)); + () -> mapperService.merge(mappings, MergeReason.MAPPING_UPDATE, false)); assertThat(e.getMessage(), startsWith("Failed to parse mapping [type1]: ")); } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingTests.java b/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingTests.java index 7aec1ecd0bb..a892dd719e8 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/UpdateMappingTests.java @@ -19,17 +19,11 @@ package org.elasticsearch.index.mapper; -import org.elasticsearch.Version; -import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.mapper.DocumentMapper; -import org.elasticsearch.index.mapper.FieldMapper; -import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.MapperService.MergeReason; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESSingleNodeTestCase; @@ -37,9 +31,7 @@ import org.elasticsearch.test.InternalSettingsPlugin; import java.io.IOException; import java.util.Collection; -import java.util.LinkedHashMap; -import static org.elasticsearch.test.StreamsUtils.copyToStringFromClasspath; import static org.hamcrest.CoreMatchers.containsString; import static org.hamcrest.CoreMatchers.equalTo; diff --git a/core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java b/core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java index a3ecc66c030..c3a33e67401 100644 --- a/core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java +++ b/core/src/test/java/org/elasticsearch/search/child/ChildQuerySearchIT.java @@ -111,14 +111,9 @@ public class ChildQuerySearchIT extends ESIntegTestCase { } public void testSelfReferentialIsForbidden() { - try { - 
prepareCreate("test").addMapping("type", "_parent", "type=type").get(); - fail("self referential should be forbidden"); - } catch (Exception e) { - Throwable cause = e.getCause(); - assertThat(cause, instanceOf(IllegalArgumentException.class)); - assertThat(cause.getMessage(), equalTo("The [_parent.type] option can't point to the same type")); - } + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> + prepareCreate("test").addMapping("type", "_parent", "type=type").get()); + assertThat(e.getMessage(), equalTo("The [_parent.type] option can't point to the same type")); } public void testMultiLevelChild() throws Exception { diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java index ec1e44344e5..e5a4fe18d91 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/PercolatorFieldMapperTests.java @@ -332,12 +332,9 @@ public class PercolatorFieldMapperTests extends ESSingleNodeTestCase { String percolatorMapper = XContentFactory.jsonBuilder().startObject().startObject(typeName) .startObject("properties").startObject(fieldName).field("type", "percolator").field("index", "no").endObject().endObject() .endObject().endObject().string(); - try { - mapperService.merge(typeName, new CompressedXContent(percolatorMapper), MapperService.MergeReason.MAPPING_UPDATE, true); - fail("MapperParsingException expected"); - } catch (MapperParsingException e) { - assertThat(e.getMessage(), equalTo("Mapping definition for [" + fieldName + "] has unsupported parameters: [index : no]")); - } + MapperParsingException e = expectThrows(MapperParsingException.class, () -> + mapperService.merge(typeName, new CompressedXContent(percolatorMapper), MapperService.MergeReason.MAPPING_UPDATE, true)); + assertThat(e.getMessage(), containsString("Mapping definition for [" + fieldName + "] has unsupported parameters: [index : no]")); } // multiple percolator fields are allowed in the mapping, but only one field can be used at index time. diff --git a/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java b/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java index 8398430f4e9..93021be95fe 100644 --- a/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/index/shard/IndexShardTestCase.java @@ -263,9 +263,7 @@ public abstract class IndexShardTestCase extends ESTestCase { try { IndexCache indexCache = new IndexCache(indexSettings, new DisabledQueryCache(indexSettings), null); MapperService mapperService = MapperTestUtils.newMapperService(createTempDir(), indexSettings.getSettings()); - for (ObjectObjectCursor typeMapping : indexMetaData.getMappings()) { - mapperService.merge(typeMapping.key, typeMapping.value.source(), MapperService.MergeReason.MAPPING_RECOVERY, true); - } + mapperService.merge(indexMetaData, MapperService.MergeReason.MAPPING_RECOVERY, true); SimilarityService similarityService = new SimilarityService(indexSettings, Collections.emptyMap()); final IndexEventListener indexEventListener = new IndexEventListener() { };