From 370ea01149b20b686f7e4f664e1f6633875a5c97 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Fri, 13 Jan 2017 14:42:27 +0100 Subject: [PATCH 01/28] gradle run --debug-jvm is explained twice We are already explaining how to debug remotely in `Debugging from an IDE` section. We can remove one. --- TESTING.asciidoc | 6 ------ 1 file changed, 6 deletions(-) diff --git a/TESTING.asciidoc b/TESTING.asciidoc index aa5431ed69b..13f8ca3ca71 100644 --- a/TESTING.asciidoc +++ b/TESTING.asciidoc @@ -25,12 +25,6 @@ run it using Gradle: gradle run ------------------------------------- -or to attach a remote debugger, run it as: - -------------------------------------- -gradle run --debug-jvm -------------------------------------- - === Test case filtering. - `tests.class` is a class-filtering shell-like glob pattern, From 812f6e30f577ae1fae8b4d6a356653b8eac110aa Mon Sep 17 00:00:00 2001 From: David Pilato Date: Fri, 13 Jan 2017 14:57:40 +0100 Subject: [PATCH 02/28] Add documentation on remote debugging For a node which is launched outside the context of Gradle. --- TESTING.asciidoc | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/TESTING.asciidoc b/TESTING.asciidoc index 13f8ca3ca71..83fe27a963e 100644 --- a/TESTING.asciidoc +++ b/TESTING.asciidoc @@ -474,7 +474,7 @@ Combined (Unit+Integration) coverage: mvn -Dtests.coverage verify jacoco:report --------------------------------------------------------------------------- -== Debugging from an IDE +== Launching and debugging from an IDE If you want to run elasticsearch from your IDE, the `gradle run` task supports a remote debugging option: @@ -483,6 +483,23 @@ supports a remote debugging option: gradle run --debug-jvm --------------------------------------------------------------------------- +== Debugging remotely from an IDE + +If you want to run elasticsearch and be able to remotely attach the process +for debugging purposes from your IDE, you need to add the following line in +`config/jvm.options`: + +--------------------------------------------------------------------------- +-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n +--------------------------------------------------------------------------- + +Then start elasticsearch with `bin/elasticsearch` as usual. + +If you are using IntelliJ, create a new Run/Debug configuration, choose `Remote` +and define the same port you defined in `config/jvm.options`. Then start the +remote debug session from IntelliJ. 
+
+
 == Building with extra plugins
 Additional plugins may be built alongside elasticsearch, where their
 dependency on elasticsearch will be substituted with the local elasticsearch

From 4019cbb2222d7026b88540ee63917e4d968a42a9 Mon Sep 17 00:00:00 2001
From: David Pilato
Date: Mon, 16 Jan 2017 10:26:12 +0100
Subject: [PATCH 03/28] Update doc after review

---
 TESTING.asciidoc | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/TESTING.asciidoc b/TESTING.asciidoc
index 83fe27a963e..a1a01a8f231 100644
--- a/TESTING.asciidoc
+++ b/TESTING.asciidoc
@@ -485,20 +485,14 @@ gradle run --debug-jvm
 
 == Debugging remotely from an IDE
 
-If you want to run elasticsearch and be able to remotely attach the process
-for debugging purposes from your IDE, you need to add the following line in
-`config/jvm.options`:
+If you want to run Elasticsearch and be able to remotely attach the process
+for debugging purposes from your IDE, you can start Elasticsearch using `ES_JAVA_OPTS`:
 
 ---------------------------------------------------------------------------
--Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n
+ES_JAVA_OPTS="-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=y" ./bin/elasticsearch
 ---------------------------------------------------------------------------
 
-Then start elasticsearch with `bin/elasticsearch` as usual.
-
-If you are using IntelliJ, create a new Run/Debug configuration, choose `Remote`
-and define the same port you defined in `config/jvm.options`. Then start the
-remote debug session from IntelliJ.
-
+Read your IDE documentation for how to attach a debugger to a JVM process.
 
 == Building with extra plugins
 Additional plugins may be built alongside elasticsearch, where their

From 9ae5410ea641bdcc077bbd9f62004f9167bbc32c Mon Sep 17 00:00:00 2001
From: Jason Tedor
Date: Mon, 16 Jan 2017 07:30:21 -0500
Subject: [PATCH 04/28] Do not configure a logger named level

When logger.level is set, we end up configuring a logger named "level"
because we look for all settings of the form "logger\..+" as configuring a
logger. Yet, logger.level is special and is meant to only configure the
default logging level. This commit causes us to avoid configuring a logger
named level.

Relates #22624
---
 .../common/logging/LogConfigurator.java       | 17 +++++++---
 .../logging/EvilLoggerConfigurationTests.java | 31 +++++++++++++++++++
 .../common/logging/minimal/log4j2.properties  |  7 +++++
 3 files changed, 50 insertions(+), 5 deletions(-)
 create mode 100644 qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/minimal/log4j2.properties

diff --git a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java
index 428e3ce7964..237755ce2f1 100644
--- a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java
+++ b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java
@@ -122,20 +122,27 @@ public class LogConfigurator {
         Configurator.initialize(builder.build());
     }
 
-    private static void configureLoggerLevels(Settings settings) {
+    /**
+     * Configures the logging levels for loggers configured in the specified settings.
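+     * For example, the setting {@code logger.org.elasticsearch.transport} configures the level of the logger named
+     * {@code org.elasticsearch.transport}; {@code logger.level} is the exception, adjusting only the default (root)
+     * level rather than configuring a logger named "level".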
+ * + * @param settings the settings from which logger levels will be extracted + */ + private static void configureLoggerLevels(final Settings settings) { if (ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.exists(settings)) { final Level level = ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.get(settings); Loggers.setLevel(ESLoggerFactory.getRootLogger(), level); } final Map levels = settings.filter(ESLoggerFactory.LOG_LEVEL_SETTING::match).getAsMap(); - for (String key : levels.keySet()) { - final Level level = ESLoggerFactory.LOG_LEVEL_SETTING.getConcreteSetting(key).get(settings); - Loggers.setLevel(ESLoggerFactory.getLogger(key.substring("logger.".length())), level); + for (final String key : levels.keySet()) { + // do not set a log level for a logger named level (from the default log setting) + if (!key.equals(ESLoggerFactory.LOG_DEFAULT_LEVEL_SETTING.getKey())) { + final Level level = ESLoggerFactory.LOG_LEVEL_SETTING.getConcreteSetting(key).get(settings); + Loggers.setLevel(ESLoggerFactory.getLogger(key.substring("logger.".length())), level); + } } } - @SuppressForbidden(reason = "sets system property for logging configuration") private static void setLogConfigurationSystemProperty(final Path logsPath, final Settings settings) { System.setProperty("es.logs", logsPath.resolve(ClusterName.CLUSTER_NAME_SETTING.get(settings).value()).toString()); diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerConfigurationTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerConfigurationTests.java index 7ee2120c36f..9cd6ec630ad 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerConfigurationTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerConfigurationTests.java @@ -34,9 +34,11 @@ import org.elasticsearch.test.ESTestCase; import java.io.IOException; import java.nio.file.Path; +import java.util.Map; import static org.hamcrest.CoreMatchers.containsString; import static org.hamcrest.CoreMatchers.equalTo; +import static org.hamcrest.Matchers.hasKey; import static org.hamcrest.Matchers.hasToString; import static org.hamcrest.Matchers.notNullValue; @@ -151,4 +153,33 @@ public class EvilLoggerConfigurationTests extends ESTestCase { assertThat(e, hasToString(containsString("no log4j2.properties found; tried"))); } + public void testLoggingLevelsFromSettings() throws IOException, UserException { + final Level rootLevel = randomFrom(Level.TRACE, Level.DEBUG, Level.INFO, Level.WARN, Level.ERROR); + final Level fooLevel = randomFrom(Level.TRACE, Level.DEBUG, Level.INFO, Level.WARN, Level.ERROR); + final Level barLevel = randomFrom(Level.TRACE, Level.DEBUG, Level.INFO, Level.WARN, Level.ERROR); + final Path configDir = getDataPath("minimal"); + final Settings settings = Settings.builder() + .put(Environment.PATH_CONF_SETTING.getKey(), configDir.toAbsolutePath()) + .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) + .put("logger.level", rootLevel.name()) + .put("logger.foo", fooLevel.name()) + .put("logger.bar", barLevel.name()) + .build(); + final Environment environment = new Environment(settings); + LogConfigurator.configure(environment); + + final LoggerContext ctx = (LoggerContext) LogManager.getContext(false); + final Configuration config = ctx.getConfiguration(); + final Map loggerConfigs = config.getLoggers(); + assertThat(loggerConfigs.size(), equalTo(3)); + assertThat(loggerConfigs, hasKey("")); + assertThat(loggerConfigs.get("").getLevel(), 
equalTo(rootLevel)); + assertThat(loggerConfigs, hasKey("foo")); + assertThat(loggerConfigs.get("foo").getLevel(), equalTo(fooLevel)); + assertThat(loggerConfigs, hasKey("bar")); + assertThat(loggerConfigs.get("bar").getLevel(), equalTo(barLevel)); + + assertThat(ctx.getLogger(randomAsciiOfLength(16)).getLevel(), equalTo(rootLevel)); + } + } diff --git a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/minimal/log4j2.properties b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/minimal/log4j2.properties new file mode 100644 index 00000000000..f245dde979c --- /dev/null +++ b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/minimal/log4j2.properties @@ -0,0 +1,7 @@ +appender.console.type = Console +appender.console.name = console +appender.console.layout.type = PatternLayout +appender.console.layout.pattern = %m%n + +rootLogger.level = info +rootLogger.appenderRef.console.ref = console From fc3280b3cf6a958bf8200ce67aef92687a1bc725 Mon Sep 17 00:00:00 2001 From: Jason Tedor Date: Mon, 16 Jan 2017 07:39:37 -0500 Subject: [PATCH 05/28] Expose logs base path For certain situations, end-users need the base path for Elasticsearch logs. Exposing this as a property is better than hard-coding the path into the logging configuration file as otherwise the logging configuration file could easily diverge from the Elasticsearch configuration file. Additionally, Elasticsearch will only have permissions to write to the log directory configured in the Elasticsearch configuration file. This commit adds a property that exposes this base path. One use-case for this is configuring a rollover strategy to retain logs for a certain period of time. As such, we add an example of this to the documentation. Additionally, we expose the property es.logs.cluster_name as this is used as the name of the log files in the default configuration. Finally, we expose es.logs.node_name in cases where node.name is explicitly set in case users want to include the node name as part of the name of the log files. 
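
In code, the change amounts to setting these system properties while logging is
being configured; a minimal sketch of the idea (the complete version is
LogConfigurator#setLogConfigurationSystemProperty in the diff below, and the
property and setting names are taken from it):

---------------------------------------------------------------------------
// Expose the log directory, cluster name, and (optional) node name as system
// properties so that log4j2.properties can reference them via ${sys:...}.
System.setProperty("es.logs.base_path", logsPath.toString());
System.setProperty("es.logs.cluster_name", ClusterName.CLUSTER_NAME_SETTING.get(settings).value());
if (Node.NODE_NAME_SETTING.exists(settings)) {
    System.setProperty("es.logs.node_name", Node.NODE_NAME_SETTING.get(settings));
}
---------------------------------------------------------------------------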
Relates #22625 --- .../common/logging/LogConfigurator.java | 28 +++++++++- .../main/resources/config/log4j2.properties | 16 +++--- docs/reference/setup/configuration.asciidoc | 47 +++++++++++++--- .../common/logging/EvilLoggerTests.java | 55 ++++++++++++++++--- .../common/logging/config/log4j2.properties | 4 +- .../logging/deprecation/log4j2.properties | 4 +- .../logging/location_info/log4j2.properties | 2 +- .../common/logging/prefix/log4j2.properties | 2 +- 8 files changed, 127 insertions(+), 31 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java index 237755ce2f1..5e20b6c37e3 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java +++ b/core/src/main/java/org/elasticsearch/common/logging/LogConfigurator.java @@ -36,6 +36,7 @@ import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.node.Node; import java.io.IOException; import java.nio.file.FileVisitOption; @@ -97,7 +98,7 @@ public class LogConfigurator { final Set options = EnumSet.of(FileVisitOption.FOLLOW_LINKS); Files.walkFileTree(configsPath, options, Integer.MAX_VALUE, new SimpleFileVisitor() { @Override - public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { + public FileVisitResult visitFile(final Path file, final BasicFileAttributes attrs) throws IOException { if (file.getFileName().toString().equals("log4j2.properties")) { configurations.add((PropertiesConfiguration) factory.getConfiguration(context, file.toString(), file.toUri())); } @@ -143,9 +144,32 @@ public class LogConfigurator { } } + /** + * Set system properties that can be used in configuration files to specify paths and file patterns for log files. We expose three + * properties here: + *
+ * <ul>
+ * <li>
+ * {@code es.logs.base_path} the base path containing the log files
+ * </li>
+ * <li>
+ * {@code es.logs.cluster_name} the cluster name, used as the prefix of log filenames in the default configuration
+ * </li>
+ * <li>
+ * {@code es.logs.node_name} the node name, can be used as part of log filenames (only exposed if {@link Node#NODE_NAME_SETTING} is
+ * explicitly set)
+ * </li>
+ * </ul>
+ * + * @param logsPath the path to the log files + * @param settings the settings to extract the cluster and node names + */ @SuppressForbidden(reason = "sets system property for logging configuration") private static void setLogConfigurationSystemProperty(final Path logsPath, final Settings settings) { - System.setProperty("es.logs", logsPath.resolve(ClusterName.CLUSTER_NAME_SETTING.get(settings).value()).toString()); + System.setProperty("es.logs.base_path", logsPath.toString()); + System.setProperty("es.logs.cluster_name", ClusterName.CLUSTER_NAME_SETTING.get(settings).value()); + if (Node.NODE_NAME_SETTING.exists(settings)) { + System.setProperty("es.logs.node_name", Node.NODE_NAME_SETTING.get(settings)); + } } } diff --git a/distribution/src/main/resources/config/log4j2.properties b/distribution/src/main/resources/config/log4j2.properties index 3702afff9f3..f344d0aee55 100644 --- a/distribution/src/main/resources/config/log4j2.properties +++ b/distribution/src/main/resources/config/log4j2.properties @@ -11,10 +11,10 @@ appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n appender.rolling.type = RollingFile appender.rolling.name = rolling -appender.rolling.fileName = ${sys:es.logs}.log +appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log appender.rolling.layout.type = PatternLayout appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n -appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log +appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log appender.rolling.policies.type = Policies appender.rolling.policies.time.type = TimeBasedTriggeringPolicy appender.rolling.policies.time.interval = 1 @@ -26,10 +26,10 @@ rootLogger.appenderRef.rolling.ref = rolling appender.deprecation_rolling.type = RollingFile appender.deprecation_rolling.name = deprecation_rolling -appender.deprecation_rolling.fileName = ${sys:es.logs}_deprecation.log +appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log appender.deprecation_rolling.layout.type = PatternLayout appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n -appender.deprecation_rolling.filePattern = ${sys:es.logs}_deprecation-%i.log.gz +appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz appender.deprecation_rolling.policies.type = Policies appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy appender.deprecation_rolling.policies.size.size = 1GB @@ -43,10 +43,10 @@ logger.deprecation.additivity = false appender.index_search_slowlog_rolling.type = RollingFile appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling -appender.index_search_slowlog_rolling.fileName = ${sys:es.logs}_index_search_slowlog.log +appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log appender.index_search_slowlog_rolling.layout.type = PatternLayout appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n -appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs}_index_search_slowlog-%d{yyyy-MM-dd}.log +appender.index_search_slowlog_rolling.filePattern = 
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
 appender.index_search_slowlog_rolling.policies.type = Policies
 appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
 appender.index_search_slowlog_rolling.policies.time.interval = 1
@@ -59,10 +59,10 @@ logger.index_search_slowlog_rolling.additivity = false
 
 appender.index_indexing_slowlog_rolling.type = RollingFile
 appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
-appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs}_index_indexing_slowlog.log
+appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
 appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
 appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
-appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
+appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
 appender.index_indexing_slowlog_rolling.policies.type = Policies
 appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
 appender.index_indexing_slowlog_rolling.policies.time.interval = 1

diff --git a/docs/reference/setup/configuration.asciidoc b/docs/reference/setup/configuration.asciidoc
index ac8855bf3d8..706398f4c39 100644
--- a/docs/reference/setup/configuration.asciidoc
+++ b/docs/reference/setup/configuration.asciidoc
@@ -112,22 +112,30 @@ command line with `es.node.name` or in the config file with `node.name`.
 
 Elasticsearch uses http://logging.apache.org/log4j/2.x/[Log4j 2] for
 logging. Log4j 2 can be configured using the log4j2.properties
-file. Elasticsearch exposes a single property `${sys:es.logs}` that can be
-referenced in the configuration file to determine the location of the log files;
-this will resolve to a prefix for the Elasticsearch log file at runtime.
+file. Elasticsearch exposes three properties, `${sys:es.logs.base_path}`,
+`${sys:es.logs.cluster_name}`, and `${sys:es.logs.node_name}` (if the node name
+is explicitly set via `node.name`) that can be referenced in the configuration
+file to determine the location of the log files. The property
+`${sys:es.logs.base_path}` will resolve to the log directory,
+`${sys:es.logs.cluster_name}` will resolve to the cluster name (used as the
+prefix of log filenames in the default configuration), and
+`${sys:es.logs.node_name}` will resolve to the node name (if the node name is
+explicitly set).
 
 For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and
-your cluster is named `production` then `${sys:es.logs}` will resolve to
-`/var/log/elasticsearch/production`.
+your cluster is named `production` then `${sys:es.logs.base_path}` will resolve
+to `/var/log/elasticsearch` and
+`${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log`
+will resolve to `/var/log/elasticsearch/production.log`.
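
The same resolution can be reproduced programmatically from the exposed system
properties; a minimal sketch (an editorial illustration, not part of this
change, and assuming logging has already been initialized so the properties
are set):

---------------------------------------------------------------------------
// Rebuild the default log file path exactly as the ${sys:...} lookups in
// log4j2.properties resolve it, e.g. /var/log/elasticsearch/production.log.
final String logFile =
    System.getProperty("es.logs.base_path")
        + System.getProperty("file.separator")
        + System.getProperty("es.logs.cluster_name")
        + ".log";
---------------------------------------------------------------------------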
[source,properties] -------------------------------------------------- appender.rolling.type = RollingFile <1> appender.rolling.name = rolling -appender.rolling.fileName = ${sys:es.logs}.log <2> +appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log <2> appender.rolling.layout.type = PatternLayout appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %.10000m%n -appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log <3> +appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log <3> appender.rolling.policies.type = Policies appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4> appender.rolling.policies.time.interval = 1 <5> @@ -145,6 +153,31 @@ appender.rolling.policies.time.modulate = true <6> If you append `.gz` or `.zip` to `appender.rolling.filePattern`, then the logs will be compressed as they are rolled. +If you want to retain log files for a specified period of time, you can use a +rollover strategy with a delete action. + +[source,properties] +-------------------------------------------------- +appender.rolling.strategy.type = DefaultRolloverStrategy <1> +appender.rolling.strategy.action.type = Delete <2> +appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path} <3> +appender.rolling.strategy.action.condition.type = IfLastModified <4> +appender.rolling.strategy.action.condition.age = 7D <5> +appender.rolling.strategy.action.PathConditions.type = IfFileName <6> +appender.rolling.strategy.action.PathConditions.glob = ${sys:es.logs.cluster_name}-* <7> +-------------------------------------------------- + +<1> Configure the `DefaultRolloverStrategy` +<2> Configure the `Delete` action for handling rollovers +<3> The base path to the Elasticsearch logs +<4> The condition to apply when handling rollovers +<5> Retain logs for seven days +<6> Only delete files older than seven days if they match the specified glob +<7> Delete files from the base path matching the glob + `${sys:es.logs.cluster_name}-*`; this is the glob that log files are rolled + to; this is needed to only delete the rolled Elasticsearch logs but not also + delete the deprecation and slow logs + Multiple configuration files can be loaded (in which case they will get merged) as long as they are named `log4j2.properties` and have the Elasticsearch config directory as an ancestor; this is useful for plugins that expose additional diff --git a/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java b/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java index bb4e32417be..f02ce6031c3 100644 --- a/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java +++ b/qa/evil-tests/src/test/java/org/elasticsearch/common/logging/EvilLoggerTests.java @@ -29,9 +29,11 @@ import org.apache.logging.log4j.core.appender.CountingNoOpAppender; import org.apache.logging.log4j.core.config.Configurator; import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.cli.UserException; +import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; +import org.elasticsearch.node.Node; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.hamcrest.RegexMatcher; @@ -70,7 +72,11 @@ public class EvilLoggerTests extends ESTestCase { testLogger.info("This is an 
info message"); testLogger.debug("This is a debug message"); testLogger.trace("This is a trace message"); - final String path = System.getProperty("es.logs") + ".log"; + final String path = + System.getProperty("es.logs.base_path") + + System.getProperty("file.separator") + + System.getProperty("es.logs.cluster_name") + + ".log"; final List events = Files.readAllLines(PathUtils.get(path)); assertThat(events.size(), equalTo(5)); final String location = "org.elasticsearch.common.logging.EvilLoggerTests.testLocationInfoTest"; @@ -88,7 +94,11 @@ public class EvilLoggerTests extends ESTestCase { final DeprecationLogger deprecationLogger = new DeprecationLogger(ESLoggerFactory.getLogger("deprecation")); deprecationLogger.deprecated("This is a deprecation message"); - final String deprecationPath = System.getProperty("es.logs") + "_deprecation.log"; + final String deprecationPath = + System.getProperty("es.logs.base_path") + + System.getProperty("file.separator") + + System.getProperty("es.logs.cluster_name") + + "_deprecation.log"; final List deprecationEvents = Files.readAllLines(PathUtils.get(deprecationPath)); assertThat(deprecationEvents.size(), equalTo(1)); assertLogLine( @@ -123,7 +133,11 @@ public class EvilLoggerTests extends ESTestCase { final Exception e = new Exception("exception"); logger.info(new ParameterizedMessage("{}", "test"), e); - final String path = System.getProperty("es.logs") + ".log"; + final String path = + System.getProperty("es.logs.base_path") + + System.getProperty("file.separator") + + System.getProperty("es.logs.cluster_name") + + ".log"; final List events = Files.readAllLines(PathUtils.get(path)); final StringWriter sw = new StringWriter(); @@ -141,14 +155,39 @@ public class EvilLoggerTests extends ESTestCase { } } + public void testProperties() throws IOException, UserException { + final Settings.Builder builder = Settings.builder().put("cluster.name", randomAsciiOfLength(16)); + if (randomBoolean()) { + builder.put("node.name", randomAsciiOfLength(16)); + } + final Settings settings = builder.build(); + setupLogging("minimal", settings); + + assertNotNull(System.getProperty("es.logs.base_path")); + + assertThat(System.getProperty("es.logs.cluster_name"), equalTo(ClusterName.CLUSTER_NAME_SETTING.get(settings).value())); + if (Node.NODE_NAME_SETTING.exists(settings)) { + assertThat(System.getProperty("es.logs.node_name"), equalTo(Node.NODE_NAME_SETTING.get(settings))); + } else { + assertNull(System.getProperty("es.logs.node_name")); + } + } + private void setupLogging(final String config) throws IOException, UserException { + setupLogging(config, Settings.EMPTY); + } + + private void setupLogging(final String config, final Settings settings) throws IOException, UserException { + assert !Environment.PATH_CONF_SETTING.exists(settings); + assert !Environment.PATH_HOME_SETTING.exists(settings); final Path configDir = getDataPath(config); // need to set custom path.conf so we can use a custom log4j2.properties file for the test - final Settings settings = Settings.builder() - .put(Environment.PATH_CONF_SETTING.getKey(), configDir.toAbsolutePath()) - .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) - .build(); - final Environment environment = new Environment(settings); + final Settings mergedSettings = Settings.builder() + .put(settings) + .put(Environment.PATH_CONF_SETTING.getKey(), configDir.toAbsolutePath()) + .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) + .build(); + final Environment environment = new 
Environment(mergedSettings); LogConfigurator.configure(environment); } diff --git a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/config/log4j2.properties b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/config/log4j2.properties index 6cfb6a4ec37..8b956421458 100644 --- a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/config/log4j2.properties +++ b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/config/log4j2.properties @@ -5,7 +5,7 @@ appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n appender.file.type = File appender.file.name = file -appender.file.fileName = ${sys:es.logs}.log +appender.file.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log appender.file.layout.type = PatternLayout appender.file.layout.pattern = [%p][%l] %marker%m%n @@ -21,7 +21,7 @@ logger.test.additivity = false appender.deprecation_file.type = File appender.deprecation_file.name = deprecation_file -appender.deprecation_file.fileName = ${sys:es.logs}_deprecation.log +appender.deprecation_file.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log appender.deprecation_file.layout.type = PatternLayout appender.deprecation_file.layout.pattern = [%p][%l] %marker%m%n diff --git a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/deprecation/log4j2.properties b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/deprecation/log4j2.properties index 3f4958adee8..fd7af2ce731 100644 --- a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/deprecation/log4j2.properties +++ b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/deprecation/log4j2.properties @@ -5,7 +5,7 @@ appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n appender.file.type = File appender.file.name = file -appender.file.fileName = ${sys:es.logs}.log +appender.file.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log appender.file.layout.type = PatternLayout appender.file.layout.pattern = [%p][%l] %marker%m%n @@ -15,7 +15,7 @@ rootLogger.appenderRef.file.ref = file appender.deprecation_file.type = File appender.deprecation_file.name = deprecation_file -appender.deprecation_file.fileName = ${sys:es.logs}_deprecation.log +appender.deprecation_file.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log appender.deprecation_file.layout.type = PatternLayout appender.deprecation_file.layout.pattern = [%p][%l] %marker%m%n diff --git a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/location_info/log4j2.properties b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/location_info/log4j2.properties index edb143d5fc5..5fa85b5d156 100644 --- a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/location_info/log4j2.properties +++ b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/location_info/log4j2.properties @@ -5,7 +5,7 @@ appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n appender.file.type = File appender.file.name = file -appender.file.fileName = ${sys:es.logs}.log +appender.file.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log appender.file.layout.type = PatternLayout appender.file.layout.pattern = [%p][%l] %marker%m%n diff --git 
a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/prefix/log4j2.properties b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/prefix/log4j2.properties index 1f18b38d91e..5dfa369a3bd 100644 --- a/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/prefix/log4j2.properties +++ b/qa/evil-tests/src/test/resources/org/elasticsearch/common/logging/prefix/log4j2.properties @@ -5,7 +5,7 @@ appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%m%n appender.file.type = File appender.file.name = file -appender.file.fileName = ${sys:es.logs}.log +appender.file.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log appender.file.layout.type = PatternLayout appender.file.layout.pattern = %marker%m%n From 884302dcaab9b7a4c962f8b97de1fc06f7003d27 Mon Sep 17 00:00:00 2001 From: javanna Date: Thu, 12 Jan 2017 17:36:38 +0100 Subject: [PATCH 06/28] Expose CheckedFunction --- .../elasticsearch/common/CheckedFunction.java | 30 +++++++++++++++++++ .../index/query/QueryShardContext.java | 8 ++--- 2 files changed, 32 insertions(+), 6 deletions(-) create mode 100644 core/src/main/java/org/elasticsearch/common/CheckedFunction.java diff --git a/core/src/main/java/org/elasticsearch/common/CheckedFunction.java b/core/src/main/java/org/elasticsearch/common/CheckedFunction.java new file mode 100644 index 00000000000..4a2d222db0b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/common/CheckedFunction.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common; + +import java.util.function.Function; + +/** + * A {@link Function}-like interface which allows throwing checked exceptions. 
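+ * For example, a parser from bytes to a query can be expressed as a {@code CheckedFunction} that declares
+ * {@code IOException}, sparing callers from wrapping checked exceptions inside lambdas; the alias filter
+ * parsing below uses it exactly this way.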
+ */ +@FunctionalInterface +public interface CheckedFunction { + R apply(T t) throws E; +} diff --git a/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java b/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java index e075368a2b4..93881306f08 100644 --- a/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java +++ b/core/src/main/java/org/elasticsearch/index/query/QueryShardContext.java @@ -29,6 +29,7 @@ import org.apache.lucene.search.similarities.Similarity; import org.apache.lucene.util.SetOnce; import org.elasticsearch.Version; import org.elasticsearch.client.Client; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesReference; @@ -308,12 +309,7 @@ public class QueryShardContext extends QueryRewriteContext { }); } - @FunctionalInterface - private interface CheckedFunction { - R apply(T t) throws IOException; - } - - private ParsedQuery toQuery(QueryBuilder queryBuilder, CheckedFunction filterOrQuery) { + private ParsedQuery toQuery(QueryBuilder queryBuilder, CheckedFunction filterOrQuery) { reset(); try { QueryBuilder rewriteQuery = QueryBuilder.rewriteQuery(queryBuilder, this); From bc22afcb2f5cd6b109bd87c840b7d4ecc6225d5e Mon Sep 17 00:00:00 2001 From: javanna Date: Thu, 12 Jan 2017 17:38:28 +0100 Subject: [PATCH 07/28] [TEST] replace SizeFunction with Function --- .../threadpool/ScalingThreadPoolTests.java | 20 +++---------------- 1 file changed, 3 insertions(+), 17 deletions(-) diff --git a/core/src/test/java/org/elasticsearch/threadpool/ScalingThreadPoolTests.java b/core/src/test/java/org/elasticsearch/threadpool/ScalingThreadPoolTests.java index d065abb884c..5e7227052d3 100644 --- a/core/src/test/java/org/elasticsearch/threadpool/ScalingThreadPoolTests.java +++ b/core/src/test/java/org/elasticsearch/threadpool/ScalingThreadPoolTests.java @@ -29,10 +29,10 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.Executor; import java.util.concurrent.TimeUnit; import java.util.function.BiConsumer; +import java.util.function.Function; import static org.hamcrest.CoreMatchers.instanceOf; import static org.hamcrest.Matchers.equalTo; -import static org.hamcrest.Matchers.hasToString; public class ScalingThreadPoolTests extends ESThreadPoolTestCase { @@ -95,13 +95,8 @@ public class ScalingThreadPoolTests extends ESThreadPoolTestCase { }); } - @FunctionalInterface - private interface SizeFunction { - int size(int numberOfProcessors); - } - private int expectedSize(final String threadPoolName, final int numberOfProcessors) { - final Map sizes = new HashMap<>(); + final Map> sizes = new HashMap<>(); sizes.put(ThreadPool.Names.GENERIC, n -> ThreadPool.boundedBy(4 * n, 128, 512)); sizes.put(ThreadPool.Names.MANAGEMENT, n -> 5); sizes.put(ThreadPool.Names.FLUSH, ThreadPool::halfNumberOfProcessorsMaxFive); @@ -110,7 +105,7 @@ public class ScalingThreadPoolTests extends ESThreadPoolTestCase { sizes.put(ThreadPool.Names.SNAPSHOT, ThreadPool::halfNumberOfProcessorsMaxFive); sizes.put(ThreadPool.Names.FETCH_SHARD_STARTED, ThreadPool::twiceNumberOfProcessors); sizes.put(ThreadPool.Names.FETCH_SHARD_STORE, ThreadPool::twiceNumberOfProcessors); - return sizes.get(threadPoolName).size(numberOfProcessors); + return sizes.get(threadPoolName).apply(numberOfProcessors); } public void testScalingThreadPoolIsBounded() throws InterruptedException { @@ -198,13 +193,4 @@ public class ScalingThreadPoolTests 
extends ESThreadPoolTestCase { terminateThreadPoolIfNeeded(threadPool); } } - - private static Settings settings(final String setting, final int value) { - return settings(setting, Integer.toString(value)); - } - - private static Settings settings(final String setting, final String value) { - return Settings.builder().put(setting, value).build(); - } - } From a8a13bb46fdeb64f496f4956ecb07b815d9a0649 Mon Sep 17 00:00:00 2001 From: javanna Date: Thu, 12 Jan 2017 17:57:27 +0100 Subject: [PATCH 08/28] replace custom functional interface with CheckedFunction in percolate module --- .../percolator/PercolateQuery.java | 29 +++++++------------ .../PercolatorHighlightSubFetchPhase.java | 2 +- .../percolator/CandidateQueryTests.java | 5 ++-- 3 files changed, 15 insertions(+), 21 deletions(-) diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java index 40218e50a4f..f5978a2dbd1 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQuery.java @@ -32,6 +32,7 @@ import org.apache.lucene.search.TwoPhaseIterator; import org.apache.lucene.search.Weight; import org.apache.lucene.util.Accountable; import org.apache.lucene.util.Bits; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.lucene.Lucene; @@ -90,8 +91,8 @@ final class PercolateQuery extends Query implements Accountable { if (result == docId) { if (twoPhaseIterator.matches()) { if (needsScores) { - QueryStore.Leaf percolatorQueries = queryStore.getQueries(leafReaderContext); - Query query = percolatorQueries.getQuery(docId); + CheckedFunction percolatorQueries = queryStore.getQueries(leafReaderContext); + Query query = percolatorQueries.apply(docId); Explanation detail = percolatorIndexSearcher.explain(query, 0); return Explanation.match(scorer.score(), "PercolateQuery", detail); } else { @@ -120,7 +121,7 @@ final class PercolateQuery extends Query implements Accountable { return null; } - final QueryStore.Leaf queries = queryStore.getQueries(leafReaderContext); + final CheckedFunction queries = queryStore.getQueries(leafReaderContext); if (needsScores) { return new BaseScorer(this, approximation, queries, percolatorIndexSearcher) { @@ -128,7 +129,7 @@ final class PercolateQuery extends Query implements Accountable { @Override boolean matchDocId(int docId) throws IOException { - Query query = percolatorQueries.getQuery(docId); + Query query = percolatorQueries.apply(docId); if (query != null) { TopDocs topDocs = percolatorIndexSearcher.search(query, 1); if (topDocs.totalHits > 0) { @@ -166,7 +167,7 @@ final class PercolateQuery extends Query implements Accountable { if (verifiedDocsBits.get(docId)) { return true; } - Query query = percolatorQueries.getQuery(docId); + Query query = percolatorQueries.apply(docId); return query != null && Lucene.exists(percolatorIndexSearcher, query); } }; @@ -224,26 +225,18 @@ final class PercolateQuery extends Query implements Accountable { } @FunctionalInterface - public interface QueryStore { - - Leaf getQueries(LeafReaderContext ctx) throws IOException; - - @FunctionalInterface - interface Leaf { - - Query getQuery(int docId) throws IOException; - - } - + interface QueryStore { + CheckedFunction getQueries(LeafReaderContext ctx) throws IOException; } abstract static class BaseScorer extends 
Scorer { final Scorer approximation; - final QueryStore.Leaf percolatorQueries; + final CheckedFunction percolatorQueries; final IndexSearcher percolatorIndexSearcher; - BaseScorer(Weight weight, Scorer approximation, QueryStore.Leaf percolatorQueries, IndexSearcher percolatorIndexSearcher) { + BaseScorer(Weight weight, Scorer approximation, CheckedFunction percolatorQueries, + IndexSearcher percolatorIndexSearcher) { super(weight); this.approximation = approximation; this.percolatorQueries = percolatorQueries; diff --git a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java index 2afa2c92ed1..db0745a045e 100644 --- a/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java +++ b/modules/percolator/src/main/java/org/elasticsearch/percolator/PercolatorHighlightSubFetchPhase.java @@ -86,7 +86,7 @@ public final class PercolatorHighlightSubFetchPhase extends HighlightPhase { try { LeafReaderContext ctx = ctxs.get(ReaderUtil.subIndex(hit.docId(), ctxs)); int segmentDocId = hit.docId() - ctx.docBase; - query = queryStore.getQueries(ctx).getQuery(segmentDocId); + query = queryStore.getQueries(ctx).apply(segmentDocId); } catch (IOException e) { throw new RuntimeException(e); } diff --git a/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java b/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java index 1526823369f..65005e957ac 100644 --- a/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java +++ b/modules/percolator/src/test/java/org/elasticsearch/percolator/CandidateQueryTests.java @@ -60,6 +60,7 @@ import org.apache.lucene.search.spans.SpanNotQuery; import org.apache.lucene.search.spans.SpanOrQuery; import org.apache.lucene.search.spans.SpanTermQuery; import org.apache.lucene.store.Directory; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.compress.CompressedXContent; import org.elasticsearch.common.settings.Settings; @@ -384,13 +385,13 @@ public class CandidateQueryTests extends ESSingleNodeTestCase { @Override public Scorer scorer(LeafReaderContext context) throws IOException { DocIdSetIterator allDocs = DocIdSetIterator.all(context.reader().maxDoc()); - PercolateQuery.QueryStore.Leaf leaf = queryStore.getQueries(context); + CheckedFunction leaf = queryStore.getQueries(context); FilteredDocIdSetIterator memoryIndexIterator = new FilteredDocIdSetIterator(allDocs) { @Override protected boolean match(int doc) { try { - Query query = leaf.getQuery(doc); + Query query = leaf.apply(doc); float score = memoryIndex.search(query); if (score != 0f) { if (needsScores) { From ab144c418e3bd50621075b3af66e191a61635d16 Mon Sep 17 00:00:00 2001 From: javanna Date: Thu, 12 Jan 2017 17:58:26 +0100 Subject: [PATCH 09/28] replace ShardSearchRequest.FilterParser functional interface with CheckedFunction --- .../index/query/functionscore/ScoreFunctionParser.java | 3 +-- .../java/org/elasticsearch/indices/IndicesService.java | 4 +++- .../org/elasticsearch/search/internal/AliasFilter.java | 3 ++- .../search/internal/ShardSearchRequest.java | 9 +++------ .../internal/ShardSearchTransportRequestTests.java | 3 ++- 5 files changed, 11 insertions(+), 11 deletions(-) diff --git 
a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java index 3c01c2d92f3..1a2fad90c46 100644 --- a/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionParser.java @@ -19,7 +19,6 @@ package org.elasticsearch.index.query.functionscore; -import org.elasticsearch.common.ParsingException; import org.elasticsearch.index.query.QueryParseContext; import java.io.IOException; @@ -29,5 +28,5 @@ import java.io.IOException; */ @FunctionalInterface public interface ScoreFunctionParser> { - FB fromXContent(QueryParseContext context) throws IOException, ParsingException; + FB fromXContent(QueryParseContext context) throws IOException; } diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesService.java b/core/src/main/java/org/elasticsearch/indices/IndicesService.java index 77a948ecb14..6778b941219 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesService.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesService.java @@ -42,6 +42,7 @@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.routing.RecoverySource; import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.breaker.CircuitBreaker; import org.elasticsearch.common.bytes.BytesArray; @@ -87,6 +88,7 @@ import org.elasticsearch.index.get.GetStats; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.merge.MergeStats; +import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.recovery.RecoveryStats; import org.elasticsearch.index.refresh.RefreshStats; @@ -1251,7 +1253,7 @@ public class IndicesService extends AbstractLifecycleComponent public AliasFilter buildAliasFilter(ClusterState state, String index, String... expressions) { /* Being static, parseAliasFilter doesn't have access to whatever guts it needs to parse a query. Instead of passing in a bunch * of dependencies we pass in a function that can perform the parsing. 
*/ - ShardSearchRequest.FilterParser filterParser = bytes -> { + CheckedFunction filterParser = bytes -> { try (XContentParser parser = XContentFactory.xContent(bytes).createParser(xContentRegistry, bytes)) { return new QueryParseContext(parser).parseInnerQueryBuilder(); } diff --git a/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java b/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java index 8d55dfbab07..ed82cbd1b69 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java +++ b/core/src/main/java/org/elasticsearch/search/internal/AliasFilter.java @@ -21,6 +21,7 @@ package org.elasticsearch.search.internal; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -66,7 +67,7 @@ public final class AliasFilter implements Writeable { final IndexMetaData indexMetaData = context.getIndexSettings().getIndexMetaData(); /* Being static, parseAliasFilter doesn't have access to whatever guts it needs to parse a query. Instead of passing in a bunch * of dependencies we pass in a function that can perform the parsing. */ - ShardSearchRequest.FilterParser filterParser = bytes -> { + CheckedFunction filterParser = bytes -> { try (XContentParser parser = XContentFactory.xContent(bytes).createParser(context.getXContentRegistry(), bytes)) { return context.newParseContext(parser).parseInnerQueryBuilder(); } diff --git a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java index f021d7730cf..dd08cb49353 100644 --- a/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java +++ b/core/src/main/java/org/elasticsearch/search/internal/ShardSearchRequest.java @@ -22,6 +22,7 @@ package org.elasticsearch.search.internal; import org.elasticsearch.action.search.SearchType; import org.elasticsearch.cluster.metadata.AliasMetaData; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.index.Index; @@ -88,17 +89,13 @@ public interface ShardSearchRequest { */ void rewrite(QueryShardContext context) throws IOException; - @FunctionalInterface - public interface FilterParser { - QueryBuilder parse(byte[] bytes) throws IOException; - } /** * Returns the filter associated with listed filtering aliases. *
 * <p>
 * The list of filtering aliases should be obtained by calling MetaData.filteringAliases.
 * Returns null if no filtering is required.
*/ - static QueryBuilder parseAliasFilter(FilterParser filterParser, + static QueryBuilder parseAliasFilter(CheckedFunction filterParser, IndexMetaData metaData, String... aliasNames) { if (aliasNames == null || aliasNames.length == 0) { return null; @@ -110,7 +107,7 @@ public interface ShardSearchRequest { return null; } try { - return filterParser.parse(alias.filter().uncompressed()); + return filterParser.apply(alias.filter().uncompressed()); } catch (IOException ex) { throw new AliasFilterParsingException(index, alias.getAlias(), "Invalid alias filter", ex); } diff --git a/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java b/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java index 728eee4a850..d0132cca7ad 100644 --- a/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java +++ b/core/src/test/java/org/elasticsearch/search/internal/ShardSearchTransportRequestTests.java @@ -23,6 +23,7 @@ import org.elasticsearch.Version; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.cluster.metadata.AliasMetaData; import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.Strings; import org.elasticsearch.common.bytes.BytesArray; @@ -161,7 +162,7 @@ public class ShardSearchTransportRequestTests extends AbstractSearchTestCase { } public QueryBuilder aliasFilter(IndexMetaData indexMetaData, String... aliasNames) { - ShardSearchRequest.FilterParser filterParser = bytes -> { + CheckedFunction filterParser = bytes -> { try (XContentParser parser = XContentFactory.xContent(bytes).createParser(xContentRegistry(), bytes)) { return new QueryParseContext(parser).parseInnerQueryBuilder(); } From 9a910d3c9d467dbef0e2749b2f541f8cb38e3d1a Mon Sep 17 00:00:00 2001 From: javanna Date: Thu, 12 Jan 2017 18:06:28 +0100 Subject: [PATCH 10/28] Make RestChannelConsumer extend CheckedConsumer --- .../java/org/elasticsearch/rest/BaseRestHandler.java | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java b/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java index ac2f82136c7..81620ec8a7c 100644 --- a/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java +++ b/core/src/main/java/org/elasticsearch/rest/BaseRestHandler.java @@ -22,6 +22,7 @@ package org.elasticsearch.rest; import org.apache.lucene.search.spell.LevensteinDistance; import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.CheckedConsumer; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Setting; @@ -125,14 +126,7 @@ public abstract class BaseRestHandler extends AbstractComponent implements RestH * the request against a channel. */ @FunctionalInterface - protected interface RestChannelConsumer { - /** - * Executes a request against the given channel. 
- * - * @param channel the channel for sending the response - * @throws Exception if an exception occurred executing the request - */ - void accept(RestChannel channel) throws Exception; + protected interface RestChannelConsumer extends CheckedConsumer { } /** From 8e3f1dd6898ab009c0689bad44931605db93a263 Mon Sep 17 00:00:00 2001 From: javanna Date: Fri, 13 Jan 2017 16:47:49 +0100 Subject: [PATCH 11/28] Replace custom Functional interface in ElasticsearchException with CheckedFunction --- .../org/elasticsearch/ElasticsearchException.java | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/ElasticsearchException.java b/core/src/main/java/org/elasticsearch/ElasticsearchException.java index bd3ea6797db..ecf6f01f3ff 100644 --- a/core/src/main/java/org/elasticsearch/ElasticsearchException.java +++ b/core/src/main/java/org/elasticsearch/ElasticsearchException.java @@ -21,6 +21,7 @@ package org.elasticsearch; import org.elasticsearch.action.support.replication.ReplicationOperation; import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.common.CheckedFunction; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -82,7 +83,7 @@ public class ElasticsearchException extends RuntimeException implements ToXConte private static final String ERROR = "error"; private static final String ROOT_CAUSE = "root_cause"; - private static final Map> ID_TO_SUPPLIER; + private static final Map> ID_TO_SUPPLIER; private static final Map, ElasticsearchExceptionHandle> CLASS_TO_ELASTICSEARCH_EXCEPTION_HANDLE; private final Map> headers = new HashMap<>(); @@ -223,7 +224,7 @@ public class ElasticsearchException extends RuntimeException implements ToXConte } public static ElasticsearchException readException(StreamInput input, int id) throws IOException { - FunctionThatThrowsIOException elasticsearchException = ID_TO_SUPPLIER.get(id); + CheckedFunction elasticsearchException = ID_TO_SUPPLIER.get(id); if (elasticsearchException == null) { throw new IllegalStateException("unknown exception for id: " + id); } @@ -792,12 +793,12 @@ public class ElasticsearchException extends RuntimeException implements ToXConte org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException::new, 148, Version.V_5_2_0_UNRELEASED); final Class exceptionClass; - final FunctionThatThrowsIOException constructor; + final CheckedFunction constructor; final int id; final Version versionAdded; ElasticsearchExceptionHandle(Class exceptionClass, - FunctionThatThrowsIOException constructor, int id, + CheckedFunction constructor, int id, Version versionAdded) { // We need the exceptionClass because you can't dig it out of the constructor reliably. 
this.exceptionClass = exceptionClass; @@ -892,10 +893,6 @@ public class ElasticsearchException extends RuntimeException implements ToXConte builder.endObject(); } - interface FunctionThatThrowsIOException { - R apply(T t) throws IOException; - } - // lower cases and adds underscores to transitions in a name private static String toUnderscoreCase(String value) { StringBuilder sb = new StringBuilder(); From e6dc74f2bfe0d54470ccdfa3fde88f0c99066aeb Mon Sep 17 00:00:00 2001 From: Jason Tedor Date: Mon, 16 Jan 2017 08:08:52 -0500 Subject: [PATCH 12/28] Add replica ops with version conflict to translog An operation that completed successfully on a primary can result in a version conflict on a replica due to the asynchronous nature of operations. When a replica operation results in a version conflict, the operation is not added to the translog. This leads to gaps in the translog which is problematic as it can lead to situations where a replica shard can never advance its local checkpoint. As such operations are just normal course of business for a replica shard, these operations should be treated as if they completed successfully. This commit adds these operations to the translog. Relates #22626 --- .../action/bulk/TransportShardBulkAction.java | 13 +- .../replication/ReplicationOperation.java | 26 +--- .../index/engine/InternalEngine.java | 38 +++-- .../index/engine/InternalEngineTests.java | 145 ++++++++++++------ 4 files changed, 126 insertions(+), 96 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index bb5714d3c3a..b4c3daee08f 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -21,6 +21,7 @@ package org.elasticsearch.action.bulk; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.delete.DeleteRequest; @@ -29,6 +30,7 @@ import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.replication.ReplicationOperation; +import org.elasticsearch.action.support.TransportActions; import org.elasticsearch.action.support.replication.ReplicationResponse.ShardInfo; import org.elasticsearch.action.support.replication.TransportWriteAction; import org.elasticsearch.action.update.UpdateHelper; @@ -65,9 +67,6 @@ import org.elasticsearch.transport.TransportService; import java.util.Map; -import static org.elasticsearch.action.support.replication.ReplicationOperation.ignoreReplicaException; -import static org.elasticsearch.action.support.replication.ReplicationOperation.isConflictException; - /** Performs shard-level bulk (index, delete or update) operations */ public class TransportShardBulkAction extends TransportWriteAction { @@ -235,6 +234,10 @@ public class TransportShardBulkAction extends TransportWriteAction, ReplicaRequest extends ReplicationRequest, diff --git a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java index a18ca7f280e..362a4c9a48c 100644 --- 
a/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java +++ b/core/src/main/java/org/elasticsearch/index/engine/InternalEngine.java @@ -477,10 +477,7 @@ public class InternalEngine extends Engine { } if (op.versionType().isVersionConflictForWrites(currentVersion, expectedVersion, deleted)) { - if (op.origin().isRecovery()) { - // version conflict, but okay - result = onSuccess.get(); - } else { + if (op.origin() == Operation.Origin.PRIMARY) { // fatal version conflict final VersionConflictEngineException e = new VersionConflictEngineException( @@ -489,8 +486,13 @@ public class InternalEngine extends Engine { op.id(), op.versionType().explainConflictForWrites(currentVersion, expectedVersion, deleted)); result = onFailure.apply(e); + } else { + /* + * Version conflicts during recovery and on replicas are normal due to asynchronous execution; as such, we should return a + * successful result. + */ + result = onSuccess.get(); } - return Optional.of(result); } else { return Optional.empty(); @@ -672,7 +674,7 @@ public class InternalEngine extends Engine { } } final long expectedVersion = index.version(); - final Optional checkVersionConflictResult = + final Optional resultOnVersionConflict = checkVersionConflict( index, currentVersion, @@ -682,15 +684,15 @@ public class InternalEngine extends Engine { e -> new IndexResult(e, currentVersion, index.seqNo())); final IndexResult indexResult; - if (checkVersionConflictResult.isPresent()) { - indexResult = checkVersionConflictResult.get(); + if (resultOnVersionConflict.isPresent()) { + indexResult = resultOnVersionConflict.get(); } else { // no version conflict if (index.origin() == Operation.Origin.PRIMARY) { seqNo = seqNoService().generateSeqNo(); } - /** + /* * Update the document's sequence number and primary term; the sequence number here is derived here from either the sequence * number service if this is on the primary, or the existing document's sequence number if this is on the replica. The * primary term here has already been set, see IndexShard#prepareIndex where the Engine$Index operation is created. @@ -707,10 +709,12 @@ public class InternalEngine extends Engine { update(index.uid(), index.docs(), indexWriter); } indexResult = new IndexResult(updatedVersion, seqNo, deleted); + versionMap.putUnderLock(index.uid().bytes(), new VersionValue(updatedVersion)); + } + if (!indexResult.hasFailure()) { location = index.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY ? 
translog.add(new Translog.Index(index, indexResult)) : null; - versionMap.putUnderLock(index.uid().bytes(), new VersionValue(updatedVersion)); indexResult.setTranslogLocation(location); } indexResult.setTook(System.nanoTime() - index.startTime()); @@ -804,7 +808,7 @@ public class InternalEngine extends Engine { final long expectedVersion = delete.version(); - final Optional result = + final Optional resultOnVersionConflict = checkVersionConflict( delete, currentVersion, @@ -812,10 +816,9 @@ public class InternalEngine extends Engine { deleted, () -> new DeleteResult(expectedVersion, delete.seqNo(), true), e -> new DeleteResult(e, expectedVersion, delete.seqNo())); - final DeleteResult deleteResult; - if (result.isPresent()) { - deleteResult = result.get(); + if (resultOnVersionConflict.isPresent()) { + deleteResult = resultOnVersionConflict.get(); } else { if (delete.origin() == Operation.Origin.PRIMARY) { seqNo = seqNoService().generateSeqNo(); @@ -824,11 +827,14 @@ public class InternalEngine extends Engine { updatedVersion = delete.versionType().updateVersion(currentVersion, expectedVersion); found = deleteIfFound(delete.uid(), currentVersion, deleted, versionValue); deleteResult = new DeleteResult(updatedVersion, seqNo, found); + + versionMap.putUnderLock(delete.uid().bytes(), + new DeleteVersionValue(updatedVersion, engineConfig.getThreadPool().estimatedTimeInMillis())); + } + if (!deleteResult.hasFailure()) { location = delete.origin() != Operation.Origin.LOCAL_TRANSLOG_RECOVERY ? translog.add(new Translog.Delete(delete, deleteResult)) : null; - versionMap.putUnderLock(delete.uid().bytes(), - new DeleteVersionValue(updatedVersion, engineConfig.getThreadPool().estimatedTimeInMillis())); deleteResult.setTranslogLocation(location); } deleteResult.setTook(System.nanoTime() - delete.startTime()); diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index f0ca8292f4f..6f85d65bc91 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -1478,76 +1478,121 @@ public class InternalEngineTests extends ESTestCase { } public void testVersioningReplicaConflict1() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); - Engine.IndexResult indexResult = engine.index(index); - assertThat(indexResult.getVersion(), equalTo(1L)); + final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); + final Engine.Index v1Index = new Engine.Index(newUid("1"), doc); + final Engine.IndexResult v1Result = engine.index(v1Index); + assertThat(v1Result.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc); - indexResult = engine.index(index); - assertThat(indexResult.getVersion(), equalTo(2L)); + final Engine.Index v2Index = new Engine.Index(newUid("1"), doc); + final Engine.IndexResult v2Result = engine.index(v2Index); + assertThat(v2Result.getVersion(), equalTo(2L)); // apply the second index to the replica, should work fine - index = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); - indexResult = replicaEngine.index(index); - assertThat(indexResult.getVersion(), equalTo(2L)); + final Engine.Index 
replicaV2Index = new Engine.Index( + newUid("1"), + doc, + v2Result.getSeqNo(), + v2Index.primaryTerm(), + v2Result.getVersion(), + VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), + REPLICA, + 0, + -1, + false); + final Engine.IndexResult replicaV2Result = replicaEngine.index(replicaV2Index); + assertThat(replicaV2Result.getVersion(), equalTo(2L)); - long seqNo = indexResult.getSeqNo(); - long primaryTerm = index.primaryTerm(); - // now, the old one should not work - index = new Engine.Index(newUid("1"), doc, seqNo, primaryTerm, 1L, VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); - indexResult = replicaEngine.index(index); - assertTrue(indexResult.hasFailure()); - assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); + // now, the old one should produce an indexing result + final Engine.Index replicaV1Index = new Engine.Index( + newUid("1"), + doc, + v1Result.getSeqNo(), + v1Index.primaryTerm(), + v1Result.getVersion(), + VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), + REPLICA, + 0, + -1, + false); + final Engine.IndexResult replicaV1Result = replicaEngine.index(replicaV1Index); + assertFalse(replicaV1Result.hasFailure()); + assertFalse(replicaV1Result.isCreated()); + assertThat(replicaV1Result.getVersion(), equalTo(2L)); // second version on replica should fail as well - index = new Engine.Index(newUid("1"), doc, seqNo, primaryTerm, 2L - , VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); - indexResult = replicaEngine.index(index); - assertThat(indexResult.getVersion(), equalTo(2L)); - assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); + final Engine.IndexResult replicaV2ReplayResult = replicaEngine.index(replicaV2Index); + assertFalse(replicaV2Result.hasFailure()); + assertFalse(replicaV1Result.isCreated()); + assertThat(replicaV2ReplayResult.getVersion(), equalTo(2L)); } public void testVersioningReplicaConflict2() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); - Engine.IndexResult indexResult = engine.index(index); - assertThat(indexResult.getVersion(), equalTo(1L)); + final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); + final Engine.Index v1Index = new Engine.Index(newUid("1"), doc); + final Engine.IndexResult v1Result = engine.index(v1Index); + assertThat(v1Result.getVersion(), equalTo(1L)); // apply the first index to the replica, should work fine - index = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), index.primaryTerm(), 1L, - VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); - indexResult = replicaEngine.index(index); - assertThat(indexResult.getVersion(), equalTo(1L)); + final Engine.Index replicaV1Index = new Engine.Index( + newUid("1"), + doc, + v1Result.getSeqNo(), + v1Index.primaryTerm(), + v1Result.getVersion(), + VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), + REPLICA, + 0, + -1, + false); + Engine.IndexResult replicaV1Result = replicaEngine.index(replicaV1Index); + assertThat(replicaV1Result.getVersion(), equalTo(1L)); // index it again - index = new Engine.Index(newUid("1"), doc); - indexResult = engine.index(index); - assertThat(indexResult.getVersion(), equalTo(2L)); + final Engine.Index v2Index = new Engine.Index(newUid("1"), doc); + final Engine.IndexResult v2Result = 
engine.index(v2Index); + assertThat(v2Result.getVersion(), equalTo(2L)); // now delete it - Engine.Delete delete = new Engine.Delete("test", "1", newUid("1")); - Engine.DeleteResult deleteResult = engine.delete(delete); + final Engine.Delete delete = new Engine.Delete("test", "1", newUid("1")); + final Engine.DeleteResult deleteResult = engine.delete(delete); assertThat(deleteResult.getVersion(), equalTo(3L)); // apply the delete on the replica (skipping the second index) - delete = new Engine.Delete("test", "1", newUid("1"), deleteResult.getSeqNo(), delete.primaryTerm(), 3L - , VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0); - deleteResult = replicaEngine.delete(delete); - assertThat(deleteResult.getVersion(), equalTo(3L)); + final Engine.Delete replicaDelete = new Engine.Delete( + "test", + "1", + newUid("1"), + deleteResult.getSeqNo(), + delete.primaryTerm(), + deleteResult.getVersion(), + VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), + REPLICA, + 0); + final Engine.DeleteResult replicaDeleteResult = replicaEngine.delete(replicaDelete); + assertThat(replicaDeleteResult.getVersion(), equalTo(3L)); - // second time delete with same version should fail - delete = new Engine.Delete("test", "1", newUid("1"), deleteResult.getSeqNo(), delete.primaryTerm(), 3L - , VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0); - deleteResult = replicaEngine.delete(delete); - assertTrue(deleteResult.hasFailure()); - assertThat(deleteResult.getFailure(), instanceOf(VersionConflictEngineException.class)); + // second time delete with same version should just produce the same version + final Engine.DeleteResult deleteReplayResult = replicaEngine.delete(replicaDelete); + assertFalse(deleteReplayResult.hasFailure()); + assertTrue(deleteReplayResult.isFound()); + assertThat(deleteReplayResult.getVersion(), equalTo(3L)); - // now do the second index on the replica, it should fail - index = new Engine.Index(newUid("1"), doc, deleteResult.getSeqNo(), delete.primaryTerm(), 2L, VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); - indexResult = replicaEngine.index(index); - assertTrue(indexResult.hasFailure()); - assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); + // now do the second index on the replica, it should result in the current version + final Engine.Index replicaV2Index = new Engine.Index( + newUid("1"), + doc, + v2Result.getSeqNo(), + v2Index.primaryTerm(), + v2Result.getVersion(), + VersionType.INTERNAL.versionTypeForReplicationAndRecovery(), + REPLICA, + 0, + -1, + false); + final Engine.IndexResult replicaV2Result = replicaEngine.index(replicaV2Index); + assertFalse(replicaV2Result.hasFailure()); + assertFalse(replicaV2Result.isCreated()); + assertThat(replicaV2Result.getVersion(), equalTo(3L)); } public void testBasicCreatedFlag() { From 59a48ffc41c8e93c70fd36b822d52d5c10cdd399 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20B=C3=BCscher?= Date: Mon, 16 Jan 2017 14:27:55 +0100 Subject: [PATCH 13/28] ProfileResult and CollectorResult should print machine readable timing information (#22561) Currently both ProfileResult and CollectorResult print the time field in a human readable string format (e.g. "time": "55.20315000ms"). When trying to parse this back to a long value, for example to use in the planned high level java rest client, we can lose precision because of conversion and rounding issues. 
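As a quick illustration of that loss (a hypothetical snippet, not part of this change;
the class name and sample value are made up), rendering nanoseconds with the old
`%.10gms` format and parsing the string back does not always recover the original value:

    import java.util.Locale;

    public class TimeRoundTrip {
        public static void main(String[] args) {
            long originalNanos = 55203149L; // sample value a profiler might record
            // the old rendering used by ProfileResult/CollectorResult
            String rendered = String.format(Locale.US, "%.10gms", originalNanos / 1000000.0);
            // a client parsing the human readable string back to nanoseconds
            double millis = Double.parseDouble(rendered.substring(0, rendered.length() - 2));
            long roundTripped = (long) (millis * 1_000_000);
            // roundTripped can be off by a nanosecond because of double rounding
            System.out.println(rendered + " -> " + roundTripped + " ns (was " + originalNanos + " ns)");
        }
    }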
This change adds a new additional field (`time_in_nanos`) to the profile response
to be able to get the original time value in nanoseconds back. The old `time` field
is only printed when the `?human=true` flag in the url is set. This follows the
behaviour for all other stats-related apis. Also the format of the `time` field is
slightly changed. Instead of always formatting the output as a 10-digit ms value,
we now use the `XContentBuilder#timeValueField()` method to print the value using
the largest time unit present (e.g. "s", "ms", "micros").
---
 .../search/profile/ProfileResult.java         |   5 +-
 .../search/profile/query/CollectorResult.java |   5 +-
 .../search/profile/ProfileResultTests.java    | 121 ++++++++++++++
 .../profile/query/CollectorResultTests.java   | 102 +++++++++++++++
 .../migration/migrate_6_0/search.asciidoc     |   6 +
 docs/reference/search/profile.asciidoc        |  78 +++++------
 6 files changed, 277 insertions(+), 40 deletions(-)
 create mode 100644 core/src/test/java/org/elasticsearch/search/profile/ProfileResultTests.java
 create mode 100644 core/src/test/java/org/elasticsearch/search/profile/query/CollectorResultTests.java

diff --git a/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java b/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java
index 4c745a5127d..d2198cbb35b 100644
--- a/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java
+++ b/core/src/main/java/org/elasticsearch/search/profile/ProfileResult.java
@@ -31,8 +31,8 @@ import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
-import java.util.Locale;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;

 /**
  * This class is the internal representation of a profiled Query, corresponding
@@ -48,6 +48,7 @@ public final class ProfileResult implements Writeable, ToXContent {
     private static final ParseField TYPE = new ParseField("type");
     private static final ParseField DESCRIPTION = new ParseField("description");
     private static final ParseField NODE_TIME = new ParseField("time");
+    private static final ParseField NODE_TIME_RAW = new ParseField("time_in_nanos");
     private static final ParseField CHILDREN = new ParseField("children");
     private static final ParseField BREAKDOWN = new ParseField("breakdown");

@@ -146,7 +147,7 @@ public final class ProfileResult implements Writeable, ToXContent {
         builder = builder.startObject()
             .field(TYPE.getPreferredName(), type)
             .field(DESCRIPTION.getPreferredName(), description)
-            .field(NODE_TIME.getPreferredName(), String.format(Locale.US, "%.10gms", getTime() / 1000000.0))
+            .timeValueField(NODE_TIME_RAW.getPreferredName(), NODE_TIME.getPreferredName(), getTime(), TimeUnit.NANOSECONDS)
             .field(BREAKDOWN.getPreferredName(), timings);

         if (!children.isEmpty()) {
diff --git a/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java b/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java
index 6b4d7c0e842..941037b6ff6 100644
--- a/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java
+++ b/core/src/main/java/org/elasticsearch/search/profile/query/CollectorResult.java
@@ -29,7 +29,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
-import java.util.Locale;
+import java.util.concurrent.TimeUnit;

 /**
  * Public interface and serialization container for profiled timings of the
@@ -52,6 +52,7 @@ public class CollectorResult implements ToXContent,
Writeable { private static final ParseField NAME = new ParseField("name"); private static final ParseField REASON = new ParseField("reason"); private static final ParseField TIME = new ParseField("time"); + private static final ParseField TIME_NANOS = new ParseField("time_in_nanos"); private static final ParseField CHILDREN = new ParseField("children"); /** @@ -140,7 +141,7 @@ public class CollectorResult implements ToXContent, Writeable { builder = builder.startObject() .field(NAME.getPreferredName(), getName()) .field(REASON.getPreferredName(), getReason()) - .field(TIME.getPreferredName(), String.format(Locale.US, "%.10gms", (double) (getTime() / 1000000.0))); + .timeValueField(TIME_NANOS.getPreferredName(), TIME.getPreferredName(), getTime(), TimeUnit.NANOSECONDS); if (!children.isEmpty()) { builder = builder.startArray(CHILDREN.getPreferredName()); diff --git a/core/src/test/java/org/elasticsearch/search/profile/ProfileResultTests.java b/core/src/test/java/org/elasticsearch/search/profile/ProfileResultTests.java new file mode 100644 index 00000000000..17676b42447 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/profile/ProfileResultTests.java @@ -0,0 +1,121 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.profile; + +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class ProfileResultTests extends ESTestCase { + + public void testToXContent() throws IOException { + List children = new ArrayList<>(); + children.add(new ProfileResult("child1", "desc1", Collections.emptyMap(), Collections.emptyList(), 100L)); + children.add(new ProfileResult("child2", "desc2", Collections.emptyMap(), Collections.emptyList(), 123356L)); + Map timings = new HashMap<>(); + timings.put("key1", 12345L); + timings.put("key2", 6789L); + ProfileResult result = new ProfileResult("someType", "some description", timings, children, 123456L); + XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"type\" : \"someType\",\n" + + " \"description\" : \"some description\",\n" + + " \"time_in_nanos\" : 123456,\n" + + " \"breakdown\" : {\n" + + " \"key1\" : 12345,\n" + + " \"key2\" : 6789\n" + + " },\n" + + " \"children\" : [\n" + + " {\n" + + " \"type\" : \"child1\",\n" + + " \"description\" : \"desc1\",\n" + + " \"time_in_nanos\" : 100,\n" + + " \"breakdown\" : { }\n" + + " },\n" + + " {\n" + + " \"type\" : \"child2\",\n" + + " \"description\" : \"desc2\",\n" + + " \"time_in_nanos\" : 123356,\n" + + " \"breakdown\" : { }\n" + + " }\n" + + " ]\n" + + "}", builder.string()); + + builder = XContentFactory.jsonBuilder().prettyPrint().humanReadable(true); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"type\" : \"someType\",\n" + + " \"description\" : \"some description\",\n" + + " \"time\" : \"123.4micros\",\n" + + " \"time_in_nanos\" : 123456,\n" + + " \"breakdown\" : {\n" + + " \"key1\" : 12345,\n" + + " \"key2\" : 6789\n" + + " },\n" + + " \"children\" : [\n" + + " {\n" + + " \"type\" : \"child1\",\n" + + " \"description\" : \"desc1\",\n" + + " \"time\" : \"100nanos\",\n" + + " \"time_in_nanos\" : 100,\n" + + " \"breakdown\" : { }\n" + + " },\n" + + " {\n" + + " \"type\" : \"child2\",\n" + + " \"description\" : \"desc2\",\n" + + " \"time\" : \"123.3micros\",\n" + + " \"time_in_nanos\" : 123356,\n" + + " \"breakdown\" : { }\n" + + " }\n" + + " ]\n" + + "}", builder.string()); + + result = new ProfileResult("profileName", "some description", Collections.emptyMap(), Collections.emptyList(), 12345678L); + builder = XContentFactory.jsonBuilder().prettyPrint().humanReadable(true); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"type\" : \"profileName\",\n" + + " \"description\" : \"some description\",\n" + + " \"time\" : \"12.3ms\",\n" + + " \"time_in_nanos\" : 12345678,\n" + + " \"breakdown\" : { }\n" + + "}", builder.string()); + + result = new ProfileResult("profileName", "some description", Collections.emptyMap(), Collections.emptyList(), 1234567890L); + builder = XContentFactory.jsonBuilder().prettyPrint().humanReadable(true); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"type\" : \"profileName\",\n" + + " \"description\" : \"some description\",\n" + + " \"time\" : \"1.2s\",\n" + + " \"time_in_nanos\" : 1234567890,\n" + + " \"breakdown\" : { }\n" + 
+ "}", builder.string()); + } +} diff --git a/core/src/test/java/org/elasticsearch/search/profile/query/CollectorResultTests.java b/core/src/test/java/org/elasticsearch/search/profile/query/CollectorResultTests.java new file mode 100644 index 00000000000..a1539c310e3 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/profile/query/CollectorResultTests.java @@ -0,0 +1,102 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.search.profile.query; + +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +public class CollectorResultTests extends ESTestCase { + + public void testToXContent() throws IOException { + List children = new ArrayList<>(); + children.add(new CollectorResult("child1", "reason1", 100L, Collections.emptyList())); + children.add(new CollectorResult("child2", "reason1", 123356L, Collections.emptyList())); + CollectorResult result = new CollectorResult("collectorName", "some reason", 123456L, children); + XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"name\" : \"collectorName\",\n" + + " \"reason\" : \"some reason\",\n" + + " \"time_in_nanos\" : 123456,\n" + + " \"children\" : [\n" + + " {\n" + + " \"name\" : \"child1\",\n" + + " \"reason\" : \"reason1\",\n" + + " \"time_in_nanos\" : 100\n" + + " },\n" + + " {\n" + + " \"name\" : \"child2\",\n" + + " \"reason\" : \"reason1\",\n" + + " \"time_in_nanos\" : 123356\n" + + " }\n" + + " ]\n" + + "}", builder.string()); + + builder = XContentFactory.jsonBuilder().prettyPrint().humanReadable(true); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"name\" : \"collectorName\",\n" + + " \"reason\" : \"some reason\",\n" + + " \"time\" : \"123.4micros\",\n" + + " \"time_in_nanos\" : 123456,\n" + + " \"children\" : [\n" + + " {\n" + + " \"name\" : \"child1\",\n" + + " \"reason\" : \"reason1\",\n" + + " \"time\" : \"100nanos\",\n" + + " \"time_in_nanos\" : 100\n" + + " },\n" + + " {\n" + + " \"name\" : \"child2\",\n" + + " \"reason\" : \"reason1\",\n" + + " \"time\" : \"123.3micros\",\n" + + " \"time_in_nanos\" : 123356\n" + + " }\n" + + " ]\n" + + "}", builder.string()); + + result = new CollectorResult("collectorName", "some reason", 12345678L, Collections.emptyList()); + builder = XContentFactory.jsonBuilder().prettyPrint().humanReadable(true); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"name\" : 
\"collectorName\",\n" + + " \"reason\" : \"some reason\",\n" + + " \"time\" : \"12.3ms\",\n" + + " \"time_in_nanos\" : 12345678\n" + + "}", builder.string()); + + result = new CollectorResult("collectorName", "some reason", 1234567890L, Collections.emptyList()); + builder = XContentFactory.jsonBuilder().prettyPrint().humanReadable(true); + result.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals("{\n" + + " \"name\" : \"collectorName\",\n" + + " \"reason\" : \"some reason\",\n" + + " \"time\" : \"1.2s\",\n" + + " \"time_in_nanos\" : 1234567890\n" + + "}", builder.string()); + } +} diff --git a/docs/reference/migration/migrate_6_0/search.asciidoc b/docs/reference/migration/migrate_6_0/search.asciidoc index bae77b67372..6228a9dbff3 100644 --- a/docs/reference/migration/migrate_6_0/search.asciidoc +++ b/docs/reference/migration/migrate_6_0/search.asciidoc @@ -33,3 +33,9 @@ The search shards API no longer accepts the `type` url parameter, which didn't have any effect in previous versions. + +==== Changes to the Profile API + +* The `"time"` field showing human readable timing output has been replaced by the `"time_in_nanos"` + field which displays the elapsed time in nanoseconds. The `"time"` field can be turned on by adding + `"?human=true"` to the request url. It will display a rounded, human readable time value. diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc index 52b744d30e9..ac0632a3288 100644 --- a/docs/reference/search/profile.asciidoc +++ b/docs/reference/search/profile.asciidoc @@ -58,7 +58,7 @@ This will yield the following result: { "type": "BooleanQuery", "description": "message:message message:number", - "time": "1.873811000ms", + "time_in_nanos": "1873811", "breakdown": { "score": 51306, "score_count": 4, @@ -77,7 +77,7 @@ This will yield the following result: { "type": "TermQuery", "description": "message:message", - "time": "0.3919430000ms", + "time_in_nanos": "391943", "breakdown": { "score": 28776, "score_count": 4, @@ -96,7 +96,7 @@ This will yield the following result: { "type": "TermQuery", "description": "message:number", - "time": "0.2106820000ms", + "time_in_nanos": "210682", "breakdown": { "score": 4552, "score_count": 4, @@ -120,12 +120,12 @@ This will yield the following result: { "name": "CancellableCollector", "reason": "search_cancelled", - "time": "0.3043110000ms", + "time_in_nanos": "304311", "children": [ { "name": "SimpleTopScoreDocCollector", "reason": "search_top_hits", - "time": "0.03227300000ms" + "time_in_nanos": "32273" } ] } @@ -143,22 +143,22 @@ This will yield the following result: // TESTRESPONSE[s/"id": "\[2aE02wS1R8q_QFnYu6vDVQ\]\[twitter\]\[1\]"/"id": $body.profile.shards.0.id/] // TESTRESPONSE[s/"rewrite_time": 51443/"rewrite_time": $body.profile.shards.0.searches.0.rewrite_time/] // TESTRESPONSE[s/"score": 51306/"score": $body.profile.shards.0.searches.0.query.0.breakdown.score/] -// TESTRESPONSE[s/"time": "1.873811000ms"/"time": $body.profile.shards.0.searches.0.query.0.time/] +// TESTRESPONSE[s/"time_in_nanos": "1873811"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.time_in_nanos/] // TESTRESPONSE[s/"build_scorer": 2935582/"build_scorer": $body.profile.shards.0.searches.0.query.0.breakdown.build_scorer/] // TESTRESPONSE[s/"create_weight": 919297/"create_weight": $body.profile.shards.0.searches.0.query.0.breakdown.create_weight/] // TESTRESPONSE[s/"next_doc": 53876/"next_doc": $body.profile.shards.0.searches.0.query.0.breakdown.next_doc/] -// TESTRESPONSE[s/"time": 
"0.3919430000ms"/"time": $body.profile.shards.0.searches.0.query.0.children.0.time/] +// TESTRESPONSE[s/"time_in_nanos": "391943"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.children.0.time_in_nanos/] // TESTRESPONSE[s/"score": 28776/"score": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.score/] // TESTRESPONSE[s/"build_scorer": 784451/"build_scorer": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.build_scorer/] // TESTRESPONSE[s/"create_weight": 1669564/"create_weight": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.create_weight/] // TESTRESPONSE[s/"next_doc": 10111/"next_doc": $body.profile.shards.0.searches.0.query.0.children.0.breakdown.next_doc/] -// TESTRESPONSE[s/"time": "0.2106820000ms"/"time": $body.profile.shards.0.searches.0.query.0.children.1.time/] +// TESTRESPONSE[s/"time_in_nanos": "210682"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.children.1.time_in_nanos/] // TESTRESPONSE[s/"score": 4552/"score": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.score/] // TESTRESPONSE[s/"build_scorer": 42602/"build_scorer": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.build_scorer/] // TESTRESPONSE[s/"create_weight": 89323/"create_weight": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.create_weight/] // TESTRESPONSE[s/"next_doc": 2852/"next_doc": $body.profile.shards.0.searches.0.query.0.children.1.breakdown.next_doc/] -// TESTRESPONSE[s/"time": "0.3043110000ms"/"time": $body.profile.shards.0.searches.0.collector.0.time/] -// TESTRESPONSE[s/"time": "0.03227300000ms"/"time": $body.profile.shards.0.searches.0.collector.0.children.0.time/] +// TESTRESPONSE[s/"time_in_nanos": "304311"/"time_in_nanos": $body.profile.shards.0.searches.0.collector.0.time_in_nanos/] +// TESTRESPONSE[s/"time_in_nanos": "32273"/"time_in_nanos": $body.profile.shards.0.searches.0.collector.0.children.0.time_in_nanos/] // Sorry for this mess.... <1> Search results are returned, but were omitted here for brevity @@ -216,6 +216,12 @@ a `query` array and a `collector` array. Alongside the `search` object is an `a There will also be a `rewrite` metric showing the total time spent rewriting the query (in nanoseconds). +==== Human readable output + +As with other statistics apis, the Profile API supports human readable outputs for the time values. This can be turned on by adding +`?human=true` to the query string. In this case, in addition to the `"time_in_nanos"` field, the output contains the additional `"time"` +field with containing a rounded, human readable time value , (e.g. `"time": "391,9ms"`, `"time": "123.3micros"`). 
+ === Profiling Queries [NOTE] @@ -244,19 +250,19 @@ The overall structure of this query tree will resemble your original Elasticsear { "type": "BooleanQuery", "description": "message:message message:number", - "time": "1.873811000ms", + "time_in_nanos": "1873811", "breakdown": {...}, <1> "children": [ { "type": "TermQuery", "description": "message:message", - "time": "0.3919430000ms", + "time_in_nanos": "391943", "breakdown": {...} }, { "type": "TermQuery", "description": "message:number", - "time": "0.2106820000ms", + "time_in_nanos": "210682", "breakdown": {...} } ] @@ -265,9 +271,9 @@ The overall structure of this query tree will resemble your original Elasticsear -------------------------------------------------- // TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.profile.shards.0.id",\n"searches": [{\n/] // TESTRESPONSE[s/]$/],"rewrite_time": $body.profile.shards.0.searches.0.rewrite_time, "collector": $body.profile.shards.0.searches.0.collector}], "aggregations": []}]}}/] -// TESTRESPONSE[s/"time": "1.873811000ms",\n.+"breakdown": \{...\}/"time": $body.profile.shards.0.searches.0.query.0.time, "breakdown": $body.profile.shards.0.searches.0.query.0.breakdown/] -// TESTRESPONSE[s/"time": "0.3919430000ms",\n.+"breakdown": \{...\}/"time": $body.profile.shards.0.searches.0.query.0.children.0.time, "breakdown": $body.profile.shards.0.searches.0.query.0.children.0.breakdown/] -// TESTRESPONSE[s/"time": "0.2106820000ms",\n.+"breakdown": \{...\}/"time": $body.profile.shards.0.searches.0.query.0.children.1.time, "breakdown": $body.profile.shards.0.searches.0.query.0.children.1.breakdown/] +// TESTRESPONSE[s/"time_in_nanos": "1873811",\n.+"breakdown": \{...\}/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.time_in_nanos, "breakdown": $body.profile.shards.0.searches.0.query.0.breakdown/] +// TESTRESPONSE[s/"time_in_nanos": "391943",\n.+"breakdown": \{...\}/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.children.0.time_in_nanos, "breakdown": $body.profile.shards.0.searches.0.query.0.children.0.breakdown/] +// TESTRESPONSE[s/"time_in_nanos": "210682",\n.+"breakdown": \{...\}/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.children.1.time_in_nanos, "breakdown": $body.profile.shards.0.searches.0.query.0.children.1.breakdown/] <1> The breakdown timings are omitted for simplicity Based on the profile structure, we can see that our `match` query was rewritten by Lucene into a BooleanQuery with two @@ -276,7 +282,7 @@ the equivalent name in Elasticsearch. The `"description"` field displays the Lu is made available to help differentiating between parts of your query (e.g. both `"message:search"` and `"message:test"` are TermQuery's and would appear identical otherwise. -The `"time"` field shows that this query took ~15ms for the entire BooleanQuery to execute. The recorded time is inclusive +The `"time_in_nanos"` field shows that this query took ~1.8ms for the entire BooleanQuery to execute. The recorded time is inclusive of all children. 
The `"breakdown"` field will give detailed stats about how the time was spent, we'll look at @@ -305,16 +311,16 @@ The `"breakdown"` component lists detailed timing statistics about low-level Luc "advance_count": 0 } -------------------------------------------------- -// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.profile.shards.0.id",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:message message:number",\n"time": $body.profile.shards.0.searches.0.query.0.time,/] +// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.profile.shards.0.id",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:message message:number",\n"time_in_nanos": $body.profile.shards.0.searches.0.query.0.time_in_nanos,/] // TESTRESPONSE[s/}$/},\n"children": $body.profile.shards.0.searches.0.query.0.children}],\n"rewrite_time": $body.profile.shards.0.searches.0.rewrite_time, "collector": $body.profile.shards.0.searches.0.collector}], "aggregations": []}]}}/] // TESTRESPONSE[s/"score": 51306/"score": $body.profile.shards.0.searches.0.query.0.breakdown.score/] -// TESTRESPONSE[s/"time": "1.873811000ms"/"time": $body.profile.shards.0.searches.0.query.0.time/] +// TESTRESPONSE[s/"time_in_nanos": "1873811"/"time_in_nanos": $body.profile.shards.0.searches.0.query.0.time_in_nanos/] // TESTRESPONSE[s/"build_scorer": 2935582/"build_scorer": $body.profile.shards.0.searches.0.query.0.breakdown.build_scorer/] // TESTRESPONSE[s/"create_weight": 919297/"create_weight": $body.profile.shards.0.searches.0.query.0.breakdown.create_weight/] // TESTRESPONSE[s/"next_doc": 53876/"next_doc": $body.profile.shards.0.searches.0.query.0.breakdown.next_doc/] Timings are listed in wall-clock nanoseconds and are not normalized at all. All caveats about the overall -`"time"` apply here. The intention of the breakdown is to give you a feel for A) what machinery in Lucene is +`"time_in_nanos"` apply here. The intention of the breakdown is to give you a feel for A) what machinery in Lucene is actually eating time, and B) the magnitude of differences in times between the various components. Like the overall time, the breakdown is inclusive of all children times. 
@@ -401,12 +407,12 @@ Looking at the previous example: { "name": "CancellableCollector", "reason": "search_cancelled", - "time": "0.3043110000ms", + "time_in_nanos": "304311", "children": [ { "name": "SimpleTopScoreDocCollector", "reason": "search_top_hits", - "time": "0.03227300000ms" + "time_in_nanos": "32273" } ] } @@ -414,12 +420,12 @@ Looking at the previous example: -------------------------------------------------- // TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.profile.shards.0.id",\n"searches": [{\n"query": $body.profile.shards.0.searches.0.query,\n"rewrite_time": $body.profile.shards.0.searches.0.rewrite_time,/] // TESTRESPONSE[s/]$/]}], "aggregations": []}]}}/] -// TESTRESPONSE[s/"time": "0.3043110000ms"/"time": $body.profile.shards.0.searches.0.collector.0.time/] -// TESTRESPONSE[s/"time": "0.03227300000ms"/"time": $body.profile.shards.0.searches.0.collector.0.children.0.time/] +// TESTRESPONSE[s/"time_in_nanos": "304311"/"time_in_nanos": $body.profile.shards.0.searches.0.collector.0.time_in_nanos/] +// TESTRESPONSE[s/"time_in_nanos": "32273"/"time_in_nanos": $body.profile.shards.0.searches.0.collector.0.children.0.time_in_nanos/] We see a single collector named `SimpleTopScoreDocCollector` wrapped into `CancellableCollector`. `SimpleTopScoreDocCollector` is the default "scoring and sorting" `Collector` used by Elasticsearch. The `"reason"` field attempts to give a plain english description of the class name. The -`"time"` is similar to the time in the Query tree: a wall-clock time inclusive of all children. Similarly, `children` lists +`"time_in_nanos"` is similar to the time in the Query tree: a wall-clock time inclusive of all children. Similarly, `children` lists all sub-collectors. The `CancellableCollector` that wraps `SimpleTopScoreDocCollector` is used by elasticsearch to detect if the current search was cancelled and stop collecting documents as soon as it occurs. 
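Since collectors nest arbitrarily deep via `children`, a recursive walk is the natural
way to inspect the tree. A small illustrative helper in the same Jackson style as the
sketch above (method only, not actual Elasticsearch code):

    // print each collector's name, reason and time, indented by nesting level
    static void printCollectorTree(JsonNode collector, String indent) {
        System.out.println(indent + collector.path("name").asText()
            + " (" + collector.path("reason").asText() + "): "
            + collector.path("time_in_nanos").asLong() + " ns");
        for (JsonNode child : collector.path("children")) {
            printCollectorTree(child, indent + "  ");
        }
    }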
@@ -564,7 +570,7 @@ And the response: { "type": "TermQuery", "description": "my_field:foo", - "time": "0.4094560000ms", + "time_in_nanos": "409456", "breakdown": { "score": 0, "score_count": 1, @@ -583,7 +589,7 @@ And the response: { "type": "TermQuery", "description": "message:search", - "time": "0.3037020000ms", + "time_in_nanos": "303702", "breakdown": { "score": 0, "score_count": 1, @@ -605,24 +611,24 @@ And the response: { "name": "MultiCollector", "reason": "search_multi", - "time": "1.378943000ms", + "time_in_nanos": "1378943", "children": [ { "name": "FilteredCollector", "reason": "search_post_filter", - "time": "0.4036590000ms", + "time_in_nanos": "403659", "children": [ { "name": "SimpleTopScoreDocCollector", "reason": "search_top_hits", - "time": "0.006391000000ms" + "time_in_nanos": "6391" } ] }, { "name": "BucketCollector: [[non_global_term, another_agg]]", "reason": "aggregation", - "time": "0.9546020000ms" + "time_in_nanos": "954602" } ] } @@ -633,7 +639,7 @@ And the response: { "type": "MatchAllDocsQuery", "description": "*:*", - "time": "0.04829300000ms", + "time_in_nanos": "48293", "breakdown": { "score": 0, "score_count": 1, @@ -655,7 +661,7 @@ And the response: { "name": "GlobalAggregator: [global_agg]", "reason": "aggregation_global", - "time": "0.1226310000ms" + "time_in_nanos": "122631" } ] } @@ -738,7 +744,7 @@ Which yields the following aggregation profile output { "type": "org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator", "description": "property_type", - "time": "4280.456978ms", + "time_in_nanos": "4280456978", "breakdown": { "reduce": 0, "reduce_count": 0, @@ -753,7 +759,7 @@ Which yields the following aggregation profile output { "type": "org.elasticsearch.search.aggregations.metrics.avg.AvgAggregator", "description": "avg_price", - "time": "1124.864392ms", + "time_in_nanos": "1124864392", "breakdown": { "reduce": 0, "reduce_count": 0, @@ -773,7 +779,7 @@ Which yields the following aggregation profile output From the profile structure we can see our `property_type` terms aggregation which is internally represented by the `GlobalOrdinalsStringTermsAggregator` class and the sub aggregator `avg_price` which is internally represented by the `AvgAggregator` class. The `type` field displays the class used internally to represent the aggregation. The `description` field displays the name of the aggregation. -The `"time"` field shows that it took ~4 seconds for the entire aggregation to execute. The recorded time is inclusive +The `"time_in_nanos"` field shows that it took ~4 seconds for the entire aggregation to execute. The recorded time is inclusive of all children. The `"breakdown"` field will give detailed stats about how the time was spent, we'll look at From 49a49da3f52520e9a35310109f74de2d10ff488d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20B=C3=BCscher?= Date: Mon, 16 Jan 2017 14:53:06 +0100 Subject: [PATCH 14/28] [Docs] Fix section title in profile.asciidoc --- docs/reference/search/profile.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc index ac0632a3288..7a0296d4d1f 100644 --- a/docs/reference/search/profile.asciidoc +++ b/docs/reference/search/profile.asciidoc @@ -216,7 +216,7 @@ a `query` array and a `collector` array. Alongside the `search` object is an `a There will also be a `rewrite` metric showing the total time spent rewriting the query (in nanoseconds). 
-==== Human readable output +=== Human readable output As with other statistics apis, the Profile API supports human readable outputs for the time values. This can be turned on by adding `?human=true` to the query string. In this case, in addition to the `"time_in_nanos"` field, the output contains the additional `"time"` From 0da190234c87838df5d37f2375e901351e05e03d Mon Sep 17 00:00:00 2001 From: Boaz Leskes Date: Mon, 16 Jan 2017 15:40:05 +0100 Subject: [PATCH 15/28] Add a deprecation notice to shadow replicas (#22025) Also adds deprecation logging. See #22024 --- .../elasticsearch/cluster/metadata/IndexMetaData.java | 11 +++++++---- .../main/java/org/elasticsearch/env/Environment.java | 3 ++- .../java/org/elasticsearch/env/NodeEnvironment.java | 3 +-- docs/reference/indices/shadow-replicas.asciidoc | 2 +- docs/reference/migration/migrate_6_0/indices.asciidoc | 5 +++++ 5 files changed, 16 insertions(+), 8 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index 8c2dc3d47ed..dc7849812f1 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -190,11 +190,11 @@ public class IndexMetaData implements Diffable, ToXContent { Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, Property.Dynamic, Property.IndexScope); public static final String SETTING_SHADOW_REPLICAS = "index.shadow_replicas"; public static final Setting INDEX_SHADOW_REPLICAS_SETTING = - Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope); + Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope, Property.Deprecated); public static final String SETTING_SHARED_FILESYSTEM = "index.shared_filesystem"; public static final Setting INDEX_SHARED_FILESYSTEM_SETTING = - Setting.boolSetting(SETTING_SHARED_FILESYSTEM, false, Property.IndexScope); + Setting.boolSetting(SETTING_SHARED_FILESYSTEM, INDEX_SHADOW_REPLICAS_SETTING, Property.IndexScope, Property.Deprecated); public static final String SETTING_AUTO_EXPAND_REPLICAS = "index.auto_expand_replicas"; public static final Setting INDEX_AUTO_EXPAND_REPLICAS_SETTING = AutoExpandReplicas.SETTING; @@ -232,10 +232,11 @@ public class IndexMetaData implements Diffable, ToXContent { public static final String SETTING_INDEX_UUID = "index.uuid"; public static final String SETTING_DATA_PATH = "index.data_path"; public static final Setting INDEX_DATA_PATH_SETTING = - new Setting<>(SETTING_DATA_PATH, "", Function.identity(), Property.IndexScope); + new Setting<>(SETTING_DATA_PATH, "", Function.identity(), Property.IndexScope, Property.Deprecated); public static final String SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE = "index.shared_filesystem.recover_on_any_node"; public static final Setting INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING = - Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, Property.Dynamic, Property.IndexScope); + Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, + Property.Dynamic, Property.IndexScope, Property.Deprecated); public static final String INDEX_UUID_NA_VALUE = "_na_"; public static final String INDEX_ROUTING_REQUIRE_GROUP_PREFIX = "index.routing.allocation.require"; @@ -1217,6 +1218,7 @@ public class IndexMetaData implements Diffable, ToXContent { * {@link #isIndexUsingShadowReplicas(org.elasticsearch.common.settings.Settings)}. 
*/ public static boolean isOnSharedFilesystem(Settings settings) { + // don't use the settings directly, not to trigger manny deprecation return settings.getAsBoolean(SETTING_SHARED_FILESYSTEM, isIndexUsingShadowReplicas(settings)); } @@ -1226,6 +1228,7 @@ public class IndexMetaData implements Diffable, ToXContent { * setting for this is false. */ public static boolean isIndexUsingShadowReplicas(Settings settings) { + // don't use the settings directly, not to trigger manny deprecation return settings.getAsBoolean(SETTING_SHADOW_REPLICAS, false); } diff --git a/core/src/main/java/org/elasticsearch/env/Environment.java b/core/src/main/java/org/elasticsearch/env/Environment.java index 4b544aa3882..9c7026f2e9e 100644 --- a/core/src/main/java/org/elasticsearch/env/Environment.java +++ b/core/src/main/java/org/elasticsearch/env/Environment.java @@ -56,7 +56,8 @@ public class Environment { public static final Setting PATH_LOGS_SETTING = Setting.simpleString("path.logs", Property.NodeScope); public static final Setting> PATH_REPO_SETTING = Setting.listSetting("path.repo", Collections.emptyList(), Function.identity(), Property.NodeScope); - public static final Setting PATH_SHARED_DATA_SETTING = Setting.simpleString("path.shared_data", Property.NodeScope); + public static final Setting PATH_SHARED_DATA_SETTING = Setting.simpleString("path.shared_data", + Property.NodeScope, Property.Deprecated); public static final Setting PIDFILE_SETTING = Setting.simpleString("pidfile", Property.NodeScope); private final Settings settings; diff --git a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java index f1cdb5ae575..9bd6a5eb364 100644 --- a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java +++ b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java @@ -38,7 +38,6 @@ import org.elasticsearch.common.Randomness; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.io.FileSystemUtils; -import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -162,7 +161,7 @@ public final class NodeEnvironment implements Closeable { * If true automatically append node lock id to custom data paths. 
*/ public static final Setting ADD_NODE_LOCK_ID_TO_CUSTOM_PATH = - Setting.boolSetting("node.add_lock_id_to_custom_path", true, Property.NodeScope); + Setting.boolSetting("node.add_lock_id_to_custom_path", true, Property.NodeScope, Property.Deprecated); /** diff --git a/docs/reference/indices/shadow-replicas.asciidoc b/docs/reference/indices/shadow-replicas.asciidoc index 3a0b23852b0..625165d5bdd 100644 --- a/docs/reference/indices/shadow-replicas.asciidoc +++ b/docs/reference/indices/shadow-replicas.asciidoc @@ -1,7 +1,7 @@ [[indices-shadow-replicas]] == Shadow replica indices -experimental[] +deprecated[5.2.0, Shadow replicas don't see much usage and we are planning to remove them] If you would like to use a shared filesystem, you can use the shadow replicas settings to choose where on disk the data for an index should be kept, as well diff --git a/docs/reference/migration/migrate_6_0/indices.asciidoc b/docs/reference/migration/migrate_6_0/indices.asciidoc index be726ce155a..7062ac7cb1e 100644 --- a/docs/reference/migration/migrate_6_0/indices.asciidoc +++ b/docs/reference/migration/migrate_6_0/indices.asciidoc @@ -27,3 +27,8 @@ PUT _template/template_2 } -------------------------------------------------- // CONSOLE + + +=== Shadow Replicas are deprecated + +<> don't see much usage and we are planning to remove them. From e976aa09bbd98b631cb12b86e9dc787d45fdb969 Mon Sep 17 00:00:00 2001 From: Boaz Leskes Date: Mon, 16 Jan 2017 16:11:59 +0100 Subject: [PATCH 16/28] Don'y use `INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING` directly as it triggers (many) deprecation logging #22025 deprecated this setting (pending it's removal) but it's frequent usage will spam the deprecation logs and also fails test. As temporary work around we should not use the setting object directly. --- .../org/elasticsearch/cluster/metadata/IndexMetaData.java | 4 ++-- .../java/org/elasticsearch/gateway/PrimaryShardAllocator.java | 4 +++- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index dc7849812f1..db46cc502f9 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -1218,7 +1218,7 @@ public class IndexMetaData implements Diffable, ToXContent { * {@link #isIndexUsingShadowReplicas(org.elasticsearch.common.settings.Settings)}. */ public static boolean isOnSharedFilesystem(Settings settings) { - // don't use the settings directly, not to trigger manny deprecation + // don't use the settings directly, not to trigger manny deprecation logging return settings.getAsBoolean(SETTING_SHARED_FILESYSTEM, isIndexUsingShadowReplicas(settings)); } @@ -1228,7 +1228,7 @@ public class IndexMetaData implements Diffable, ToXContent { * setting for this is false. 
*/ public static boolean isIndexUsingShadowReplicas(Settings settings) { - // don't use the settings directly, not to trigger manny deprecation + // don't use the settings directly, not to trigger manny deprecation logging return settings.getAsBoolean(SETTING_SHADOW_REPLICAS, false); } diff --git a/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java b/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java index 45292d43c8e..bb2cc475464 100644 --- a/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java +++ b/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java @@ -476,8 +476,10 @@ public abstract class PrimaryShardAllocator extends BaseGatewayShardAllocator { * recovered on any node */ private boolean recoverOnAnyNode(IndexMetaData metaData) { + // don't use the settings directly, not to trigger manny deprecation logging return (IndexMetaData.isOnSharedFilesystem(metaData.getSettings()) || IndexMetaData.isOnSharedFilesystem(this.settings)) - && IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING.get(metaData.getSettings(), this.settings); + && (metaData.getSettings().getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false) || + this.settings.getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false)); } protected abstract FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation); From b88768155000f6535af3c4a9d16b7ff2bc38c4bf Mon Sep 17 00:00:00 2001 From: Boaz Leskes Date: Mon, 16 Jan 2017 16:15:32 +0100 Subject: [PATCH 17/28] Revert "Don'y use `INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING` directly as it triggers (many) deprecation logging" This reverts commit e976aa09bbd98b631cb12b86e9dc787d45fdb969. --- .../org/elasticsearch/cluster/metadata/IndexMetaData.java | 4 ++-- .../java/org/elasticsearch/gateway/PrimaryShardAllocator.java | 4 +--- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index db46cc502f9..dc7849812f1 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -1218,7 +1218,7 @@ public class IndexMetaData implements Diffable, ToXContent { * {@link #isIndexUsingShadowReplicas(org.elasticsearch.common.settings.Settings)}. */ public static boolean isOnSharedFilesystem(Settings settings) { - // don't use the settings directly, not to trigger manny deprecation logging + // don't use the settings directly, not to trigger manny deprecation return settings.getAsBoolean(SETTING_SHARED_FILESYSTEM, isIndexUsingShadowReplicas(settings)); } @@ -1228,7 +1228,7 @@ public class IndexMetaData implements Diffable, ToXContent { * setting for this is false. 
*/ public static boolean isIndexUsingShadowReplicas(Settings settings) { - // don't use the settings directly, not to trigger manny deprecation logging + // don't use the settings directly, not to trigger manny deprecation return settings.getAsBoolean(SETTING_SHADOW_REPLICAS, false); } diff --git a/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java b/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java index bb2cc475464..45292d43c8e 100644 --- a/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java +++ b/core/src/main/java/org/elasticsearch/gateway/PrimaryShardAllocator.java @@ -476,10 +476,8 @@ public abstract class PrimaryShardAllocator extends BaseGatewayShardAllocator { * recovered on any node */ private boolean recoverOnAnyNode(IndexMetaData metaData) { - // don't use the settings directly, not to trigger manny deprecation logging return (IndexMetaData.isOnSharedFilesystem(metaData.getSettings()) || IndexMetaData.isOnSharedFilesystem(this.settings)) - && (metaData.getSettings().getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false) || - this.settings.getAsBoolean(IndexMetaData.SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false)); + && IndexMetaData.INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING.get(metaData.getSettings(), this.settings); } protected abstract FetchResult fetchData(ShardRouting shard, RoutingAllocation allocation); From f88ab7606739052db87f5f03e04b1af5c7934436 Mon Sep 17 00:00:00 2001 From: Boaz Leskes Date: Mon, 16 Jan 2017 16:15:41 +0100 Subject: [PATCH 18/28] Revert "Add a deprecation notice to shadow replicas (#22025)" This reverts commit 0da190234c87838df5d37f2375e901351e05e03d. --- .../elasticsearch/cluster/metadata/IndexMetaData.java | 11 ++++------- .../main/java/org/elasticsearch/env/Environment.java | 3 +-- .../java/org/elasticsearch/env/NodeEnvironment.java | 3 ++- docs/reference/indices/shadow-replicas.asciidoc | 2 +- docs/reference/migration/migrate_6_0/indices.asciidoc | 5 ----- 5 files changed, 8 insertions(+), 16 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java index dc7849812f1..8c2dc3d47ed 100644 --- a/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java +++ b/core/src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java @@ -190,11 +190,11 @@ public class IndexMetaData implements Diffable, ToXContent { Setting.intSetting(SETTING_NUMBER_OF_REPLICAS, 1, 0, Property.Dynamic, Property.IndexScope); public static final String SETTING_SHADOW_REPLICAS = "index.shadow_replicas"; public static final Setting INDEX_SHADOW_REPLICAS_SETTING = - Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope, Property.Deprecated); + Setting.boolSetting(SETTING_SHADOW_REPLICAS, false, Property.IndexScope); public static final String SETTING_SHARED_FILESYSTEM = "index.shared_filesystem"; public static final Setting INDEX_SHARED_FILESYSTEM_SETTING = - Setting.boolSetting(SETTING_SHARED_FILESYSTEM, INDEX_SHADOW_REPLICAS_SETTING, Property.IndexScope, Property.Deprecated); + Setting.boolSetting(SETTING_SHARED_FILESYSTEM, false, Property.IndexScope); public static final String SETTING_AUTO_EXPAND_REPLICAS = "index.auto_expand_replicas"; public static final Setting INDEX_AUTO_EXPAND_REPLICAS_SETTING = AutoExpandReplicas.SETTING; @@ -232,11 +232,10 @@ public class IndexMetaData implements Diffable, ToXContent { public static 
final String SETTING_INDEX_UUID = "index.uuid"; public static final String SETTING_DATA_PATH = "index.data_path"; public static final Setting<String> INDEX_DATA_PATH_SETTING = - new Setting<>(SETTING_DATA_PATH, "", Function.identity(), Property.IndexScope, Property.Deprecated); + new Setting<>(SETTING_DATA_PATH, "", Function.identity(), Property.IndexScope); public static final String SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE = "index.shared_filesystem.recover_on_any_node"; public static final Setting<Boolean> INDEX_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE_SETTING = - Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, - Property.Dynamic, Property.IndexScope, Property.Deprecated); + Setting.boolSetting(SETTING_SHARED_FS_ALLOW_RECOVERY_ON_ANY_NODE, false, Property.Dynamic, Property.IndexScope); public static final String INDEX_UUID_NA_VALUE = "_na_"; public static final String INDEX_ROUTING_REQUIRE_GROUP_PREFIX = "index.routing.allocation.require"; @@ -1218,7 +1217,6 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContent { * {@link #isIndexUsingShadowReplicas(org.elasticsearch.common.settings.Settings)}. */ public static boolean isOnSharedFilesystem(Settings settings) { - // don't use the settings directly, not to trigger many deprecation return settings.getAsBoolean(SETTING_SHARED_FILESYSTEM, isIndexUsingShadowReplicas(settings)); } @@ -1228,7 +1226,6 @@ public class IndexMetaData implements Diffable<IndexMetaData>, ToXContent { * setting for this is false. */ public static boolean isIndexUsingShadowReplicas(Settings settings) { - // don't use the settings directly, not to trigger many deprecation return settings.getAsBoolean(SETTING_SHADOW_REPLICAS, false); } diff --git a/core/src/main/java/org/elasticsearch/env/Environment.java b/core/src/main/java/org/elasticsearch/env/Environment.java index 9c7026f2e9e..4b544aa3882 100644 --- a/core/src/main/java/org/elasticsearch/env/Environment.java +++ b/core/src/main/java/org/elasticsearch/env/Environment.java @@ -56,8 +56,7 @@ public class Environment { public static final Setting<String> PATH_LOGS_SETTING = Setting.simpleString("path.logs", Property.NodeScope); public static final Setting<List<String>> PATH_REPO_SETTING = Setting.listSetting("path.repo", Collections.emptyList(), Function.identity(), Property.NodeScope); - public static final Setting<String> PATH_SHARED_DATA_SETTING = Setting.simpleString("path.shared_data", - Property.NodeScope, Property.Deprecated); + public static final Setting<String> PATH_SHARED_DATA_SETTING = Setting.simpleString("path.shared_data", Property.NodeScope); public static final Setting<String> PIDFILE_SETTING = Setting.simpleString("pidfile", Property.NodeScope); private final Settings settings; diff --git a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java index 9bd6a5eb364..f1cdb5ae575 100644 --- a/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java +++ b/core/src/main/java/org/elasticsearch/env/NodeEnvironment.java @@ -38,6 +38,7 @@ import org.elasticsearch.common.Randomness; import org.elasticsearch.common.SuppressForbidden; import org.elasticsearch.common.UUIDs; import org.elasticsearch.common.io.FileSystemUtils; +import org.elasticsearch.common.logging.DeprecationLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -161,7 +162,7 @@ public final class NodeEnvironment implements Closeable { * If true automatically append node lock id to custom data paths.
*/ public static final Setting<Boolean> ADD_NODE_LOCK_ID_TO_CUSTOM_PATH = - Setting.boolSetting("node.add_lock_id_to_custom_path", true, Property.NodeScope, Property.Deprecated); + Setting.boolSetting("node.add_lock_id_to_custom_path", true, Property.NodeScope); /** diff --git a/docs/reference/indices/shadow-replicas.asciidoc b/docs/reference/indices/shadow-replicas.asciidoc index 625165d5bdd..3a0b23852b0 100644 --- a/docs/reference/indices/shadow-replicas.asciidoc +++ b/docs/reference/indices/shadow-replicas.asciidoc @@ -1,7 +1,7 @@ [[indices-shadow-replicas]] == Shadow replica indices -deprecated[5.2.0, Shadow replicas don't see much usage and we are planning to remove them] +experimental[] If you would like to use a shared filesystem, you can use the shadow replicas settings to choose where on disk the data for an index should be kept, as well diff --git a/docs/reference/migration/migrate_6_0/indices.asciidoc b/docs/reference/migration/migrate_6_0/indices.asciidoc index 7062ac7cb1e..be726ce155a 100644 --- a/docs/reference/migration/migrate_6_0/indices.asciidoc +++ b/docs/reference/migration/migrate_6_0/indices.asciidoc @@ -27,8 +27,3 @@ PUT _template/template_2 } -------------------------------------------------- // CONSOLE - - -=== Shadow Replicas are deprecated - -<<indices-shadow-replicas,Shadow replicas>> don't see much usage and we are planning to remove them. From 7a8884d9faf8c47ead5b9e00eaeac58aff0fa2aa Mon Sep 17 00:00:00 2001 From: Tim Brooks Date: Mon, 16 Jan 2017 09:17:44 -0600 Subject: [PATCH 19/28] Wrap rest httpclient with doPrivileged blocks (#22603) This is related to #22116. A number of modules (reindex, etc) use the rest client. The rest client opens connections using the apache http client. To avoid throwing SecurityException when using the SecurityManager, these operations must be privileged. This is tricky because connections are opened within the httpclient code on its reactor thread. The way I confronted this was to wrap the creation of the client (and the creation of its reactor thread) in a doPrivileged block. The new thread inherits the existing security context.
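For reference, a minimal, hypothetical sketch of the pattern (this is not the actual `RestClientBuilder` code; the factory class and method names here are made up, and it assumes the Apache HttpAsyncClient dependency is on the classpath):

--------------------------------------------------------------------------
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegedClientFactory {

    /**
     * Creates the async client (and therefore its internal reactor thread)
     * inside a doPrivileged block, so the SecurityManager checks made while
     * the reactor thread is spawned succeed even when unprivileged code is
     * on the call stack. The new thread inherits the access control context
     * that is in place at creation time.
     */
    public static CloseableHttpAsyncClient createClient() {
        CloseableHttpAsyncClient client = AccessController.doPrivileged(
                (PrivilegedAction<CloseableHttpAsyncClient>) HttpAsyncClients::createDefault);
        client.start();
        return client;
    }
}
--------------------------------------------------------------------------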
--- .../java/org/elasticsearch/client/RestClientBuilder.java | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java index d881bd70a44..4466a61d9df 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java @@ -28,6 +28,8 @@ import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; import org.apache.http.nio.conn.SchemeIOSessionStrategy; +import java.security.AccessController; +import java.security.PrivilegedAction; import java.util.Objects; /** @@ -177,7 +179,12 @@ public final class RestClientBuilder { if (failureListener == null) { failureListener = new RestClient.FailureListener(); } - CloseableHttpAsyncClient httpClient = createHttpClient(); + CloseableHttpAsyncClient httpClient = AccessController.doPrivileged(new PrivilegedAction<CloseableHttpAsyncClient>() { + @Override + public CloseableHttpAsyncClient run() { + return createHttpClient(); + } + }); RestClient restClient = new RestClient(httpClient, maxRetryTimeout, defaultHeaders, hosts, pathPrefix, failureListener); httpClient.start(); return restClient; From 2791c699601f7294fa342b51eb27f4266dc1acb2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christoph=20B=C3=BCscher?= Date: Mon, 16 Jan 2017 16:19:07 +0100 Subject: [PATCH 20/28] Update profile.asciidoc Making the "Human readable output" section a note instead of its own section. --- docs/reference/search/profile.asciidoc | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/docs/reference/search/profile.asciidoc b/docs/reference/search/profile.asciidoc index 7a0296d4d1f..6b48d9c3198 100644 --- a/docs/reference/search/profile.asciidoc +++ b/docs/reference/search/profile.asciidoc @@ -216,11 +216,9 @@ a `query` array and a `collector` array. Alongside the `search` object is an `a There will also be a `rewrite` metric showing the total time spent rewriting the query (in nanoseconds). -=== Human readable output - -As with other statistics apis, the Profile API supports human readable outputs for the time values. This can be turned on by adding -`?human=true` to the query string. In this case, in addition to the `"time_in_nanos"` field, the output contains the additional `"time"` -field with containing a rounded, human readable time value , (e.g. `"time": "391,9ms"`, `"time": "123.3micros"`). +NOTE: As with other statistics apis, the Profile API supports human readable outputs. This can be turned on by adding +`?human=true` to the query string. In this case, the output contains the additional `"time"` field containing rounded, +human readable timing information (e.g. `"time": "391,9ms"`, `"time": "123.3micros"`).
=== Profiling Queries From eea4db5512fb616608e283d295d583e62d4607c6 Mon Sep 17 00:00:00 2001 From: Michael McCandless Date: Mon, 16 Jan 2017 10:34:47 -0500 Subject: [PATCH 21/28] Fix thread safety of Stempel's token filter factory (#22610) Closes #21911 --- .../pl/PolishStemTokenFilterFactory.java | 11 +---- .../analysis/AnalysisPolishFactoryTests.java | 45 +++++++++++++++++-- 2 files changed, 43 insertions(+), 13 deletions(-) diff --git a/plugins/analysis-stempel/src/main/java/org/elasticsearch/index/analysis/pl/PolishStemTokenFilterFactory.java b/plugins/analysis-stempel/src/main/java/org/elasticsearch/index/analysis/pl/PolishStemTokenFilterFactory.java index afc7d527a6c..aa3194c5831 100644 --- a/plugins/analysis-stempel/src/main/java/org/elasticsearch/index/analysis/pl/PolishStemTokenFilterFactory.java +++ b/plugins/analysis-stempel/src/main/java/org/elasticsearch/index/analysis/pl/PolishStemTokenFilterFactory.java @@ -35,20 +35,11 @@ import java.io.IOException; public class PolishStemTokenFilterFactory extends AbstractTokenFilterFactory { - private final StempelStemmer stemmer; - public PolishStemTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { super(indexSettings, name, settings); - Trie tire; - try { - tire = StempelStemmer.load(PolishAnalyzer.class.getResourceAsStream(PolishAnalyzer.DEFAULT_STEMMER_FILE)); - } catch (IOException ex) { - throw new RuntimeException("Unable to load default stemming tables", ex); - } - stemmer = new StempelStemmer(tire); } @Override public TokenStream create(TokenStream tokenStream) { - return new StempelFilter(tokenStream, stemmer); + return new StempelFilter(tokenStream, new StempelStemmer(PolishAnalyzer.getDefaultTable())); } } diff --git a/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java b/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java index abf739d010a..e68cb260b0b 100644 --- a/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java +++ b/plugins/analysis-stempel/src/test/java/org/elasticsearch/index/analysis/AnalysisPolishFactoryTests.java @@ -19,12 +19,24 @@ package org.elasticsearch.index.analysis; -import org.elasticsearch.AnalysisFactoryTestCase; -import org.elasticsearch.index.analysis.pl.PolishStemTokenFilterFactory; - +import java.io.IOException; import java.util.HashMap; import java.util.Map; +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.BaseTokenStreamTestCase; +import org.apache.lucene.analysis.MockTokenizer; +import org.apache.lucene.analysis.TokenFilter; +import org.apache.lucene.analysis.Tokenizer; +import org.elasticsearch.AnalysisFactoryTestCase; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; +import org.elasticsearch.common.UUIDs; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.index.analysis.pl.PolishStemTokenFilterFactory; + public class AnalysisPolishFactoryTests extends AnalysisFactoryTestCase { @Override @@ -34,4 +46,31 @@ public class AnalysisPolishFactoryTests extends AnalysisFactoryTestCase { return filters; } + public void testThreadSafety() throws IOException { + // TODO: is this the right boilerplate? 
I forked this out of TransportAnalyzeAction.java: + Settings settings = Settings.builder() + // for _na_ + .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0) + .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetaData.SETTING_INDEX_UUID, UUIDs.randomBase64UUID()) + .put(Environment.PATH_HOME_SETTING.getKey(), createTempDir().toString()) + .build(); + Environment environment = new Environment(settings); + IndexMetaData metaData = IndexMetaData.builder(IndexMetaData.INDEX_UUID_NA_VALUE).settings(settings).build(); + IndexSettings indexSettings = new IndexSettings(metaData, Settings.EMPTY); + testThreadSafety(new PolishStemTokenFilterFactory(indexSettings, environment, "stempelpolishstem", settings)); + } + + // TODO: move to AnalysisFactoryTestCase so we can more easily test thread safety for all factories + private void testThreadSafety(TokenFilterFactory factory) throws IOException { + final Analyzer analyzer = new Analyzer() { + @Override + protected TokenStreamComponents createComponents(String fieldName) { + Tokenizer tokenizer = new MockTokenizer(); + return new TokenStreamComponents(tokenizer, factory.create(tokenizer)); + } + }; + BaseTokenStreamTestCase.checkRandomData(random(), analyzer, 100); + } } From 193111919cc07b19bab4b45a74ee3ab1ca2e15d8 Mon Sep 17 00:00:00 2001 From: Luca Cavanna Date: Mon, 16 Jan 2017 18:54:44 +0100 Subject: [PATCH 22/28] move ignore parameter support from yaml test client to low level rest client (#22637) All the language clients support a special ignore parameter that doesn't get passed to elasticsearch with the request, but is used to indicate which error codes should not lead to an exception if returned for a specific request. Moving this to the low level REST client will allow the high level REST client to make use of it too, for instance so that it doesn't have to intercept ResponseExceptions when the get api returns a 404. --- .../org/elasticsearch/client/RestClient.java | 49 +++++++++++--- .../client/RestClientSingleHostTests.java | 64 +++++++++++++++---- .../client/RestClientTestUtil.java | 2 +- docs/java-rest/usage.asciidoc | 9 ++- .../test/rest/yaml/ClientYamlTestClient.java | 28 +------- 5 files changed, 101 insertions(+), 51 deletions(-) diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java index 89c3309dbbd..ce33224b7ec 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java @@ -49,6 +49,7 @@ import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.Comparator; +import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.List; @@ -282,15 +283,44 @@ public class RestClient implements Closeable { public void performRequestAsync(String method, String endpoint, Map<String, String> params, HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, ResponseListener responseListener, Header...
headers) { - URI uri = buildUri(pathPrefix, endpoint, params); + Objects.requireNonNull(params, "params must not be null"); + Map<String, String> requestParams = new HashMap<>(params); + //ignore is a special parameter supported by the clients, shouldn't be sent to es + String ignoreString = requestParams.remove("ignore"); + Set<Integer> ignoreErrorCodes; + if (ignoreString == null) { + if (HttpHead.METHOD_NAME.equals(method)) { + //404 never causes error if returned for a HEAD request + ignoreErrorCodes = Collections.singleton(404); + } else { + ignoreErrorCodes = Collections.emptySet(); + } + } else { + String[] ignoresArray = ignoreString.split(","); + ignoreErrorCodes = new HashSet<>(); + if (HttpHead.METHOD_NAME.equals(method)) { + //404 never causes error if returned for a HEAD request + ignoreErrorCodes.add(404); + } + for (String ignoreCode : ignoresArray) { + try { + ignoreErrorCodes.add(Integer.valueOf(ignoreCode)); + } catch (NumberFormatException e) { + throw new IllegalArgumentException("ignore value should be a number, found [" + ignoreString + "] instead", e); + } + } + } + URI uri = buildUri(pathPrefix, endpoint, requestParams); HttpRequestBase request = createHttpRequest(method, uri, entity); setHeaders(request, headers); FailureTrackingResponseListener failureTrackingResponseListener = new FailureTrackingResponseListener(responseListener); long startTime = System.nanoTime(); - performRequestAsync(startTime, nextHost().iterator(), request, httpAsyncResponseConsumerFactory, failureTrackingResponseListener); + performRequestAsync(startTime, nextHost().iterator(), request, ignoreErrorCodes, httpAsyncResponseConsumerFactory, + failureTrackingResponseListener); } private void performRequestAsync(final long startTime, final Iterator<HttpHost> hosts, final HttpRequestBase request, + final Set<Integer> ignoreErrorCodes, final HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, final FailureTrackingResponseListener listener) { final HttpHost host = hosts.next(); @@ -304,7 +334,7 @@ public class RestClient implements Closeable { RequestLogger.logResponse(logger, request, host, httpResponse); int statusCode = httpResponse.getStatusLine().getStatusCode(); Response response = new Response(request.getRequestLine(), host, httpResponse); - if (isSuccessfulResponse(request.getMethod(), statusCode)) { + if (isSuccessfulResponse(statusCode) || ignoreErrorCodes.contains(response.getStatusLine().getStatusCode())) { onResponse(host); listener.onSuccess(response); } else { ResponseException responseException = new ResponseException(response); if (isRetryStatus(statusCode)) { //mark host dead and retry against next one onFailure(host); - retryIfPossible(responseException, hosts, request); + retryIfPossible(responseException); } else { //mark host alive and don't retry, as the error should be a request problem onResponse(host); @@ -329,13 +359,13 @@ try { RequestLogger.logFailedRequest(logger, request, host, failure); onFailure(host); - retryIfPossible(failure, hosts, request); + retryIfPossible(failure); } catch(Exception e) { listener.onDefinitiveFailure(e); } } - private void retryIfPossible(Exception exception, Iterator<HttpHost> hosts, HttpRequestBase request) { + private void retryIfPossible(Exception exception) { if (hosts.hasNext()) { //in case we are retrying, check whether maxRetryTimeout has been reached long timeElapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime); @@ -347,7 +377,7 @@ } else {
listener.trackFailure(exception); request.reset(); - performRequestAsync(startTime, hosts, request, httpAsyncResponseConsumerFactory, listener); + performRequestAsync(startTime, hosts, request, ignoreErrorCodes, httpAsyncResponseConsumerFactory, listener); } } else { listener.onDefinitiveFailure(exception); @@ -452,8 +482,8 @@ public class RestClient implements Closeable { client.close(); } - private static boolean isSuccessfulResponse(String method, int statusCode) { - return statusCode < 300 || (HttpHead.METHOD_NAME.equals(method) && statusCode == 404); + private static boolean isSuccessfulResponse(int statusCode) { + return statusCode < 300; } private static boolean isRetryStatus(int statusCode) { @@ -510,7 +540,6 @@ public class RestClient implements Closeable { } private static URI buildUri(String pathPrefix, String path, Map params) { - Objects.requireNonNull(params, "params must not be null"); Objects.requireNonNull(path, "path must not be null"); try { String fullPath; diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java index 865f9b1817a..2489eb717ff 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java @@ -216,23 +216,45 @@ public class RestClientSingleHostTests extends RestClientTestCase { */ public void testErrorStatusCodes() throws IOException { for (String method : getHttpMethods()) { + Set expectedIgnores = new HashSet<>(); + String ignoreParam = ""; + if (HttpHead.METHOD_NAME.equals(method)) { + expectedIgnores.add(404); + } + if (randomBoolean()) { + int numIgnores = randomIntBetween(1, 3); + for (int i = 0; i < numIgnores; i++) { + Integer code = randomFrom(getAllErrorStatusCodes()); + expectedIgnores.add(code); + ignoreParam += code; + if (i < numIgnores - 1) { + ignoreParam += ","; + } + } + } //error status codes should cause an exception to be thrown for (int errorStatusCode : getAllErrorStatusCodes()) { try { - Response response = performRequest(method, "/" + errorStatusCode); - if (method.equals("HEAD") && errorStatusCode == 404) { - //no exception gets thrown although we got a 404 - assertThat(response.getStatusLine().getStatusCode(), equalTo(errorStatusCode)); + Map params; + if (ignoreParam.isEmpty()) { + params = Collections.emptyMap(); + } else { + params = Collections.singletonMap("ignore", ignoreParam); + } + Response response = performRequest(method, "/" + errorStatusCode, params); + if (expectedIgnores.contains(errorStatusCode)) { + //no exception gets thrown although we got an error status code, as it was configured to be ignored + assertEquals(errorStatusCode, response.getStatusLine().getStatusCode()); } else { fail("request should have failed"); } } catch(ResponseException e) { - if (method.equals("HEAD") && errorStatusCode == 404) { + if (expectedIgnores.contains(errorStatusCode)) { throw e; } - assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(errorStatusCode)); + assertEquals(errorStatusCode, e.getResponse().getStatusLine().getStatusCode()); } - if (errorStatusCode <= 500) { + if (errorStatusCode <= 500 || expectedIgnores.contains(errorStatusCode)) { failureListener.assertNotCalled(); } else { failureListener.assertCalled(httpHost); @@ -351,11 +373,10 @@ public class RestClientSingleHostTests extends RestClientTestCase { private HttpUriRequest performRandomRequest(String method) 
throws Exception { String uriAsString = "/" + randomStatusCode(getRandom()); URIBuilder uriBuilder = new URIBuilder(uriAsString); - Map params = Collections.emptyMap(); + final Map params = new HashMap<>(); boolean hasParams = randomBoolean(); if (hasParams) { int numParams = randomIntBetween(1, 3); - params = new HashMap<>(numParams); for (int i = 0; i < numParams; i++) { String paramKey = "param-" + i; String paramValue = randomAsciiOfLengthBetween(3, 10); @@ -363,6 +384,14 @@ public class RestClientSingleHostTests extends RestClientTestCase { uriBuilder.addParameter(paramKey, paramValue); } } + if (randomBoolean()) { + //randomly add some ignore parameter, which doesn't get sent as part of the request + String ignore = Integer.toString(randomFrom(RestClientTestUtil.getAllErrorStatusCodes())); + if (randomBoolean()) { + ignore += "," + Integer.toString(randomFrom(RestClientTestUtil.getAllErrorStatusCodes())); + } + params.put("ignore", ignore); + } URI uri = uriBuilder.build(); HttpUriRequest request; @@ -433,16 +462,25 @@ public class RestClientSingleHostTests extends RestClientTestCase { } private Response performRequest(String method, String endpoint, Header... headers) throws IOException { - switch(randomIntBetween(0, 2)) { + return performRequest(method, endpoint, Collections.emptyMap(), headers); + } + + private Response performRequest(String method, String endpoint, Map params, Header... headers) throws IOException { + int methodSelector; + if (params.isEmpty()) { + methodSelector = randomIntBetween(0, 2); + } else { + methodSelector = randomIntBetween(1, 2); + } + switch(methodSelector) { case 0: return restClient.performRequest(method, endpoint, headers); case 1: - return restClient.performRequest(method, endpoint, Collections.emptyMap(), headers); + return restClient.performRequest(method, endpoint, params, headers); case 2: - return restClient.performRequest(method, endpoint, Collections.emptyMap(), (HttpEntity)null, headers); + return restClient.performRequest(method, endpoint, params, (HttpEntity)null, headers); default: throw new UnsupportedOperationException(); } } - } diff --git a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java index dbf85578b19..a0a6641abbc 100644 --- a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java +++ b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java @@ -59,7 +59,7 @@ final class RestClientTestUtil { } static int randomStatusCode(Random random) { - return RandomPicks.randomFrom(random, ALL_ERROR_STATUS_CODES); + return RandomPicks.randomFrom(random, ALL_STATUS_CODES); } static int randomOkStatusCode(Random random) { diff --git a/docs/java-rest/usage.asciidoc b/docs/java-rest/usage.asciidoc index b3097ef9d0b..69eb16280ed 100644 --- a/docs/java-rest/usage.asciidoc +++ b/docs/java-rest/usage.asciidoc @@ -206,7 +206,14 @@ access to the returned response. NOTE: A `ResponseException` is **not** thrown for `HEAD` requests that return a `404` status code because it is an expected `HEAD` response that simply denotes that the resource is not found. All other HTTP methods (e.g., `GET`) -throw a `ResponseException` for `404` responses. +throw a `ResponseException` for `404` responses unless the `ignore` parameter +contains `404`. `ignore` is a special client parameter that doesn't get sent +to Elasticsearch and contains a comma separated list of error status codes. 
+It allows you to control whether certain error status codes should be treated as +expected responses rather than as exceptions. This is useful for instance +with the get api as it can return `404` when the document is missing, in which +case the response body will not contain an error but rather the usual get api +response, just without the document as it was not found. === Example requests diff --git a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java index 3b719b6d04f..748a08384ca 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java +++ b/test/framework/src/main/java/org/elasticsearch/test/rest/yaml/ClientYamlTestClient.java @@ -40,8 +40,6 @@ import org.elasticsearch.test.rest.yaml.restspec.ClientYamlSuiteRestSpec; import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; -import java.util.ArrayList; -import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; @@ -59,7 +57,7 @@ public class ClientYamlTestClient { * Query params that don't need to be declared in the spec, they are supported by default. */ private static final Set<String> ALWAYS_ACCEPTED_QUERY_STRING_PARAMS = Sets.newHashSet( - "error_trace", "filter_path", "human", "pretty", "source"); + "ignore", "error_trace", "human", "filter_path", "pretty", "source"); private final ClientYamlSuiteRestSpec restSpec; private final RestClient restClient; @@ -101,31 +99,12 @@ public class ClientYamlTestClient { } } - List<Integer> ignores = new ArrayList<>(); - Map<String, String> requestParams; - if (params == null) { - requestParams = Collections.emptyMap(); - } else { - requestParams = new HashMap<>(params); - if (params.isEmpty() == false) { - //ignore is a special parameter supported by the clients, shouldn't be sent to es - String ignoreString = requestParams.remove("ignore"); - if (ignoreString != null) { - try { - ignores.add(Integer.valueOf(ignoreString)); - } catch (NumberFormatException e) { - throw new IllegalArgumentException("ignore value should be a number, found [" + ignoreString + "] instead"); - } - } - } - } - ClientYamlSuiteRestApi restApi = restApi(apiName); //divide params between ones that go within query string and ones that go within path Map<String, String> pathParts = new HashMap<>(); Map<String, String> queryStringParams = new HashMap<>(); - for (Map.Entry<String, String> entry : requestParams.entrySet()) { + for (Map.Entry<String, String> entry : params.entrySet()) { if (restApi.getPathParts().contains(entry.getKey())) { pathParts.put(entry.getKey(), entry.getValue()); } else { @@ -197,9 +176,6 @@ public class ClientYamlTestClient { Response response = restClient.performRequest(requestMethod, requestPath, queryStringParams, requestBody, requestHeaders); return new ClientYamlTestResponse(response); } catch(ResponseException e) { - if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) { - return new ClientYamlTestResponse(e.getResponse()); - } throw new ClientYamlTestResponseException(e); } } From f30b1f82eea3ae892f33e143c8237f89baf09a95 Mon Sep 17 00:00:00 2001 From: Simon Willnauer Date: Mon, 16 Jan 2017 21:06:08 +0100 Subject: [PATCH 23/28] Remove HttpServer and HttpServerAdapter in favor of a simple dispatch method (#22636) Today we have several abstractions that essentially just provide a simple dispatch method to the plugins defining a `HttpServerTransport`.
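Under the new model a transport implementation simply hands every decoded request to the injected dispatcher. A minimal sketch of that idea follows; the surrounding class and method names are hypothetical, only `HttpServerTransport.Dispatcher`, `RestRequest`, `RestChannel` and `ThreadContext` come from the change itself:

--------------------------------------------------------------------------
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.http.HttpServerTransport;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;

public class ExampleHttpTransport {

    private final HttpServerTransport.Dispatcher dispatcher;
    private final ThreadContext threadContext;

    public ExampleHttpTransport(HttpServerTransport.Dispatcher dispatcher, ThreadContext threadContext) {
        this.dispatcher = dispatcher;
        this.threadContext = threadContext;
    }

    // called by the transport whenever a decoded HTTP request comes in
    void handleIncomingRequest(RestRequest request, RestChannel channel) {
        // no HttpServer/HttpServerAdapter indirection anymore: delegate
        // straight to the dispatcher (the RestController by default)
        dispatcher.dispatch(request, channel, threadContext);
    }
}
--------------------------------------------------------------------------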
This commit removes `HttpServer` and `HttpServerAdapter` and introduces a simple `Dispatcher` functional interface that delegates to `RestController` by default. Relates to #18482 --- .../resources/checkstyle_suppressions.xml | 1 - .../elasticsearch/action/ActionModule.java | 19 +- .../node/info/TransportNodesInfoAction.java | 2 +- .../node/stats/TransportNodesStatsAction.java | 2 +- .../stats/TransportClusterStatsAction.java | 2 +- .../ingest/DeletePipelineTransportAction.java | 2 +- .../ingest/GetPipelineTransportAction.java | 2 +- .../ingest/PutPipelineTransportAction.java | 2 +- .../SimulatePipelineTransportAction.java | 3 +- .../elasticsearch/bootstrap/Bootstrap.java | 3 +- .../cli/EnvironmentAwareCommand.java | 2 +- .../client/transport/TransportClient.java | 6 +- .../common/network/NetworkModule.java | 5 +- .../common/settings/SettingsModule.java | 6 +- .../org/elasticsearch/http/HttpServer.java | 206 ---------------- .../elasticsearch/http/HttpServerAdapter.java | 30 --- .../http/HttpServerTransport.java | 16 +- .../index/translog/TranslogToolCli.java | 4 - .../InternalSettingsPreparer.java | 9 +- .../java/org/elasticsearch/node/Node.java | 41 ++-- .../org/elasticsearch/node/NodeModule.java | 2 - .../node/{service => }/NodeService.java | 22 +- .../plugins/InstallPluginCommand.java | 3 - .../plugins/ListPluginsCommand.java | 3 - .../elasticsearch/plugins/NetworkPlugin.java | 7 +- .../elasticsearch/rest/RestController.java | 132 +++++++++- .../common/network/NetworkModuleTests.java | 15 +- .../elasticsearch/http/HttpServerTests.java | 231 ------------------ ...gestProcessorNotInstalledOnAllNodesIT.java | 2 +- .../InternalSettingsPreparerTests.java | 1 + .../rest/RestControllerTests.java | 203 ++++++++++++++- .../cluster/RestNodesStatsActionTests.java | 3 +- .../indices/RestIndicesStatsActionTests.java | 2 +- .../action/cat/RestIndicesActionTests.java | 2 +- .../action/cat/RestRecoveryActionTests.java | 2 +- .../netty4/Netty4HttpServerTransport.java | 16 +- .../elasticsearch/transport/Netty4Plugin.java | 9 +- .../http/netty4/Netty4HttpChannelTests.java | 9 +- .../Netty4HttpServerPipeliningTests.java | 2 +- .../Netty4HttpServerTransportTests.java | 9 +- .../java/org/elasticsearch/node/MockNode.java | 4 - .../test/AbstractQueryTestCase.java | 2 +- .../test/InternalTestCluster.java | 3 +- 43 files changed, 464 insertions(+), 583 deletions(-) delete mode 100644 core/src/main/java/org/elasticsearch/http/HttpServer.java delete mode 100644 core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java rename core/src/main/java/org/elasticsearch/node/{internal => }/InternalSettingsPreparer.java (96%) rename core/src/main/java/org/elasticsearch/node/{service => }/NodeService.java (88%) delete mode 100644 core/src/test/java/org/elasticsearch/http/HttpServerTests.java diff --git a/buildSrc/src/main/resources/checkstyle_suppressions.xml b/buildSrc/src/main/resources/checkstyle_suppressions.xml index a3e5af6c4d4..a089d677dc9 100644 --- a/buildSrc/src/main/resources/checkstyle_suppressions.xml +++ b/buildSrc/src/main/resources/checkstyle_suppressions.xml @@ -430,7 +430,6 @@ - diff --git a/core/src/main/java/org/elasticsearch/action/ActionModule.java b/core/src/main/java/org/elasticsearch/action/ActionModule.java index a24ed5f8083..0560b11cbb8 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionModule.java +++ b/core/src/main/java/org/elasticsearch/action/ActionModule.java @@ -19,7 +19,6 @@ package org.elasticsearch.action; -import java.util.ArrayList; import java.util.HashSet; import
java.util.List; import java.util.Map; @@ -196,6 +195,7 @@ import org.elasticsearch.action.termvectors.TransportShardMultiTermsVectorAction import org.elasticsearch.action.termvectors.TransportTermVectorsAction; import org.elasticsearch.action.update.TransportUpdateAction; import org.elasticsearch.action.update.UpdateAction; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.common.NamedRegistry; import org.elasticsearch.common.inject.AbstractModule; @@ -205,6 +205,7 @@ import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.ActionPlugin.ActionHandler; import org.elasticsearch.rest.RestController; @@ -332,7 +333,8 @@ public class ActionModule extends AbstractModule { private final RestController restController; public ActionModule(boolean transportClient, Settings settings, IndexNameExpressionResolver resolver, - ClusterSettings clusterSettings, ThreadPool threadPool, List actionPlugins) { + ClusterSettings clusterSettings, ThreadPool threadPool, List actionPlugins, + NodeClient nodeClient, CircuitBreakerService circuitBreakerService) { this.transportClient = transportClient; this.settings = settings; this.actionPlugins = actionPlugins; @@ -352,9 +354,14 @@ public class ActionModule extends AbstractModule { restWrapper = newRestWrapper; } } - restController = new RestController(settings, headers, restWrapper); + if (transportClient) { + restController = null; + } else { + restController = new RestController(settings, headers, restWrapper, nodeClient, circuitBreakerService); + } } + public Map> getActions() { return actions; } @@ -648,8 +655,10 @@ public class ActionModule extends AbstractModule { } } - // Bind the RestController which is required (by Node) even if rest isn't enabled. - bind(RestController.class).toInstance(restController); + if (restController != null) { + // Bind the RestController which is required (by Node) even if rest isn't enabled. 
+ bind(RestController.class).toInstance(restController); + } // Setup the RestHandlers if (NetworkModule.HTTP_ENABLED.get(settings)) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java index c26554b25e0..7d80b84d5d2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java index b4cef38d28d..e4034582f96 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java index 45eb83dd9e1..d77bc599258 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java @@ -39,7 +39,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java index 74ce894b053..45cb83634f8 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java @@ -30,7 +30,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import 
org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java index 8bac5c7b804..f64b36d47ae 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java @@ -30,7 +30,7 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java index 82cd8d8eb7b..7dde9818049 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java @@ -36,7 +36,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; import org.elasticsearch.ingest.IngestInfo; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java index 4f9a219c8ad..61fd400a1d3 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.ingest; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; @@ -28,7 +27,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java index 33f3f922fa4..2b47908c352 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java +++ b/core/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java @@ -40,7 +40,6 @@ import org.elasticsearch.common.logging.LogConfigurator; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.network.IfConfig; import org.elasticsearch.common.settings.KeyStoreWrapper; -import org.elasticsearch.common.settings.SecureSetting; import org.elasticsearch.common.settings.SecureSettings; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.BoundTransportAddress; @@ 
-50,7 +49,7 @@ import org.elasticsearch.monitor.os.OsProbe; import org.elasticsearch.monitor.process.ProcessProbe; import org.elasticsearch.node.Node; import org.elasticsearch.node.NodeValidationException; -import org.elasticsearch.node.internal.InternalSettingsPreparer; +import org.elasticsearch.node.InternalSettingsPreparer; import java.io.ByteArrayOutputStream; import java.io.IOException; diff --git a/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java b/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java index b19fc4ca957..8372a6b8ab8 100644 --- a/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java +++ b/core/src/main/java/org/elasticsearch/cli/EnvironmentAwareCommand.java @@ -24,7 +24,7 @@ import joptsimple.OptionSpec; import joptsimple.util.KeyValuePair; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; +import org.elasticsearch.node.InternalSettingsPreparer; import java.util.HashMap; import java.util.Locale; diff --git a/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java b/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java index 803c0e6d1d1..51bed4a4582 100644 --- a/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java +++ b/core/src/main/java/org/elasticsearch/client/transport/TransportClient.java @@ -46,7 +46,7 @@ import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.node.Node; -import org.elasticsearch.node.internal.InternalSettingsPreparer; +import org.elasticsearch.node.InternalSettingsPreparer; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.NetworkPlugin; import org.elasticsearch.plugins.Plugin; @@ -160,7 +160,7 @@ public abstract class TransportClient extends AbstractClient { } modules.add(b -> b.bind(ThreadPool.class).toInstance(threadPool)); ActionModule actionModule = new ActionModule(true, settings, null, settingsModule.getClusterSettings(), - threadPool, pluginsService.filterPlugins(ActionPlugin.class)); + threadPool, pluginsService.filterPlugins(ActionPlugin.class), null, null); modules.add(actionModule); CircuitBreakerService circuitBreakerService = Node.createCircuitBreakerService(settingsModule.getSettings(), @@ -170,7 +170,7 @@ public abstract class TransportClient extends AbstractClient { resourcesToClose.add(bigArrays); modules.add(settingsModule); NetworkModule networkModule = new NetworkModule(settings, true, pluginsService.filterPlugins(NetworkPlugin.class), threadPool, - bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService); + bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, null); final Transport transport = networkModule.getTransportSupplier().get(); final TransportService transportService = new TransportService(settings, transport, threadPool, networkModule.getTransportInterceptor(), diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java b/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java index 04f6b62dde1..81d228da230 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkModule.java @@ -38,6 +38,7 @@ import 
org.elasticsearch.common.xcontent.NamedXContentRegistry.FromXContent; import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.plugins.NetworkPlugin; +import org.elasticsearch.rest.RestController; import org.elasticsearch.tasks.RawTaskStatus; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; @@ -109,13 +110,13 @@ public final class NetworkModule { CircuitBreakerService circuitBreakerService, NamedWriteableRegistry namedWriteableRegistry, NamedXContentRegistry xContentRegistry, - NetworkService networkService) { + NetworkService networkService, HttpServerTransport.Dispatcher dispatcher) { this.settings = settings; this.transportClient = transportClient; for (NetworkPlugin plugin : plugins) { if (transportClient == false && HTTP_ENABLED.get(settings)) { Map> httpTransportFactory = plugin.getHttpTransports(settings, threadPool, bigArrays, - circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService); + circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, dispatcher); for (Map.Entry> entry : httpTransportFactory.entrySet()) { registerHttpTransport(entry.getKey(), entry.getValue()); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java b/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java index 60276ce14f7..44d18208803 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java +++ b/core/src/main/java/org/elasticsearch/common/settings/SettingsModule.java @@ -54,6 +54,7 @@ public class SettingsModule implements Module { private final Logger logger; private final IndexScopedSettings indexScopedSettings; private final ClusterSettings clusterSettings; + private final SettingsFilter settingsFilter; public SettingsModule(Settings settings, Setting... additionalSettings) { this(settings, Arrays.asList(additionalSettings), Collections.emptyList()); @@ -137,12 +138,13 @@ public class SettingsModule implements Module { final Predicate acceptOnlyClusterSettings = TRIBE_CLIENT_NODE_SETTINGS_PREDICATE.negate(); clusterSettings.validate(settings.filter(acceptOnlyClusterSettings)); validateTribeSettings(settings, clusterSettings); + this.settingsFilter = new SettingsFilter(settings, settingsFilterPattern); } @Override public void configure(Binder binder) { binder.bind(Settings.class).toInstance(settings); - binder.bind(SettingsFilter.class).toInstance(new SettingsFilter(settings, settingsFilterPattern)); + binder.bind(SettingsFilter.class).toInstance(settingsFilter); binder.bind(ClusterSettings.class).toInstance(clusterSettings); binder.bind(IndexScopedSettings.class).toInstance(indexScopedSettings); } @@ -218,4 +220,6 @@ public class SettingsModule implements Module { public ClusterSettings getClusterSettings() { return clusterSettings; } + + public SettingsFilter getSettingsFilter() { return settingsFilter; } } diff --git a/core/src/main/java/org/elasticsearch/http/HttpServer.java b/core/src/main/java/org/elasticsearch/http/HttpServer.java deleted file mode 100644 index 06bc392587a..00000000000 --- a/core/src/main/java/org/elasticsearch/http/HttpServer.java +++ /dev/null @@ -1,206 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.http; - -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.client.node.NodeClient; -import org.elasticsearch.common.Nullable; -import org.elasticsearch.common.breaker.CircuitBreaker; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.component.AbstractLifecycleComponent; -import org.elasticsearch.common.io.Streams; -import org.elasticsearch.common.io.stream.BytesStreamOutput; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.concurrent.ThreadContext; -import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestChannel; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.RestStatus; - -import java.io.ByteArrayOutputStream; -import java.io.IOException; -import java.io.InputStream; -import java.util.concurrent.atomic.AtomicBoolean; - -import static org.elasticsearch.rest.RestStatus.FORBIDDEN; -import static org.elasticsearch.rest.RestStatus.INTERNAL_SERVER_ERROR; - -/** - * A component to serve http requests, backed by rest handlers. 
- */ -public class HttpServer extends AbstractLifecycleComponent implements HttpServerAdapter { - private final HttpServerTransport transport; - - private final RestController restController; - - private final NodeClient client; - - private final CircuitBreakerService circuitBreakerService; - - public HttpServer(Settings settings, HttpServerTransport transport, RestController restController, - NodeClient client, CircuitBreakerService circuitBreakerService) { - super(settings); - this.transport = transport; - this.restController = restController; - this.client = client; - this.circuitBreakerService = circuitBreakerService; - transport.httpServerAdapter(this); - } - - - @Override - protected void doStart() { - transport.start(); - if (logger.isInfoEnabled()) { - logger.info("{}", transport.boundAddress()); - } - } - - @Override - protected void doStop() { - transport.stop(); - } - - @Override - protected void doClose() { - transport.close(); - } - - public HttpInfo info() { - return transport.info(); - } - - public HttpStats stats() { - return transport.stats(); - } - - public void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext threadContext) { - if (request.rawPath().equals("/favicon.ico")) { - handleFavicon(request, channel); - return; - } - RestChannel responseChannel = channel; - try { - int contentLength = request.content().length(); - if (restController.canTripCircuitBreaker(request)) { - inFlightRequestsBreaker(circuitBreakerService).addEstimateBytesAndMaybeBreak(contentLength, ""); - } else { - inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(contentLength); - } - // iff we could reserve bytes for the request we need to send the response also over this channel - responseChannel = new ResourceHandlingHttpChannel(channel, circuitBreakerService, contentLength); - restController.dispatchRequest(request, responseChannel, client, threadContext); - } catch (Exception e) { - try { - responseChannel.sendResponse(new BytesRestResponse(channel, e)); - } catch (Exception inner) { - inner.addSuppressed(e); - logger.error((Supplier) () -> - new ParameterizedMessage("failed to send failure response for uri [{}]", request.uri()), inner); - } - } - } - - void handleFavicon(RestRequest request, RestChannel channel) { - if (request.method() == RestRequest.Method.GET) { - try { - try (InputStream stream = getClass().getResourceAsStream("/config/favicon.ico")) { - ByteArrayOutputStream out = new ByteArrayOutputStream(); - Streams.copy(stream, out); - BytesRestResponse restResponse = new BytesRestResponse(RestStatus.OK, "image/x-icon", out.toByteArray()); - channel.sendResponse(restResponse); - } - } catch (IOException e) { - channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } - } else { - channel.sendResponse(new BytesRestResponse(FORBIDDEN, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); - } - } - - private static final class ResourceHandlingHttpChannel implements RestChannel { - private final RestChannel delegate; - private final CircuitBreakerService circuitBreakerService; - private final int contentLength; - private final AtomicBoolean closed = new AtomicBoolean(); - - public ResourceHandlingHttpChannel(RestChannel delegate, CircuitBreakerService circuitBreakerService, int contentLength) { - this.delegate = delegate; - this.circuitBreakerService = circuitBreakerService; - this.contentLength = contentLength; - } - - @Override - public XContentBuilder newBuilder() throws 
IOException { - return delegate.newBuilder(); - } - - @Override - public XContentBuilder newErrorBuilder() throws IOException { - return delegate.newErrorBuilder(); - } - - @Override - public XContentBuilder newBuilder(@Nullable BytesReference autoDetectSource, boolean useFiltering) throws IOException { - return delegate.newBuilder(autoDetectSource, useFiltering); - } - - @Override - public BytesStreamOutput bytesOutput() { - return delegate.bytesOutput(); - } - - @Override - public RestRequest request() { - return delegate.request(); - } - - @Override - public boolean detailedErrorsEnabled() { - return delegate.detailedErrorsEnabled(); - } - - @Override - public void sendResponse(RestResponse response) { - close(); - delegate.sendResponse(response); - } - - private void close() { - // attempt to close once atomically - if (closed.compareAndSet(false, true) == false) { - throw new IllegalStateException("Channel is already closed"); - } - inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(-contentLength); - } - - } - - private static CircuitBreaker inFlightRequestsBreaker(CircuitBreakerService circuitBreakerService) { - // We always obtain a fresh breaker to reflect changes to the breaker configuration. - return circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); - } -} diff --git a/core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java b/core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java deleted file mode 100644 index a7e61143893..00000000000 --- a/core/src/main/java/org/elasticsearch/http/HttpServerAdapter.java +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */
-
-package org.elasticsearch.http;
-
-import org.elasticsearch.common.util.concurrent.ThreadContext;
-import org.elasticsearch.rest.RestChannel;
-import org.elasticsearch.rest.RestRequest;
-
-public interface HttpServerAdapter {
-
-    void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext context);
-
-}
diff --git a/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java b/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java
index 4dc4a888d8a..89c04198e7f 100644
--- a/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java
+++ b/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java
@@ -21,6 +21,9 @@ package org.elasticsearch.http;

 import org.elasticsearch.common.component.LifecycleComponent;
 import org.elasticsearch.common.transport.BoundTransportAddress;
+import org.elasticsearch.common.util.concurrent.ThreadContext;
+import org.elasticsearch.rest.RestChannel;
+import org.elasticsearch.rest.RestRequest;

 public interface HttpServerTransport extends LifecycleComponent {

@@ -33,6 +36,15 @@ public interface HttpServerTransport extends LifecycleComponent {

     HttpStats stats();

-    void httpServerAdapter(HttpServerAdapter httpServerAdapter);
-
+    @FunctionalInterface
+    interface Dispatcher {
+        /**
+         * Dispatches the {@link RestRequest} to the relevant request handler or responds to the given rest channel directly if
+         * the request can't be handled by any request handler.
+         * @param request the request to dispatch
+         * @param channel the response channel of this request
+         * @param threadContext the node's thread context
+         */
+        void dispatch(RestRequest request, RestChannel channel, ThreadContext threadContext);
+    }
 }
diff --git a/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java b/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java
index 3b77466a916..944296d6813 100644
--- a/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java
+++ b/core/src/main/java/org/elasticsearch/index/translog/TranslogToolCli.java
@@ -21,10 +21,6 @@ package org.elasticsearch.index.translog;

 import org.elasticsearch.cli.MultiCommand;
 import org.elasticsearch.cli.Terminal;
-import org.elasticsearch.common.logging.LogConfigurator;
-import org.elasticsearch.common.settings.Settings;
-import org.elasticsearch.env.Environment;
-import org.elasticsearch.node.internal.InternalSettingsPreparer;

 /**
  * Class encapsulating and dispatching commands from the {@code elasticsearch-translog} command line tool
diff --git a/core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java b/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java
similarity index 96%
rename from core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java
rename to core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java
index 840378ffd08..1b3ffe4327b 100644
--- a/core/src/main/java/org/elasticsearch/node/internal/InternalSettingsPreparer.java
+++ b/core/src/main/java/org/elasticsearch/node/InternalSettingsPreparer.java
@@ -17,7 +17,7 @@
 * under the License.
*/ -package org.elasticsearch.node.internal; +package org.elasticsearch.node; import java.io.IOException; import java.nio.file.Files; @@ -106,7 +106,8 @@ public class InternalSettingsPreparer { } } if (foundSuffixes.size() > 1) { - throw new SettingsException("multiple settings files found with suffixes: " + Strings.collectionToDelimitedString(foundSuffixes, ",")); + throw new SettingsException("multiple settings files found with suffixes: " + + Strings.collectionToDelimitedString(foundSuffixes, ",")); } // re-initialize settings now that the config file has been loaded @@ -195,7 +196,9 @@ public class InternalSettingsPreparer { private static String promptForValue(String key, Terminal terminal, boolean secret) { if (terminal == null) { - throw new UnsupportedOperationException("found property [" + key + "] with value [" + (secret ? SECRET_PROMPT_VALUE : TEXT_PROMPT_VALUE) +"]. prompting for property values is only supported when running elasticsearch in the foreground"); + throw new UnsupportedOperationException("found property [" + key + "] with value [" + + (secret ? SECRET_PROMPT_VALUE : TEXT_PROMPT_VALUE) + + "]. prompting for property values is only supported when running elasticsearch in the foreground"); } if (secret) { diff --git a/core/src/main/java/org/elasticsearch/node/Node.java b/core/src/main/java/org/elasticsearch/node/Node.java index 3fcda6c0a3b..97ab20c7767 100644 --- a/core/src/main/java/org/elasticsearch/node/Node.java +++ b/core/src/main/java/org/elasticsearch/node/Node.java @@ -84,7 +84,6 @@ import org.elasticsearch.gateway.GatewayAllocator; import org.elasticsearch.gateway.GatewayModule; import org.elasticsearch.gateway.GatewayService; import org.elasticsearch.gateway.MetaStateService; -import org.elasticsearch.http.HttpServer; import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.index.analysis.AnalysisRegistry; import org.elasticsearch.indices.IndicesModule; @@ -101,8 +100,6 @@ import org.elasticsearch.indices.store.IndicesStore; import org.elasticsearch.ingest.IngestService; import org.elasticsearch.monitor.MonitorService; import org.elasticsearch.monitor.jvm.JvmInfo; -import org.elasticsearch.node.internal.InternalSettingsPreparer; -import org.elasticsearch.node.service.NodeService; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.AnalysisPlugin; import org.elasticsearch.plugins.ClusterPlugin; @@ -117,6 +114,7 @@ import org.elasticsearch.plugins.RepositoryPlugin; import org.elasticsearch.plugins.ScriptPlugin; import org.elasticsearch.plugins.SearchPlugin; import org.elasticsearch.repositories.RepositoriesModule; +import org.elasticsearch.rest.RestController; import org.elasticsearch.script.ScriptModule; import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.SearchModule; @@ -342,14 +340,18 @@ public class Node implements Closeable { modules.add(clusterModule); IndicesModule indicesModule = new IndicesModule(pluginsService.filterPlugins(MapperPlugin.class)); modules.add(indicesModule); + SearchModule searchModule = new SearchModule(settings, false, pluginsService.filterPlugins(SearchPlugin.class)); - ActionModule actionModule = new ActionModule(false, settings, clusterModule.getIndexNameExpressionResolver(), - settingsModule.getClusterSettings(), threadPool, pluginsService.filterPlugins(ActionPlugin.class)); - modules.add(actionModule); - modules.add(new GatewayModule()); CircuitBreakerService circuitBreakerService = createCircuitBreakerService(settingsModule.getSettings(), 
settingsModule.getClusterSettings()); resourcesToClose.add(circuitBreakerService); + ActionModule actionModule = new ActionModule(false, settings, clusterModule.getIndexNameExpressionResolver(), + settingsModule.getClusterSettings(), threadPool, pluginsService.filterPlugins(ActionPlugin.class), client, + circuitBreakerService); + modules.add(actionModule); + modules.add(new GatewayModule()); + + BigArrays bigArrays = createBigArrays(settings, circuitBreakerService); resourcesToClose.add(bigArrays); modules.add(settingsModule); @@ -388,30 +390,35 @@ public class Node implements Closeable { pluginsService.filterPlugins(Plugin.class).stream() .map(Plugin::getCustomMetaDataUpgrader) .collect(Collectors.toList()); + final RestController restController = actionModule.getRestController(); final NetworkModule networkModule = new NetworkModule(settings, false, pluginsService.filterPlugins(NetworkPlugin.class), - threadPool, bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService); + threadPool, bigArrays, circuitBreakerService, namedWriteableRegistry, xContentRegistry, networkService, + restController::dispatchRequest); final MetaDataUpgrader metaDataUpgrader = new MetaDataUpgrader(customMetaDataUpgraders); final Transport transport = networkModule.getTransportSupplier().get(); final TransportService transportService = newTransportService(settings, transport, threadPool, networkModule.getTransportInterceptor(), localNodeFactory, settingsModule.getClusterSettings()); final Consumer httpBind; + final HttpServerTransport httpServerTransport; if (networkModule.isHttpEnabled()) { - HttpServerTransport httpServerTransport = networkModule.getHttpServerTransportSupplier().get(); - HttpServer httpServer = new HttpServer(settings, httpServerTransport, actionModule.getRestController(), client, - circuitBreakerService); + httpServerTransport = networkModule.getHttpServerTransportSupplier().get(); httpBind = b -> { - b.bind(HttpServer.class).toInstance(httpServer); b.bind(HttpServerTransport.class).toInstance(httpServerTransport); }; } else { httpBind = b -> { - b.bind(HttpServer.class).toProvider(Providers.of(null)); + b.bind(HttpServerTransport.class).toProvider(Providers.of(null)); }; + httpServerTransport = null; } - final DiscoveryModule discoveryModule = new DiscoveryModule(this.settings, threadPool, transportService, namedWriteableRegistry, networkService, clusterService, pluginsService.filterPlugins(DiscoveryPlugin.class)); + NodeService nodeService = new NodeService(settings, threadPool, monitorService, discoveryModule.getDiscovery(), + transportService, indicesService, pluginsService, circuitBreakerService, scriptModule.getScriptService(), + httpServerTransport, ingestService, clusterService, settingsModule.getSettingsFilter()); + modules.add(b -> { + b.bind(NodeService.class).toInstance(nodeService); b.bind(NamedXContentRegistry.class).toInstance(xContentRegistry); b.bind(PluginsService.class).toInstance(pluginsService); b.bind(Client.class).toInstance(client); @@ -628,7 +635,7 @@ public class Node implements Closeable { } if (NetworkModule.HTTP_ENABLED.get(settings)) { - injector.getInstance(HttpServer.class).start(); + injector.getInstance(HttpServerTransport.class).start(); } // start nodes now, after the http server, because it may take some time @@ -658,7 +665,7 @@ public class Node implements Closeable { injector.getInstance(TribeService.class).stop(); injector.getInstance(ResourceWatcherService.class).stop(); if (NetworkModule.HTTP_ENABLED.get(settings)) { - 
injector.getInstance(HttpServer.class).stop(); + injector.getInstance(HttpServerTransport.class).stop(); } injector.getInstance(SnapshotsService.class).stop(); @@ -708,7 +715,7 @@ public class Node implements Closeable { toClose.add(injector.getInstance(NodeService.class)); toClose.add(() -> stopWatch.stop().start("http")); if (NetworkModule.HTTP_ENABLED.get(settings)) { - toClose.add(injector.getInstance(HttpServer.class)); + toClose.add(injector.getInstance(HttpServerTransport.class)); } toClose.add(() -> stopWatch.stop().start("snapshot_service")); toClose.add(injector.getInstance(SnapshotsService.class)); diff --git a/core/src/main/java/org/elasticsearch/node/NodeModule.java b/core/src/main/java/org/elasticsearch/node/NodeModule.java index 6a8f8b90681..929e889503e 100644 --- a/core/src/main/java/org/elasticsearch/node/NodeModule.java +++ b/core/src/main/java/org/elasticsearch/node/NodeModule.java @@ -22,7 +22,6 @@ package org.elasticsearch.node; import org.elasticsearch.cluster.routing.allocation.DiskThresholdMonitor; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.monitor.MonitorService; -import org.elasticsearch.node.service.NodeService; public class NodeModule extends AbstractModule { @@ -38,7 +37,6 @@ public class NodeModule extends AbstractModule { protected void configure() { bind(Node.class).toInstance(node); bind(MonitorService.class).toInstance(monitorService); - bind(NodeService.class).asEagerSingleton(); bind(DiskThresholdMonitor.class).asEagerSingleton(); } } diff --git a/core/src/main/java/org/elasticsearch/node/service/NodeService.java b/core/src/main/java/org/elasticsearch/node/NodeService.java similarity index 88% rename from core/src/main/java/org/elasticsearch/node/service/NodeService.java rename to core/src/main/java/org/elasticsearch/node/NodeService.java index 7d9a148b271..cb245487152 100644 --- a/core/src/main/java/org/elasticsearch/node/service/NodeService.java +++ b/core/src/main/java/org/elasticsearch/node/NodeService.java @@ -17,7 +17,7 @@ * under the License. 
*/ -package org.elasticsearch.node.service; +package org.elasticsearch.node; import org.elasticsearch.Build; import org.elasticsearch.Version; @@ -27,11 +27,10 @@ import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.discovery.Discovery; -import org.elasticsearch.http.HttpServer; +import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.ingest.IngestService; @@ -55,17 +54,16 @@ public class NodeService extends AbstractComponent implements Closeable { private final IngestService ingestService; private final SettingsFilter settingsFilter; private ScriptService scriptService; + private final HttpServerTransport httpServerTransport; - @Nullable - private final HttpServer httpServer; private final Discovery discovery; - @Inject - public NodeService(Settings settings, ThreadPool threadPool, MonitorService monitorService, Discovery discovery, + NodeService(Settings settings, ThreadPool threadPool, MonitorService monitorService, Discovery discovery, TransportService transportService, IndicesService indicesService, PluginsService pluginService, - CircuitBreakerService circuitBreakerService, ScriptService scriptService, @Nullable HttpServer httpServer, - IngestService ingestService, ClusterService clusterService, SettingsFilter settingsFilter) { + CircuitBreakerService circuitBreakerService, ScriptService scriptService, + @Nullable HttpServerTransport httpServerTransport, IngestService ingestService, ClusterService clusterService, + SettingsFilter settingsFilter) { super(settings); this.threadPool = threadPool; this.monitorService = monitorService; @@ -74,7 +72,7 @@ public class NodeService extends AbstractComponent implements Closeable { this.discovery = discovery; this.pluginService = pluginService; this.circuitBreakerService = circuitBreakerService; - this.httpServer = httpServer; + this.httpServerTransport = httpServerTransport; this.ingestService = ingestService; this.settingsFilter = settingsFilter; this.scriptService = scriptService; @@ -91,7 +89,7 @@ public class NodeService extends AbstractComponent implements Closeable { jvm ? monitorService.jvmService().info() : null, threadPool ? this.threadPool.info() : null, transport ? transportService.info() : null, - http ? (httpServer == null ? null : httpServer.info()) : null, + http ? (httpServerTransport == null ? null : httpServerTransport.info()) : null, plugin ? (pluginService == null ? null : pluginService.info()) : null, ingest ? (ingestService == null ? null : ingestService.info()) : null, indices ? indicesService.getTotalIndexingBufferBytes() : null @@ -111,7 +109,7 @@ public class NodeService extends AbstractComponent implements Closeable { threadPool ? this.threadPool.stats() : null, fs ? monitorService.fsService().stats() : null, transport ? transportService.stats() : null, - http ? (httpServer == null ? null : httpServer.stats()) : null, + http ? (httpServerTransport == null ? null : httpServerTransport.stats()) : null, circuitBreaker ? circuitBreakerService.stats() : null, script ? scriptService.stats() : null, discoveryStats ? 
discovery.stats() : null, diff --git a/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java b/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java index dea6fcba312..b502b2a4016 100644 --- a/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java +++ b/core/src/main/java/org/elasticsearch/plugins/InstallPluginCommand.java @@ -33,9 +33,7 @@ import org.elasticsearch.cli.UserException; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.hash.MessageDigests; import org.elasticsearch.common.io.FileSystemUtils; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; import java.io.BufferedReader; import java.io.IOException; @@ -63,7 +61,6 @@ import java.util.Collections; import java.util.HashSet; import java.util.List; import java.util.Locale; -import java.util.Map; import java.util.Objects; import java.util.Set; import java.util.TreeSet; diff --git a/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java b/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java index 3f21c44a8f4..a674e7c6e24 100644 --- a/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java +++ b/core/src/main/java/org/elasticsearch/plugins/ListPluginsCommand.java @@ -22,9 +22,7 @@ package org.elasticsearch.plugins; import joptsimple.OptionSet; import org.elasticsearch.cli.EnvironmentAwareCommand; import org.elasticsearch.cli.Terminal; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.env.Environment; -import org.elasticsearch.node.internal.InternalSettingsPreparer; import java.io.IOException; import java.nio.file.DirectoryStream; @@ -33,7 +31,6 @@ import java.nio.file.Path; import java.util.ArrayList; import java.util.Collections; import java.util.List; -import java.util.Map; /** * A command for the plugin cli to list plugins installed in elasticsearch. diff --git a/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java b/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java index ceb7e077e11..33fab61c24a 100644 --- a/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java +++ b/core/src/main/java/org/elasticsearch/plugins/NetworkPlugin.java @@ -69,8 +69,11 @@ public interface NetworkPlugin { * See {@link org.elasticsearch.common.network.NetworkModule#HTTP_TYPE_SETTING} to configure a specific implementation. 
 */
    default Map<String, Supplier<HttpServerTransport>> getHttpTransports(Settings settings, ThreadPool threadPool, BigArrays bigArrays,
-                                                                        CircuitBreakerService circuitBreakerService, NamedWriteableRegistry namedWriteableRegistry,
-                                                                        NamedXContentRegistry xContentRegistry, NetworkService networkService) {
+                                                                        CircuitBreakerService circuitBreakerService,
+                                                                        NamedWriteableRegistry namedWriteableRegistry,
+                                                                        NamedXContentRegistry xContentRegistry,
+                                                                        NetworkService networkService,
+                                                                        HttpServerTransport.Dispatcher dispatcher) {
        return Collections.emptyMap();
    }
}
diff --git a/core/src/main/java/org/elasticsearch/rest/RestController.java b/core/src/main/java/org/elasticsearch/rest/RestController.java
index c701e8ff0ee..5ac82b7e454 100644
--- a/core/src/main/java/org/elasticsearch/rest/RestController.java
+++ b/core/src/main/java/org/elasticsearch/rest/RestController.java
@@ -19,23 +19,35 @@

 package org.elasticsearch.rest;

+import java.io.ByteArrayOutputStream;
 import java.io.IOException;
+import java.io.InputStream;
 import java.util.Objects;
 import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.function.UnaryOperator;

 import org.apache.logging.log4j.message.ParameterizedMessage;
 import org.apache.logging.log4j.util.Supplier;
 import org.elasticsearch.client.node.NodeClient;
+import org.elasticsearch.common.Nullable;
+import org.elasticsearch.common.breaker.CircuitBreaker;
 import org.elasticsearch.common.bytes.BytesArray;
+import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.component.AbstractComponent;
+import org.elasticsearch.common.io.Streams;
+import org.elasticsearch.common.io.stream.BytesStreamOutput;
 import org.elasticsearch.common.logging.DeprecationLogger;
 import org.elasticsearch.common.path.PathTrie;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
 import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.http.HttpServerTransport;
+import org.elasticsearch.indices.breaker.CircuitBreakerService;

 import static org.elasticsearch.rest.RestStatus.BAD_REQUEST;
+import static org.elasticsearch.rest.RestStatus.FORBIDDEN;
+import static org.elasticsearch.rest.RestStatus.INTERNAL_SERVER_ERROR;
 import static org.elasticsearch.rest.RestStatus.OK;

 public class RestController extends AbstractComponent {
@@ -48,18 +60,27 @@ public class RestController extends AbstractComponent {

     private final UnaryOperator<RestHandler> handlerWrapper;

+    private final NodeClient client;
+
+    private final CircuitBreakerService circuitBreakerService;
+
     /** Rest headers that are copied to internal requests made during a rest request. */
     private final Set<String> headersToCopy;

-    public RestController(Settings settings, Set<String> headersToCopy, UnaryOperator<RestHandler> handlerWrapper) {
+    public RestController(Settings settings, Set<String> headersToCopy, UnaryOperator<RestHandler> handlerWrapper,
+                          NodeClient client, CircuitBreakerService circuitBreakerService) {
         super(settings);
         this.headersToCopy = headersToCopy;
         if (handlerWrapper == null) {
             handlerWrapper = h -> h; // passthrough if no wrapper set
         }
         this.handlerWrapper = handlerWrapper;
+        this.client = client;
+        this.circuitBreakerService = circuitBreakerService;
     }

+
+
     /**
      * Registers a REST handler to be executed when the provided {@code method} and {@code path} match the request.
      *
@@ -137,7 +158,34 @@
         return (handler != null) ?
handler.canTripCircuitBreaker() : true; } - public void dispatchRequest(final RestRequest request, final RestChannel channel, final NodeClient client, ThreadContext threadContext) throws Exception { + public void dispatchRequest(RestRequest request, RestChannel channel, ThreadContext threadContext) { + if (request.rawPath().equals("/favicon.ico")) { + handleFavicon(request, channel); + return; + } + RestChannel responseChannel = channel; + try { + int contentLength = request.content().length(); + if (canTripCircuitBreaker(request)) { + inFlightRequestsBreaker(circuitBreakerService).addEstimateBytesAndMaybeBreak(contentLength, ""); + } else { + inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(contentLength); + } + // iff we could reserve bytes for the request we need to send the response also over this channel + responseChannel = new ResourceHandlingHttpChannel(channel, circuitBreakerService, contentLength); + dispatchRequest(request, responseChannel, client, threadContext); + } catch (Exception e) { + try { + responseChannel.sendResponse(new BytesRestResponse(channel, e)); + } catch (Exception inner) { + inner.addSuppressed(e); + logger.error((Supplier) () -> + new ParameterizedMessage("failed to send failure response for uri [{}]", request.uri()), inner); + } + } + } + + void dispatchRequest(final RestRequest request, final RestChannel channel, final NodeClient client, ThreadContext threadContext) throws Exception { if (!checkRequestParameters(request, channel)) { return; } @@ -223,4 +271,84 @@ public class RestController extends AbstractComponent { // my_index/my_type/http%3A%2F%2Fwww.google.com return request.rawPath(); } + + void handleFavicon(RestRequest request, RestChannel channel) { + if (request.method() == RestRequest.Method.GET) { + try { + try (InputStream stream = getClass().getResourceAsStream("/config/favicon.ico")) { + ByteArrayOutputStream out = new ByteArrayOutputStream(); + Streams.copy(stream, out); + BytesRestResponse restResponse = new BytesRestResponse(RestStatus.OK, "image/x-icon", out.toByteArray()); + channel.sendResponse(restResponse); + } + } catch (IOException e) { + channel.sendResponse(new BytesRestResponse(INTERNAL_SERVER_ERROR, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); + } + } else { + channel.sendResponse(new BytesRestResponse(FORBIDDEN, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY)); + } + } + + private static final class ResourceHandlingHttpChannel implements RestChannel { + private final RestChannel delegate; + private final CircuitBreakerService circuitBreakerService; + private final int contentLength; + private final AtomicBoolean closed = new AtomicBoolean(); + + public ResourceHandlingHttpChannel(RestChannel delegate, CircuitBreakerService circuitBreakerService, int contentLength) { + this.delegate = delegate; + this.circuitBreakerService = circuitBreakerService; + this.contentLength = contentLength; + } + + @Override + public XContentBuilder newBuilder() throws IOException { + return delegate.newBuilder(); + } + + @Override + public XContentBuilder newErrorBuilder() throws IOException { + return delegate.newErrorBuilder(); + } + + @Override + public XContentBuilder newBuilder(@Nullable BytesReference autoDetectSource, boolean useFiltering) throws IOException { + return delegate.newBuilder(autoDetectSource, useFiltering); + } + + @Override + public BytesStreamOutput bytesOutput() { + return delegate.bytesOutput(); + } + + @Override + public RestRequest request() { + return delegate.request(); + } + + 
@Override + public boolean detailedErrorsEnabled() { + return delegate.detailedErrorsEnabled(); + } + + @Override + public void sendResponse(RestResponse response) { + close(); + delegate.sendResponse(response); + } + + private void close() { + // attempt to close once atomically + if (closed.compareAndSet(false, true) == false) { + throw new IllegalStateException("Channel is already closed"); + } + inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(-contentLength); + } + + } + + private static CircuitBreaker inFlightRequestsBreaker(CircuitBreakerService circuitBreakerService) { + // We always obtain a fresh breaker to reflect changes to the breaker configuration. + return circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); + } } diff --git a/core/src/test/java/org/elasticsearch/common/network/NetworkModuleTests.java b/core/src/test/java/org/elasticsearch/common/network/NetworkModuleTests.java index 11799c99cb1..3476c99d484 100644 --- a/core/src/test/java/org/elasticsearch/common/network/NetworkModuleTests.java +++ b/core/src/test/java/org/elasticsearch/common/network/NetworkModuleTests.java @@ -30,7 +30,6 @@ import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.http.HttpInfo; -import org.elasticsearch.http.HttpServerAdapter; import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.http.HttpStats; import org.elasticsearch.indices.breaker.CircuitBreakerService; @@ -89,8 +88,6 @@ public class NetworkModuleTests extends ModuleTestCase { public HttpStats stats() { return null; } - @Override - public void httpServerAdapter(HttpServerAdapter httpServerAdapter) {} } @@ -155,7 +152,8 @@ public class NetworkModuleTests extends ModuleTestCase { CircuitBreakerService circuitBreakerService, NamedWriteableRegistry namedWriteableRegistry, NamedXContentRegistry xContentRegistry, - NetworkService networkService) { + NetworkService networkService, + HttpServerTransport.Dispatcher requestDispatcher) { return Collections.singletonMap("custom", custom); } }); @@ -195,7 +193,8 @@ public class NetworkModuleTests extends ModuleTestCase { CircuitBreakerService circuitBreakerService, NamedWriteableRegistry namedWriteableRegistry, NamedXContentRegistry xContentRegistry, - NetworkService networkService) { + NetworkService networkService, + HttpServerTransport.Dispatcher requestDispatcher) { Map> supplierMap = new HashMap<>(); supplierMap.put("custom", custom); supplierMap.put("default_custom", def); @@ -228,7 +227,8 @@ public class NetworkModuleTests extends ModuleTestCase { CircuitBreakerService circuitBreakerService, NamedWriteableRegistry namedWriteableRegistry, NamedXContentRegistry xContentRegistry, - NetworkService networkService) { + NetworkService networkService, + HttpServerTransport.Dispatcher requestDispatcher) { Map> supplierMap = new HashMap<>(); supplierMap.put("custom", custom); supplierMap.put("default_custom", def); @@ -276,6 +276,7 @@ public class NetworkModuleTests extends ModuleTestCase { } private NetworkModule newNetworkModule(Settings settings, boolean transportClient, NetworkPlugin... 
plugins) { - return new NetworkModule(settings, transportClient, Arrays.asList(plugins), threadPool, null, null, null, xContentRegistry(), null); + return new NetworkModule(settings, transportClient, Arrays.asList(plugins), threadPool, null, null, null, xContentRegistry(), null, + (a, b, c) -> {}); } } diff --git a/core/src/test/java/org/elasticsearch/http/HttpServerTests.java b/core/src/test/java/org/elasticsearch/http/HttpServerTests.java deleted file mode 100644 index db9eebfe5fc..00000000000 --- a/core/src/test/java/org/elasticsearch/http/HttpServerTests.java +++ /dev/null @@ -1,231 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ -package org.elasticsearch.http; - -import java.util.Collections; -import java.util.Map; - -import org.elasticsearch.common.breaker.CircuitBreaker; -import org.elasticsearch.common.bytes.BytesArray; -import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.common.component.AbstractLifecycleComponent; -import org.elasticsearch.common.settings.ClusterSettings; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.transport.BoundTransportAddress; -import org.elasticsearch.common.transport.TransportAddress; -import org.elasticsearch.common.unit.ByteSizeValue; -import org.elasticsearch.common.util.concurrent.ThreadContext; -import org.elasticsearch.common.xcontent.NamedXContentRegistry; -import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService; -import org.elasticsearch.rest.AbstractRestChannel; -import org.elasticsearch.rest.BytesRestResponse; -import org.elasticsearch.rest.RestController; -import org.elasticsearch.rest.RestRequest; -import org.elasticsearch.rest.RestResponse; -import org.elasticsearch.rest.RestStatus; -import org.elasticsearch.test.ESTestCase; -import org.junit.Before; - -public class HttpServerTests extends ESTestCase { - private static final ByteSizeValue BREAKER_LIMIT = new ByteSizeValue(20); - private HttpServer httpServer; - private CircuitBreaker inFlightRequestsBreaker; - - @Before - public void setup() { - Settings settings = Settings.EMPTY; - CircuitBreakerService circuitBreakerService = new HierarchyCircuitBreakerService( - Settings.builder() - .put(HierarchyCircuitBreakerService.IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_LIMIT_SETTING.getKey(), BREAKER_LIMIT) - .build(), - new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); - // we can do this here only because we know that we don't adjust breaker settings dynamically in the test - inFlightRequestsBreaker = circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); - - HttpServerTransport httpServerTransport = new TestHttpServerTransport(); - RestController restController = 
new RestController(settings, Collections.emptySet(), null); - restController.registerHandler(RestRequest.Method.GET, "/", - (request, channel, client) -> channel.sendResponse( - new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY))); - restController.registerHandler(RestRequest.Method.GET, "/error", (request, channel, client) -> { - throw new IllegalArgumentException("test error"); - }); - - httpServer = new HttpServer(settings, httpServerTransport, restController, null, circuitBreakerService); - httpServer.start(); - } - - public void testDispatchRequestAddsAndFreesBytesOnSuccess() { - int contentLength = BREAKER_LIMIT.bytesAsInt(); - String content = randomAsciiOfLength(contentLength); - TestRestRequest request = new TestRestRequest("/", content); - AssertingChannel channel = new AssertingChannel(request, true, RestStatus.OK); - - httpServer.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); - - assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); - assertEquals(0, inFlightRequestsBreaker.getUsed()); - } - - public void testDispatchRequestAddsAndFreesBytesOnError() { - int contentLength = BREAKER_LIMIT.bytesAsInt(); - String content = randomAsciiOfLength(contentLength); - TestRestRequest request = new TestRestRequest("/error", content); - AssertingChannel channel = new AssertingChannel(request, true, RestStatus.BAD_REQUEST); - - httpServer.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); - - assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); - assertEquals(0, inFlightRequestsBreaker.getUsed()); - } - - public void testDispatchRequestAddsAndFreesBytesOnlyOnceOnError() { - int contentLength = BREAKER_LIMIT.bytesAsInt(); - String content = randomAsciiOfLength(contentLength); - // we will produce an error in the rest handler and one more when sending the error response - TestRestRequest request = new TestRestRequest("/error", content); - ExceptionThrowingChannel channel = new ExceptionThrowingChannel(request, true); - - httpServer.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); - - assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); - assertEquals(0, inFlightRequestsBreaker.getUsed()); - } - - public void testDispatchRequestLimitsBytes() { - int contentLength = BREAKER_LIMIT.bytesAsInt() + 1; - String content = randomAsciiOfLength(contentLength); - TestRestRequest request = new TestRestRequest("/", content); - AssertingChannel channel = new AssertingChannel(request, true, RestStatus.SERVICE_UNAVAILABLE); - - httpServer.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); - - assertEquals(1, inFlightRequestsBreaker.getTrippedCount()); - assertEquals(0, inFlightRequestsBreaker.getUsed()); - } - - private static final class TestHttpServerTransport extends AbstractLifecycleComponent implements - HttpServerTransport { - - public TestHttpServerTransport() { - super(Settings.EMPTY); - } - - @Override - protected void doStart() { - } - - @Override - protected void doStop() { - } - - @Override - protected void doClose() { - } - - @Override - public BoundTransportAddress boundAddress() { - TransportAddress transportAddress = buildNewFakeTransportAddress(); - return new BoundTransportAddress(new TransportAddress[] {transportAddress} ,transportAddress); - } - - @Override - public HttpInfo info() { - return null; - } - - @Override - public HttpStats stats() { - return null; - } - - @Override - public void httpServerAdapter(HttpServerAdapter httpServerAdapter) { - - 
} - } - - private static final class AssertingChannel extends AbstractRestChannel { - private final RestStatus expectedStatus; - - protected AssertingChannel(RestRequest request, boolean detailedErrorsEnabled, RestStatus expectedStatus) { - super(request, detailedErrorsEnabled); - this.expectedStatus = expectedStatus; - } - - @Override - public void sendResponse(RestResponse response) { - assertEquals(expectedStatus, response.status()); - } - } - - private static final class ExceptionThrowingChannel extends AbstractRestChannel { - - protected ExceptionThrowingChannel(RestRequest request, boolean detailedErrorsEnabled) { - super(request, detailedErrorsEnabled); - } - - @Override - public void sendResponse(RestResponse response) { - throw new IllegalStateException("always throwing an exception for testing"); - } - } - - private static final class TestRestRequest extends RestRequest { - - private final BytesReference content; - - private TestRestRequest(String path, String content) { - super(NamedXContentRegistry.EMPTY, Collections.emptyMap(), path); - this.content = new BytesArray(content); - } - - @Override - public Method method() { - return Method.GET; - } - - @Override - public String uri() { - return null; - } - - @Override - public boolean hasContent() { - return true; - } - - @Override - public BytesReference content() { - return content; - } - - @Override - public String header(String name) { - return null; - } - - @Override - public Iterable> headers() { - return null; - } - - } -} diff --git a/core/src/test/java/org/elasticsearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java b/core/src/test/java/org/elasticsearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java index 02fe0d03c77..83072636b76 100644 --- a/core/src/test/java/org/elasticsearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java +++ b/core/src/test/java/org/elasticsearch/ingest/IngestProcessorNotInstalledOnAllNodesIT.java @@ -22,7 +22,7 @@ package org.elasticsearch.ingest; import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ingest.WritePipelineResponse; import org.elasticsearch.common.bytes.BytesReference; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.test.ESIntegTestCase; diff --git a/core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java b/core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java index 2dc95f8e9f1..94b3f3737cb 100644 --- a/core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java +++ b/core/src/test/java/org/elasticsearch/node/internal/InternalSettingsPreparerTests.java @@ -24,6 +24,7 @@ import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.settings.SettingsException; import org.elasticsearch.env.Environment; +import org.elasticsearch.node.InternalSettingsPreparer; import org.elasticsearch.test.ESTestCase; import org.junit.After; import org.junit.Before; diff --git a/core/src/test/java/org/elasticsearch/rest/RestControllerTests.java b/core/src/test/java/org/elasticsearch/rest/RestControllerTests.java index cce5c463759..e7064554908 100644 --- a/core/src/test/java/org/elasticsearch/rest/RestControllerTests.java +++ b/core/src/test/java/org/elasticsearch/rest/RestControllerTests.java @@ -29,11 +29,26 @@ import java.util.concurrent.atomic.AtomicBoolean; import 
java.util.function.UnaryOperator; import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.breaker.CircuitBreaker; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.component.AbstractLifecycleComponent; import org.elasticsearch.common.logging.DeprecationLogger; +import org.elasticsearch.common.settings.ClusterSettings; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.BoundTransportAddress; +import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.util.concurrent.ThreadContext; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.http.HttpInfo; +import org.elasticsearch.http.HttpServerTransport; +import org.elasticsearch.http.HttpStats; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.test.rest.FakeRestRequest; +import org.junit.Before; import static org.mockito.Matchers.any; import static org.mockito.Matchers.eq; @@ -43,10 +58,39 @@ import static org.mockito.Mockito.verify; public class RestControllerTests extends ESTestCase { + private static final ByteSizeValue BREAKER_LIMIT = new ByteSizeValue(20); + private CircuitBreaker inFlightRequestsBreaker; + private RestController restController; + private HierarchyCircuitBreakerService circuitBreakerService; + + @Before + public void setup() { + Settings settings = Settings.EMPTY; + circuitBreakerService = new HierarchyCircuitBreakerService( + Settings.builder() + .put(HierarchyCircuitBreakerService.IN_FLIGHT_REQUESTS_CIRCUIT_BREAKER_LIMIT_SETTING.getKey(), BREAKER_LIMIT) + .build(), + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); + // we can do this here only because we know that we don't adjust breaker settings dynamically in the test + inFlightRequestsBreaker = circuitBreakerService.getBreaker(CircuitBreaker.IN_FLIGHT_REQUESTS); + + HttpServerTransport httpServerTransport = new TestHttpServerTransport(); + restController = new RestController(settings, Collections.emptySet(), null, null, circuitBreakerService); + restController.registerHandler(RestRequest.Method.GET, "/", + (request, channel, client) -> channel.sendResponse( + new BytesRestResponse(RestStatus.OK, BytesRestResponse.TEXT_CONTENT_TYPE, BytesArray.EMPTY))); + restController.registerHandler(RestRequest.Method.GET, "/error", (request, channel, client) -> { + throw new IllegalArgumentException("test error"); + }); + + httpServerTransport.start(); + } + + public void testApplyRelevantHeaders() throws Exception { final ThreadContext threadContext = new ThreadContext(Settings.EMPTY); Set headers = new HashSet<>(Arrays.asList("header.1", "header.2")); - final RestController restController = new RestController(Settings.EMPTY, headers, null); + final RestController restController = new RestController(Settings.EMPTY, headers, null, null, circuitBreakerService); restController.registerHandler(RestRequest.Method.GET, "/", (RestRequest request, RestChannel channel, NodeClient client) -> { assertEquals("true", threadContext.getHeader("header.1")); @@ -66,7 +110,7 @@ public class RestControllerTests extends ESTestCase { } public void testCanTripCircuitBreaker() throws Exception { - RestController controller = new 
RestController(Settings.EMPTY, Collections.emptySet(), null); + RestController controller = new RestController(Settings.EMPTY, Collections.emptySet(), null, null, circuitBreakerService); // trip circuit breaker by default controller.registerHandler(RestRequest.Method.GET, "/trip", new FakeRestHandler(true)); controller.registerHandler(RestRequest.Method.GET, "/do-not-trip", new FakeRestHandler(false)); @@ -126,7 +170,8 @@ public class RestControllerTests extends ESTestCase { assertSame(handler, h); return (RestRequest request, RestChannel channel, NodeClient client) -> wrapperCalled.set(true); }; - final RestController restController = new RestController(Settings.EMPTY, Collections.emptySet(), wrapper); + final RestController restController = new RestController(Settings.EMPTY, Collections.emptySet(), wrapper, null, + circuitBreakerService); restController.registerHandler(RestRequest.Method.GET, "/", handler); final ThreadContext threadContext = new ThreadContext(Settings.EMPTY); restController.dispatchRequest(new FakeRestRequest.Builder(xContentRegistry()).build(), null, null, threadContext); @@ -154,4 +199,156 @@ public class RestControllerTests extends ESTestCase { return canTripCircuitBreaker; } } + + public void testDispatchRequestAddsAndFreesBytesOnSuccess() { + int contentLength = BREAKER_LIMIT.bytesAsInt(); + String content = randomAsciiOfLength(contentLength); + TestRestRequest request = new TestRestRequest("/", content); + AssertingChannel channel = new AssertingChannel(request, true, RestStatus.OK); + + restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + + assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); + assertEquals(0, inFlightRequestsBreaker.getUsed()); + } + + public void testDispatchRequestAddsAndFreesBytesOnError() { + int contentLength = BREAKER_LIMIT.bytesAsInt(); + String content = randomAsciiOfLength(contentLength); + TestRestRequest request = new TestRestRequest("/error", content); + AssertingChannel channel = new AssertingChannel(request, true, RestStatus.BAD_REQUEST); + + restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + + assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); + assertEquals(0, inFlightRequestsBreaker.getUsed()); + } + + public void testDispatchRequestAddsAndFreesBytesOnlyOnceOnError() { + int contentLength = BREAKER_LIMIT.bytesAsInt(); + String content = randomAsciiOfLength(contentLength); + // we will produce an error in the rest handler and one more when sending the error response + TestRestRequest request = new TestRestRequest("/error", content); + ExceptionThrowingChannel channel = new ExceptionThrowingChannel(request, true); + + restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + + assertEquals(0, inFlightRequestsBreaker.getTrippedCount()); + assertEquals(0, inFlightRequestsBreaker.getUsed()); + } + + public void testDispatchRequestLimitsBytes() { + int contentLength = BREAKER_LIMIT.bytesAsInt() + 1; + String content = randomAsciiOfLength(contentLength); + TestRestRequest request = new TestRestRequest("/", content); + AssertingChannel channel = new AssertingChannel(request, true, RestStatus.SERVICE_UNAVAILABLE); + + restController.dispatchRequest(request, channel, new ThreadContext(Settings.EMPTY)); + + assertEquals(1, inFlightRequestsBreaker.getTrippedCount()); + assertEquals(0, inFlightRequestsBreaker.getUsed()); + } + + private static final class TestHttpServerTransport extends AbstractLifecycleComponent implements + 
HttpServerTransport { + + public TestHttpServerTransport() { + super(Settings.EMPTY); + } + + @Override + protected void doStart() { + } + + @Override + protected void doStop() { + } + + @Override + protected void doClose() { + } + + @Override + public BoundTransportAddress boundAddress() { + TransportAddress transportAddress = buildNewFakeTransportAddress(); + return new BoundTransportAddress(new TransportAddress[] {transportAddress} ,transportAddress); + } + + @Override + public HttpInfo info() { + return null; + } + + @Override + public HttpStats stats() { + return null; + } + } + + private static final class AssertingChannel extends AbstractRestChannel { + private final RestStatus expectedStatus; + + protected AssertingChannel(RestRequest request, boolean detailedErrorsEnabled, RestStatus expectedStatus) { + super(request, detailedErrorsEnabled); + this.expectedStatus = expectedStatus; + } + + @Override + public void sendResponse(RestResponse response) { + assertEquals(expectedStatus, response.status()); + } + } + + private static final class ExceptionThrowingChannel extends AbstractRestChannel { + + protected ExceptionThrowingChannel(RestRequest request, boolean detailedErrorsEnabled) { + super(request, detailedErrorsEnabled); + } + + @Override + public void sendResponse(RestResponse response) { + throw new IllegalStateException("always throwing an exception for testing"); + } + } + + private static final class TestRestRequest extends RestRequest { + + private final BytesReference content; + + private TestRestRequest(String path, String content) { + super(NamedXContentRegistry.EMPTY, Collections.emptyMap(), path); + this.content = new BytesArray(content); + } + + @Override + public Method method() { + return Method.GET; + } + + @Override + public String uri() { + return null; + } + + @Override + public boolean hasContent() { + return true; + } + + @Override + public BytesReference content() { + return content; + } + + @Override + public String header(String name) { + return null; + } + + @Override + public Iterable> headers() { + return null; + } + + } } diff --git a/core/src/test/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsActionTests.java b/core/src/test/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsActionTests.java index 9de530d417d..ba478331ca0 100644 --- a/core/src/test/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsActionTests.java +++ b/core/src/test/java/org/elasticsearch/rest/action/admin/cluster/RestNodesStatsActionTests.java @@ -21,6 +21,7 @@ package org.elasticsearch.rest.action.admin.cluster; import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestRequest; import org.elasticsearch.test.ESTestCase; @@ -43,7 +44,7 @@ public class RestNodesStatsActionTests extends ESTestCase { @Override public void setUp() throws Exception { super.setUp(); - action = new RestNodesStatsAction(Settings.EMPTY, new RestController(Settings.EMPTY, Collections.emptySet(), null)); + action = new RestNodesStatsAction(Settings.EMPTY, new RestController(Settings.EMPTY, Collections.emptySet(), null, null, null)); } public void testUnrecognizedMetric() throws IOException { diff --git a/core/src/test/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsActionTests.java 
b/core/src/test/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsActionTests.java index feac1672c11..0aa6e497836 100644 --- a/core/src/test/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsActionTests.java +++ b/core/src/test/java/org/elasticsearch/rest/action/admin/indices/RestIndicesStatsActionTests.java @@ -41,7 +41,7 @@ public class RestIndicesStatsActionTests extends ESTestCase { @Override public void setUp() throws Exception { super.setUp(); - action = new RestIndicesStatsAction(Settings.EMPTY, new RestController(Settings.EMPTY, Collections.emptySet(), null)); + action = new RestIndicesStatsAction(Settings.EMPTY, new RestController(Settings.EMPTY, Collections.emptySet(), null, null, null)); } public void testUnrecognizedMetric() throws IOException { diff --git a/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java b/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java index 2a22541bbc7..ae66664b456 100644 --- a/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java +++ b/core/src/test/java/org/elasticsearch/rest/action/cat/RestIndicesActionTests.java @@ -74,7 +74,7 @@ public class RestIndicesActionTests extends ESTestCase { public void testBuildTable() { final Settings settings = Settings.EMPTY; - final RestController restController = new RestController(settings, Collections.emptySet(), null); + final RestController restController = new RestController(settings, Collections.emptySet(), null, null, null); final RestIndicesAction action = new RestIndicesAction(settings, restController, new IndexNameExpressionResolver(settings)); // build a (semi-)random table diff --git a/core/src/test/java/org/elasticsearch/rest/action/cat/RestRecoveryActionTests.java b/core/src/test/java/org/elasticsearch/rest/action/cat/RestRecoveryActionTests.java index fa93e4a80d3..88623687bf1 100644 --- a/core/src/test/java/org/elasticsearch/rest/action/cat/RestRecoveryActionTests.java +++ b/core/src/test/java/org/elasticsearch/rest/action/cat/RestRecoveryActionTests.java @@ -50,7 +50,7 @@ public class RestRecoveryActionTests extends ESTestCase { public void testRestRecoveryAction() { final Settings settings = Settings.EMPTY; - final RestController restController = new RestController(settings, Collections.emptySet(), null); + final RestController restController = new RestController(settings, Collections.emptySet(), null, null, null); final RestRecoveryAction action = new RestRecoveryAction(settings, restController, restController); final int totalShards = randomIntBetween(1, 32); final int successfulShards = Math.max(0, totalShards - randomIntBetween(1, 2)); diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java index c6c2899e4b3..138ed0a67be 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java @@ -63,7 +63,6 @@ import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.http.BindHttpException; import org.elasticsearch.http.HttpInfo; -import org.elasticsearch.http.HttpServerAdapter; import org.elasticsearch.http.HttpServerTransport; import org.elasticsearch.http.HttpStats; import 
org.elasticsearch.http.netty4.cors.Netty4CorsConfig; @@ -210,6 +209,7 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem protected final ByteSizeValue maxCumulationBufferCapacity; protected final int maxCompositeBufferComponents; + private final Dispatcher dispatcher; protected volatile ServerBootstrap serverBootstrap; @@ -220,17 +220,17 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem // package private for testing Netty4OpenChannelsHandler serverOpenChannels; - protected volatile HttpServerAdapter httpServerAdapter; private final Netty4CorsConfig corsConfig; public Netty4HttpServerTransport(Settings settings, NetworkService networkService, BigArrays bigArrays, ThreadPool threadPool, - NamedXContentRegistry xContentRegistry) { + NamedXContentRegistry xContentRegistry, Dispatcher dispatcher) { super(settings); this.networkService = networkService; this.bigArrays = bigArrays; this.threadPool = threadPool; this.xContentRegistry = xContentRegistry; + this.dispatcher = dispatcher; ByteSizeValue maxContentLength = SETTING_HTTP_MAX_CONTENT_LENGTH.get(settings); this.maxChunkSize = SETTING_HTTP_MAX_CHUNK_SIZE.get(settings); @@ -286,11 +286,6 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem return this.settings; } - @Override - public void httpServerAdapter(HttpServerAdapter httpServerAdapter) { - this.httpServerAdapter = httpServerAdapter; - } - @Override protected void doStart() { boolean success = false; @@ -331,6 +326,9 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem serverBootstrap.childOption(ChannelOption.SO_REUSEADDR, reuseAddress); this.boundAddress = createBoundHttpAddress(); + if (logger.isInfoEnabled()) { + logger.info("{}", boundAddress); + } success = true; } finally { if (success == false) { @@ -511,7 +509,7 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem } protected void dispatchRequest(RestRequest request, RestChannel channel) { - httpServerAdapter.dispatchRequest(request, channel, threadPool.getThreadContext()); + dispatcher.dispatch(request, channel, threadPool.getThreadContext()); } protected void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception { diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java index 6a435c19efa..0516a449629 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java @@ -93,9 +93,12 @@ public class Netty4Plugin extends Plugin implements NetworkPlugin { @Override public Map> getHttpTransports(Settings settings, ThreadPool threadPool, BigArrays bigArrays, - CircuitBreakerService circuitBreakerService, NamedWriteableRegistry namedWriteableRegistry, - NamedXContentRegistry xContentRegistry, NetworkService networkService) { + CircuitBreakerService circuitBreakerService, + NamedWriteableRegistry namedWriteableRegistry, + NamedXContentRegistry xContentRegistry, + NetworkService networkService, + HttpServerTransport.Dispatcher dispatcher) { return Collections.singletonMap(NETTY_HTTP_TRANSPORT_NAME, - () -> new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry)); + () -> new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry, 
dispatcher)); } } diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpChannelTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpChannelTests.java index 457b2242af4..c7427e717b3 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpChannelTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpChannelTests.java @@ -188,7 +188,8 @@ public class Netty4HttpChannelTests extends ESTestCase { public void testHeadersSet() { Settings settings = Settings.builder().build(); try (Netty4HttpServerTransport httpServerTransport = - new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry())) { + new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry(), + (request, channel, context) -> {})) { httpServerTransport.start(); final FullHttpRequest httpRequest = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/"); httpRequest.headers().add(HttpHeaderNames.ORIGIN, "remote"); @@ -218,7 +219,8 @@ public class Netty4HttpChannelTests extends ESTestCase { public void testConnectionClose() throws Exception { final Settings settings = Settings.builder().build(); try (Netty4HttpServerTransport httpServerTransport = - new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry())) { + new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry(), + (request, channel, context) -> {})) { httpServerTransport.start(); final FullHttpRequest httpRequest; final boolean close = randomBoolean(); @@ -253,7 +255,8 @@ public class Netty4HttpChannelTests extends ESTestCase { private FullHttpResponse executeRequest(final Settings settings, final String originValue, final String host) { // construct request and send it over the transport layer try (Netty4HttpServerTransport httpServerTransport = - new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry())) { + new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, xContentRegistry(), + (request, channel, context) -> {})) { httpServerTransport.start(); final FullHttpRequest httpRequest = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/"); if (originValue != null) { diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerPipeliningTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerPipeliningTests.java index 5c7a249f74a..c0f8746d514 100644 --- a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerPipeliningTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerPipeliningTests.java @@ -160,7 +160,7 @@ public class Netty4HttpServerPipeliningTests extends ESTestCase { Netty4HttpServerPipeliningTests.this.networkService, Netty4HttpServerPipeliningTests.this.bigArrays, Netty4HttpServerPipeliningTests.this.threadPool, - xContentRegistry()); + xContentRegistry(), (request, channel, context) -> {}); } @Override diff --git a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerTransportTests.java b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerTransportTests.java index 7481ba4c3a3..e3dd6d8a78e 100644 --- 
a/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerTransportTests.java +++ b/modules/transport-netty4/src/test/java/org/elasticsearch/http/netty4/Netty4HttpServerTransportTests.java @@ -121,9 +121,8 @@ public class Netty4HttpServerTransportTests extends ESTestCase { */ public void testExpectContinueHeader() throws Exception { try (Netty4HttpServerTransport transport = new Netty4HttpServerTransport(Settings.EMPTY, networkService, bigArrays, threadPool, - xContentRegistry())) { - transport.httpServerAdapter((request, channel, context) -> - channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, new BytesArray("done")))); + xContentRegistry(), (request, channel, context) -> + channel.sendResponse(new BytesRestResponse(OK, BytesRestResponse.TEXT_CONTENT_TYPE, new BytesArray("done"))))) { transport.start(); TransportAddress remoteAddress = randomFrom(transport.boundAddress().boundAddresses()); @@ -145,12 +144,12 @@ public class Netty4HttpServerTransportTests extends ESTestCase { public void testBindUnavailableAddress() { try (Netty4HttpServerTransport transport = new Netty4HttpServerTransport(Settings.EMPTY, networkService, bigArrays, threadPool, - xContentRegistry())) { + xContentRegistry(), (request, channel, context) -> {})) { transport.start(); TransportAddress remoteAddress = randomFrom(transport.boundAddress().boundAddresses()); Settings settings = Settings.builder().put("http.port", remoteAddress.getPort()).build(); try (Netty4HttpServerTransport otherTransport = new Netty4HttpServerTransport(settings, networkService, bigArrays, threadPool, - xContentRegistry())) { + xContentRegistry(), (request, channel, context) -> {})) { BindHttpException bindHttpException = expectThrows(BindHttpException.class, () -> otherTransport.start()); assertEquals("Failed to bind to [" + remoteAddress.getPort() + "]", bindHttpException.getMessage()); } diff --git a/test/framework/src/main/java/org/elasticsearch/node/MockNode.java b/test/framework/src/main/java/org/elasticsearch/node/MockNode.java index 6a2e6bacae4..8774ba5836b 100644 --- a/test/framework/src/main/java/org/elasticsearch/node/MockNode.java +++ b/test/framework/src/main/java/org/elasticsearch/node/MockNode.java @@ -29,18 +29,14 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.BoundTransportAddress; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.common.util.MockBigArrays; -import org.elasticsearch.discovery.zen.UnicastHostsProvider; -import org.elasticsearch.discovery.zen.ZenPing; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.indices.recovery.RecoverySettings; -import org.elasticsearch.node.internal.InternalSettingsPreparer; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.MockSearchService; import org.elasticsearch.search.SearchService; import org.elasticsearch.search.fetch.FetchPhase; -import org.elasticsearch.test.discovery.MockZenPing; import org.elasticsearch.test.transport.MockTransportService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.Transport; diff --git a/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java b/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java index 01bab59eb27..2a230106994 100644 --- 
a/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/test/AbstractQueryTestCase.java @@ -81,7 +81,7 @@ import org.elasticsearch.indices.analysis.AnalysisModule; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.indices.mapper.MapperRegistry; -import org.elasticsearch.node.internal.InternalSettingsPreparer; +import org.elasticsearch.node.InternalSettingsPreparer; import org.elasticsearch.plugins.MapperPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.plugins.PluginsService; diff --git a/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java b/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java index 6c575126276..b4aee750a31 100644 --- a/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java +++ b/test/framework/src/main/java/org/elasticsearch/test/InternalTestCluster.java @@ -27,7 +27,6 @@ import com.carrotsearch.randomizedtesting.generators.RandomStrings; import org.apache.logging.log4j.Logger; import org.apache.lucene.util.IOUtils; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse; import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse; import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; @@ -85,7 +84,7 @@ import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.node.MockNode; import org.elasticsearch.node.Node; import org.elasticsearch.node.NodeValidationException; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.script.ScriptService; import org.elasticsearch.search.SearchService;

From d80e3eea6c93300f8d6e8ea90fb1e5e38e0dee3e Mon Sep 17 00:00:00 2001
From: Boaz Leskes
Date: Mon, 16 Jan 2017 21:14:41 +0100
Subject: [PATCH 24/28] Replace EngineClosedException with AlreadyClosedException (#22631)

`EngineClosedException` is an ES-level exception used to indicate that the
engine is closed when an operation starts. It doesn't really add much value
and we can use `AlreadyClosedException` from Lucene (which may already bubble
up if things go wrong during operations). Having two exceptions just adds
confusion and leads to bugs, like the wrong handling of
`EngineClosedException` when dealing with document-level failures. The latter
was exposed by `IndexWithShadowReplicasIT`.

This PR also removes the AwaitsFix from the `IndexWithShadowReplicasIT` tests
(which is what caused this to be discovered). While debugging the source of
the issue I found some mismatches in document uid management in the tests:
the term that was passed to the engine didn't correspond to the uid in the
parsed doc - those are fixed as well.
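As a minimal illustrative sketch (not part of this patch; the one-method
`Engine` interface below is a hypothetical stand-in for the real engine
surface), this is the calling pattern the change converges on: with the
ES-specific exception removed, Lucene's `AlreadyClosedException` becomes the
single "engine is gone" case that callers handle:

    import org.apache.lucene.store.AlreadyClosedException;

    public class EngineAccessSketch {

        /** Hypothetical stand-in for the engine methods used below. */
        interface Engine {
            void refresh(String source);
        }

        private final Engine engine;

        EngineAccessSketch(Engine engine) {
            this.engine = engine;
        }

        void refreshQuietly() {
            try {
                engine.refresh("sketch");
            } catch (AlreadyClosedException e) {
                // The engine (or its internal IndexWriter) was closed concurrently.
                // Previously callers had to catch both EngineClosedException and
                // AlreadyClosedException here; now there is exactly one case.
            }
        }
    }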
--- .../action/bulk/TransportShardBulkAction.java | 4 - .../org/elasticsearch/index/IndexService.java | 5 +- .../elasticsearch/index/engine/Engine.java | 7 +- .../index/engine/EngineClosedException.java | 1 + .../index/engine/InternalEngine.java | 23 +- .../index/engine/ShadowEngine.java | 3 - .../elasticsearch/index/shard/IndexShard.java | 20 +- .../indices/IndexingMemoryController.java | 4 +- .../TransportReplicationActionTests.java | 6 +- .../elasticsearch/index/IndexModuleTests.java | 5 +- .../index/IndexWithShadowReplicasIT.java | 7 - .../index/engine/InternalEngineTests.java | 669 ++++++++++-------- .../index/engine/ShadowEngineTests.java | 135 ++-- .../index/mapper/TextFieldMapperTests.java | 4 +- .../index/shard/IndexShardIT.java | 8 +- .../index/shard/IndexShardTests.java | 23 +- .../shard/IndexingOperationListenerTests.java | 9 +- .../index/shard/RefreshListenersTests.java | 7 +- .../index/translog/TranslogTests.java | 15 +- 19 files changed, 510 insertions(+), 445 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index b4c3daee08f..2a9ee444941 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -50,14 +50,12 @@ import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.Mapping; import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.IndexShardClosedException; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.indices.IndicesService; @@ -391,8 +389,6 @@ public class TransportShardBulkAction extends TransportWriteAction indexSettings.getFlushThresholdSize().getBytes(); - } catch (AlreadyClosedException | EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // that's fine we are already close - no need to flush } } @@ -1304,7 +1303,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl public void activateThrottling() { try { getEngine().activateThrottling(); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // ignore } } @@ -1312,13 +1311,13 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl public void deactivateThrottling() { try { getEngine().deactivateThrottling(); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // ignore } } private void handleRefreshException(Exception e) { - if (e instanceof EngineClosedException) { + if (e instanceof AlreadyClosedException) { // ignore } else if (e instanceof RefreshFailedEngineException) { RefreshFailedEngineException rfee = (RefreshFailedEngineException) e; @@ -1530,7 +1529,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl Engine getEngine() { Engine engine = getEngineOrNull(); if (engine == null) { - throw new 
EngineClosedException(shardId); + throw new AlreadyClosedException("engine is closed"); } return engine; } @@ -1667,7 +1666,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl private Engine createNewEngine(EngineConfig config) { synchronized (mutex) { if (state == IndexShardState.CLOSED) { - throw new EngineClosedException(shardId); + throw new AlreadyClosedException(shardId + " can't create engine - shard is closed"); } assert this.currentEngineReference.get() == null; Engine engine = newEngine(config); @@ -1769,7 +1768,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl try { final Engine engine = getEngine(); engine.getTranslog().ensureSynced(candidates.stream().map(Tuple::v1)); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // that's fine since we already synced everything on engine close - this also is conform with the methods // documentation } catch (IOException ex) { // if this fails we are in deep shit - fail the request @@ -1884,8 +1883,7 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl * refresh listeners. * Otherwise false. * - * @throws EngineClosedException if the engine is already closed - * @throws AlreadyClosedException if the internal indexwriter in the engine is already closed + * @throws AlreadyClosedException if the engine or internal indexwriter in the engine is already closed */ public boolean isRefreshNeeded() { return getEngine().refreshNeeded() || (refreshListeners != null && refreshListeners.refreshNeeded()); diff --git a/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java b/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java index d7c86a77a33..cbbd7b84213 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java +++ b/core/src/main/java/org/elasticsearch/indices/IndexingMemoryController.java @@ -21,6 +21,7 @@ package org.elasticsearch.indices; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.store.AlreadyClosedException; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; @@ -30,7 +31,6 @@ import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardState; import org.elasticsearch.index.shard.IndexingOperationListener; @@ -384,7 +384,7 @@ public class IndexingMemoryController extends AbstractComponent implements Index protected void checkIdle(IndexShard shard, long inactiveTimeNS) { try { shard.checkIdle(inactiveTimeNS); - } catch (EngineClosedException e) { + } catch (AlreadyClosedException e) { logger.trace((Supplier) () -> new ParameterizedMessage("ignore exception while checking if shard {} is inactive", shard.shardId()), e); } } diff --git a/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java b/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java index b929681032e..8e5950fe9f9 100644 --- 
a/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java +++ b/core/src/test/java/org/elasticsearch/action/support/replication/TransportReplicationActionTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.support.replication; +import org.apache.lucene.store.AlreadyClosedException; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.UnavailableShardsException; @@ -55,7 +56,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; -import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.IndexShardClosedException; import org.elasticsearch.index.shard.IndexShardState; @@ -431,12 +431,12 @@ public class TransportReplicationActionTests extends ESTestCase { } } - private ElasticsearchException randomRetryPrimaryException(ShardId shardId) { + private Exception randomRetryPrimaryException(ShardId shardId) { return randomFrom( new ShardNotFoundException(shardId), new IndexNotFoundException(shardId.getIndex()), new IndexShardClosedException(shardId), - new EngineClosedException(shardId), + new AlreadyClosedException(shardId + " primary is closed"), new ReplicationOperation.RetryOnPrimaryException(shardId, "hello") ); } diff --git a/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java b/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java index 832352b0278..0958bbc6055 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java +++ b/core/src/test/java/org/elasticsearch/index/IndexModuleTests.java @@ -48,7 +48,9 @@ import org.elasticsearch.index.cache.query.IndexQueryCache; import org.elasticsearch.index.cache.query.QueryCache; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.engine.EngineException; +import org.elasticsearch.index.engine.InternalEngineTests; import org.elasticsearch.index.fielddata.IndexFieldDataCache; +import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.shard.IndexEventListener; import org.elasticsearch.index.shard.IndexSearcherWrapper; import org.elasticsearch.index.shard.IndexingOperationListener; @@ -247,7 +249,8 @@ public class IndexModuleTests extends ESTestCase { assertEquals(IndexingSlowLog.class, indexService.getIndexOperationListeners().get(0).getClass()); assertSame(listener, indexService.getIndexOperationListeners().get(1)); - Engine.Index index = new Engine.Index(new Term("_uid", "1"), null); + ParsedDocument doc = InternalEngineTests.createParsedDoc("1", "test", null); + Engine.Index index = new Engine.Index(new Term("_uid", doc.uid()), doc); ShardId shardId = new ShardId(new Index("foo", "bar"), 0); for (IndexingOperationListener l : indexService.getIndexOperationListeners()) { l.preIndex(shardId, index); diff --git a/core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java b/core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java index 2f74194e256..e713f14be14 100644 --- a/core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java +++ b/core/src/test/java/org/elasticsearch/index/IndexWithShadowReplicasIT.java @@ -19,7 +19,6 @@ package org.elasticsearch.index; -import org.apache.lucene.util.LuceneTestCase; import org.elasticsearch.ElasticsearchException; import 
org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.DocWriteResponse; @@ -34,7 +33,6 @@ import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.metadata.IndexMetaData; -import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.cluster.routing.RoutingNodes; import org.elasticsearch.common.Priority; @@ -58,9 +56,6 @@ import org.elasticsearch.test.ESIntegTestCase; import org.elasticsearch.test.InternalTestCluster; import org.elasticsearch.test.junit.annotations.TestLogging; import org.elasticsearch.test.transport.MockTransportService; -import org.elasticsearch.transport.ConnectionProfile; -import org.elasticsearch.transport.Transport; -import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportRequest; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; @@ -91,7 +86,6 @@ import static org.hamcrest.Matchers.greaterThanOrEqualTo; /** * Tests for indices that use shadow replicas and a shared filesystem */ -@LuceneTestCase.AwaitsFix(bugUrl = "fix this fails intermittently") @ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.TEST, numDataNodes = 0) public class IndexWithShadowReplicasIT extends ESIntegTestCase { @@ -459,7 +453,6 @@ public class IndexWithShadowReplicasIT extends ESIntegTestCase { assertHitCount(resp, numPhase1Docs + numPhase2Docs); } - @AwaitsFix(bugUrl = "uncaught exception") public void testPrimaryRelocationWhereRecoveryFails() throws Exception { Path dataPath = createTempDir(); Settings nodeSettings = Settings.builder() diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index 6f85d65bc91..ef05d8f27ca 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -106,9 +106,9 @@ import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.RootObjectMapper; import org.elasticsearch.index.mapper.SeqNoFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.seqno.SequenceNumbersService; -import org.elasticsearch.index.shard.DocsStats; import org.elasticsearch.index.shard.IndexSearcherWrapper; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardUtils; @@ -145,6 +145,7 @@ import java.util.HashSet; import java.util.List; import java.util.Locale; import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.concurrent.BrokenBarrierException; import java.util.concurrent.CountDownLatch; @@ -157,7 +158,6 @@ import java.util.function.LongSupplier; import java.util.function.Supplier; import static java.util.Collections.emptyMap; -import static java.util.Collections.max; import static org.elasticsearch.index.engine.Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.PEER_RECOVERY; import static org.elasticsearch.index.engine.Engine.Operation.Origin.PRIMARY; @@ -261,19 +261,22 @@ public class InternalEngineTests extends ESTestCase { } - 
private Document testDocumentWithTextField() { + private static Document testDocumentWithTextField() { Document document = testDocument(); document.add(new TextField("value", "test", Field.Store.YES)); return document; } - private Document testDocument() { + private static Document testDocument() { return new Document(); } + public static ParsedDocument createParsedDoc(String id, String type, String routing) { + return testParsedDocument(id, type, routing, testDocumentWithTextField(), new BytesArray("{ \"value\" : \"test\" }"), null); + } - private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, Document document, BytesReference source, Mapping mappingUpdate) { - Field uidField = new Field("_uid", uid, UidFieldMapper.Defaults.FIELD_TYPE); + private static ParsedDocument testParsedDocument(String id, String type, String routing, Document document, BytesReference source, Mapping mappingUpdate) { + Field uidField = new Field("_uid", Uid.createUid(type, id), UidFieldMapper.Defaults.FIELD_TYPE); Field versionField = new NumericDocValuesField("_version", 0); SeqNoFieldMapper.SequenceID seqID = SeqNoFieldMapper.SequenceID.emptySeqID(); document.add(uidField); @@ -401,11 +404,11 @@ public class InternalEngineTests extends ESTestCase { assertThat(engine.segmentsStats(false).getMemoryInBytes(), equalTo(0L)); // create two docs and refresh - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - Engine.Index first = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + Engine.Index first = indexForDoc(doc); Engine.IndexResult firstResult = engine.index(first); - ParsedDocument doc2 = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), B_2, null); - Engine.Index second = new Engine.Index(newUid("2"), doc2); + ParsedDocument doc2 = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_2, null); + Engine.Index second = indexForDoc(doc2); Engine.IndexResult secondResult = engine.index(second); assertThat(secondResult.getTranslogLocation(), greaterThan(firstResult.getTranslogLocation())); engine.refresh("test"); @@ -437,8 +440,8 @@ public class InternalEngineTests extends ESTestCase { assertThat(segments.get(0).getDeletedDocs(), equalTo(0)); assertThat(segments.get(0).isCompound(), equalTo(true)); - ParsedDocument doc3 = testParsedDocument("3", "3", "test", null, testDocumentWithTextField(), B_3, null); - engine.index(new Engine.Index(newUid("3"), doc3)); + ParsedDocument doc3 = testParsedDocument("3", "test", null, testDocumentWithTextField(), B_3, null); + engine.index(indexForDoc(doc3)); engine.refresh("test"); segments = engine.segments(false); @@ -464,7 +467,7 @@ public class InternalEngineTests extends ESTestCase { assertThat(segments.get(1).isCompound(), equalTo(true)); - engine.delete(new Engine.Delete("test", "1", newUid("1"))); + engine.delete(new Engine.Delete("test", "1", newUid(doc))); engine.refresh("test"); segments = engine.segments(false); @@ -484,8 +487,8 @@ public class InternalEngineTests extends ESTestCase { assertThat(segments.get(1).isCompound(), equalTo(true)); engine.onSettingsChanged(); - ParsedDocument doc4 = testParsedDocument("4", "4", "test", null, testDocumentWithTextField(), B_3, null); - engine.index(new Engine.Index(newUid("4"), doc4)); + ParsedDocument doc4 = testParsedDocument("4", "test", null, testDocumentWithTextField(), B_3, null); + 
engine.index(indexForDoc(doc4)); engine.refresh("test"); segments = engine.segments(false); @@ -518,19 +521,19 @@ public class InternalEngineTests extends ESTestCase { List segments = engine.segments(true); assertThat(segments.isEmpty(), equalTo(true)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); engine.refresh("test"); segments = engine.segments(true); assertThat(segments.size(), equalTo(1)); assertThat(segments.get(0).ramTree, notNullValue()); - ParsedDocument doc2 = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), B_2, null); - engine.index(new Engine.Index(newUid("2"), doc2)); + ParsedDocument doc2 = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_2, null); + engine.index(indexForDoc(doc2)); engine.refresh("test"); - ParsedDocument doc3 = testParsedDocument("3", "3", "test", null, testDocumentWithTextField(), B_3, null); - engine.index(new Engine.Index(newUid("3"), doc3)); + ParsedDocument doc3 = testParsedDocument("3", "test", null, testDocumentWithTextField(), B_3, null); + engine.index(indexForDoc(doc3)); engine.refresh("test"); segments = engine.segments(true); @@ -544,12 +547,12 @@ public class InternalEngineTests extends ESTestCase { public void testSegmentsWithMergeFlag() throws Exception { try (Store store = createStore(); Engine engine = createEngine(defaultSettings, store, createTempDir(), new TieredMergePolicy())) { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); engine.index(index); engine.flush(); assertThat(engine.segments(false).size(), equalTo(1)); - index = new Engine.Index(newUid("2"), doc); + index = indexForDoc(testParsedDocument("2", "test", null, testDocument(), B_1, null)); engine.index(index); engine.flush(); List segments = engine.segments(false); @@ -557,7 +560,7 @@ public class InternalEngineTests extends ESTestCase { for (Segment segment : segments) { assertThat(segment.getMergeId(), nullValue()); } - index = new Engine.Index(newUid("3"), doc); + index = indexForDoc(testParsedDocument("3", "test", null, testDocument(), B_1, null)); engine.index(index); engine.flush(); segments = engine.segments(false); @@ -566,7 +569,7 @@ public class InternalEngineTests extends ESTestCase { assertThat(segment.getMergeId(), nullValue()); } - index = new Engine.Index(newUid("4"), doc); + index = indexForDoc(doc); engine.index(index); engine.flush(); final long gen1 = store.readLastCommittedSegmentsInfo().getGeneration(); @@ -598,8 +601,8 @@ public class InternalEngineTests extends ESTestCase { Engine engine = createEngine(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE)) { assertThat(engine.segmentsStats(true).getFileSizes().size(), equalTo(0)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); engine.refresh("test"); SegmentsStats stats = engine.segmentsStats(true); @@ -608,8 +611,8 @@ public class 
InternalEngineTests extends ESTestCase { ObjectObjectCursor firstEntry = stats.getFileSizes().iterator().next(); - ParsedDocument doc2 = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), B_2, null); - engine.index(new Engine.Index(newUid("2"), doc2)); + ParsedDocument doc2 = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_2, null); + engine.index(indexForDoc(doc2)); engine.refresh("test"); assertThat(engine.segmentsStats(true).getFileSizes().get(firstEntry.key), greaterThan(firstEntry.value)); @@ -709,8 +712,8 @@ public class InternalEngineTests extends ESTestCase { public void testFlushIsDisabledDuringTranslogRecovery() throws IOException { assertFalse(engine.isRecovering()); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); engine.close(); engine = new InternalEngine(copy(engine.config(), EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG)); @@ -718,8 +721,8 @@ public class InternalEngineTests extends ESTestCase { assertTrue(engine.isRecovering()); engine.recoverFromTranslog(); assertFalse(engine.isRecovering()); - doc = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("2"), doc)); + doc = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); engine.flush(); } @@ -730,13 +733,13 @@ public class InternalEngineTests extends ESTestCase { try { initialEngine = engine; for (int i = 0; i < ops; i++) { - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), SOURCE, null); + final ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), SOURCE, null); if (randomBoolean()) { - final Engine.Index operation = new Engine.Index(newUid("test#1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); + final Engine.Index operation = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); operations.add(operation); initialEngine.index(operation); } else { - final Engine.Delete operation = new Engine.Delete("test", "1", newUid("test#1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()); + final Engine.Delete operation = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, i, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime()); operations.add(operation); initialEngine.delete(operation); } @@ -766,8 +769,8 @@ public class InternalEngineTests extends ESTestCase { initialEngine = engine; for (int i = 0; i < docs; i++) { final String id = Integer.toString(i); - final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); - initialEngine.index(new Engine.Index(newUid(id), doc)); + final ParsedDocument doc = testParsedDocument(id, "test", null, testDocumentWithTextField(), SOURCE, null); + initialEngine.index(indexForDoc(doc)); } } finally { IOUtils.close(initialEngine); @@ -795,33 +798,30 @@ public class InternalEngineTests extends ESTestCase { } public 
void testConcurrentGetAndFlush() throws Exception { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); final AtomicReference latestGetResult = new AtomicReference<>(); - latestGetResult.set(engine.get(new Engine.Get(true, newUid("1")))); + latestGetResult.set(engine.get(new Engine.Get(true, newUid(doc)))); final AtomicBoolean flushFinished = new AtomicBoolean(false); final CyclicBarrier barrier = new CyclicBarrier(2); - Thread getThread = new Thread() { - @Override - public void run() { - try { - barrier.await(); - } catch (InterruptedException | BrokenBarrierException e) { - throw new RuntimeException(e); + Thread getThread = new Thread(() -> { + try { + barrier.await(); + } catch (InterruptedException | BrokenBarrierException e) { + throw new RuntimeException(e); + } + while (flushFinished.get() == false) { + Engine.GetResult previousGetResult = latestGetResult.get(); + if (previousGetResult != null) { + previousGetResult.release(); } - while (flushFinished.get() == false) { - Engine.GetResult previousGetResult = latestGetResult.get(); - if (previousGetResult != null) { - previousGetResult.release(); - } - latestGetResult.set(engine.get(new Engine.Get(true, newUid("1")))); - if (latestGetResult.get().exists() == false) { - break; - } + latestGetResult.set(engine.get(new Engine.Get(true, newUid(doc)))); + if (latestGetResult.get().exists() == false) { + break; } } - }; + }); getThread.start(); barrier.await(); engine.flush(); @@ -839,8 +839,8 @@ public class InternalEngineTests extends ESTestCase { // create a document Document document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, document, B_1, null); + engine.index(indexForDoc(doc)); // its not there... 
searchResult = engine.acquireSearcher("test"); @@ -849,12 +849,12 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); // but, not there non realtime - Engine.GetResult getResult = engine.get(new Engine.Get(false, newUid("1"))); + Engine.GetResult getResult = engine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); getResult.release(); // but, we can still get it (in realtime) - getResult = engine.get(new Engine.Get(true, newUid("1"))); + getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -869,7 +869,7 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); // also in non realtime - getResult = engine.get(new Engine.Get(false, newUid("1"))); + getResult = engine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -878,8 +878,8 @@ public class InternalEngineTests extends ESTestCase { document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_2), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_2, null); - engine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_2, null); + engine.index(indexForDoc(doc)); // its not updated yet... searchResult = engine.acquireSearcher("test"); @@ -889,7 +889,7 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); // but, we can still get it (in realtime) - getResult = engine.get(new Engine.Get(true, newUid("1"))); + getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -904,7 +904,7 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); // now delete - engine.delete(new Engine.Delete("test", "1", newUid("1"))); + engine.delete(new Engine.Delete("test", "1", newUid(doc))); // its not deleted yet searchResult = engine.acquireSearcher("test"); @@ -914,7 +914,7 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); // but, get should not see it (in realtime) - getResult = engine.get(new Engine.Get(true, newUid("1"))); + getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); getResult.release(); @@ -930,8 +930,8 @@ public class InternalEngineTests extends ESTestCase { // add it back document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - engine.index(new Engine.Index(newUid("1"), doc, Versions.MATCH_DELETED)); + doc = testParsedDocument("1", "test", null, document, B_1, null); + engine.index(new Engine.Index(newUid(doc), doc, Versions.MATCH_DELETED)); // its not there... 
searchResult = engine.acquireSearcher("test"); @@ -954,7 +954,7 @@ public class InternalEngineTests extends ESTestCase { engine.flush(); // and, verify get (in real time) - getResult = engine.get(new Engine.Get(true, newUid("1"))); + getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -963,8 +963,8 @@ public class InternalEngineTests extends ESTestCase { // now do an update document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_1, null); + engine.index(indexForDoc(doc)); // its not updated yet... searchResult = engine.acquireSearcher("test"); @@ -989,8 +989,8 @@ public class InternalEngineTests extends ESTestCase { searchResult.close(); // create a document - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); // its not there... searchResult = engine.acquireSearcher("test"); @@ -1008,7 +1008,7 @@ public class InternalEngineTests extends ESTestCase { // don't release the search result yet... // delete, refresh and do a new search, it should not be there - engine.delete(new Engine.Delete("test", "1", newUid("1"))); + engine.delete(new Engine.Delete("test", "1", newUid(doc))); engine.refresh("test"); Engine.Searcher updateSearchResult = engine.acquireSearcher("test"); MatcherAssert.assertThat(updateSearchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(0)); @@ -1025,8 +1025,8 @@ public class InternalEngineTests extends ESTestCase { Engine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), new LogByteSizeMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) { final String syncId = randomUnicodeOfCodepointLengthBetween(10, 20); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); Engine.CommitId commitID = engine.flush(); assertThat(commitID, equalTo(new Engine.CommitId(store.readLastCommittedSegmentsInfo().getId()))); byte[] wrongBytes = Base64.getDecoder().decode(commitID.toString()); @@ -1034,7 +1034,7 @@ public class InternalEngineTests extends ESTestCase { Engine.CommitId wrongId = new Engine.CommitId(wrongBytes); assertEquals("should fail to sync flush with wrong id (but no docs)", engine.syncFlush(syncId + "1", wrongId), Engine.SyncedFlushResult.COMMIT_MISMATCH); - engine.index(new Engine.Index(newUid("2"), doc)); + engine.index(indexForDoc(doc)); assertEquals("should fail to sync flush with right id but pending doc", engine.syncFlush(syncId + "2", commitID), Engine.SyncedFlushResult.PENDING_OPERATIONS); commitID = engine.flush(); @@ -1052,20 +1052,20 @@ public class InternalEngineTests extends ESTestCase { InternalEngine engine = new InternalEngine(config(defaultSettings, store, createTempDir(), new LogDocMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) { final String syncId = 
randomUnicodeOfCodepointLengthBetween(10, 20); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - Engine.Index doc1 = new Engine.Index(newUid("1"), doc); + Engine.Index doc1 = indexForDoc(testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null)); engine.index(doc1); assertEquals(engine.getLastWriteNanos(), doc1.startTime()); engine.flush(); - Engine.Index doc2 = new Engine.Index(newUid("2"), doc); + Engine.Index doc2 = indexForDoc(testParsedDocument("2", "test", null, testDocumentWithTextField(), B_1, null)); engine.index(doc2); assertEquals(engine.getLastWriteNanos(), doc2.startTime()); engine.flush(); final boolean forceMergeFlushes = randomBoolean(); + final ParsedDocument parsedDoc3 = testParsedDocument("3", "test", null, testDocumentWithTextField(), B_1, null); if (forceMergeFlushes) { - engine.index(new Engine.Index(newUid("3"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime() - engine.engineConfig.getFlushMergesAfter().nanos(), -1, false)); + engine.index(new Engine.Index(newUid(parsedDoc3), parsedDoc3, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime() - engine.engineConfig.getFlushMergesAfter().nanos(), -1, false)); } else { - engine.index(new Engine.Index(newUid("3"), doc)); + engine.index(indexForDoc(parsedDoc3)); } Engine.CommitId commitID = engine.flush(); assertEquals("should succeed to flush commit with right id and no pending doc", engine.syncFlush(syncId, commitID), @@ -1087,7 +1087,7 @@ public class InternalEngineTests extends ESTestCase { assertEquals(engine.getLastCommittedSegmentInfos().getUserData().get(Engine.SYNC_COMMIT_ID), syncId); if (randomBoolean()) { - Engine.Index doc4 = new Engine.Index(newUid("4"), doc); + Engine.Index doc4 = indexForDoc(testParsedDocument("4", "test", null, testDocumentWithTextField(), B_1, null)); engine.index(doc4); assertEquals(engine.getLastWriteNanos(), doc4.startTime()); } else { @@ -1105,8 +1105,8 @@ public class InternalEngineTests extends ESTestCase { public void testSyncedFlushSurvivesEngineRestart() throws IOException { final String syncId = randomUnicodeOfCodepointLengthBetween(10, 20); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); final Engine.CommitId commitID = engine.flush(); assertEquals("should succeed to flush commit with right id and no pending doc", engine.syncFlush(syncId, commitID), Engine.SyncedFlushResult.SUCCESS); @@ -1128,15 +1128,15 @@ public class InternalEngineTests extends ESTestCase { public void testSyncedFlushVanishesOnReplay() throws IOException { final String syncId = randomUnicodeOfCodepointLengthBetween(10, 20); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); final Engine.CommitId commitID = engine.flush(); assertEquals("should succeed to flush commit with right id and no pending doc", engine.syncFlush(syncId, commitID), Engine.SyncedFlushResult.SUCCESS); 
assertEquals(store.readLastCommittedSegmentsInfo().getUserData().get(Engine.SYNC_COMMIT_ID), syncId); assertEquals(engine.getLastCommittedSegmentInfos().getUserData().get(Engine.SYNC_COMMIT_ID), syncId); - doc = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), new BytesArray("{}"), null); - engine.index(new Engine.Index(newUid("2"), doc)); + doc = testParsedDocument("2", "test", null, testDocumentWithTextField(), new BytesArray("{}"), null); + engine.index(indexForDoc(doc)); EngineConfig config = engine.config(); engine.close(); engine = new InternalEngine(copy(config, EngineConfig.OpenMode.OPEN_INDEX_AND_TRANSLOG)); @@ -1145,79 +1145,79 @@ public class InternalEngineTests extends ESTestCase { } public void testVersioningNewCreate() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index create = new Engine.Index(newUid("1"), doc, Versions.MATCH_DELETED); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index create = new Engine.Index(newUid(doc), doc, Versions.MATCH_DELETED); Engine.IndexResult indexResult = engine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); - create = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), create.primaryTerm(), indexResult.getVersion(), create.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); + create = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), create.primaryTerm(), indexResult.getVersion(), create.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); indexResult = replicaEngine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); } public void testVersioningNewIndex() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); indexResult = replicaEngine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); } public void testExternalVersioningNewIndex() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 12, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 12, VersionType.EXTERNAL, PRIMARY, 0, -1, false); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(12L)); - index = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, 
indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, 0, -1, false); indexResult = replicaEngine.index(index); assertThat(indexResult.getVersion(), equalTo(12L)); } public void testVersioningIndexConflict() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(2L)); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); // future versions should not work as well - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testExternalVersioningIndexConflict() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 12, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 12, VersionType.EXTERNAL, PRIMARY, 0, -1, false); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(12L)); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 14, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 14, VersionType.EXTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(14L)); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 13, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 13, VersionType.EXTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testForceVersioningNotAllowedExceptForOlderIndices() throws Exception { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 
0, 42, VersionType.FORCE, PRIMARY, 0, -1, false); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 42, VersionType.FORCE, PRIMARY, 0, -1, false); Engine.IndexResult indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); @@ -1229,13 +1229,13 @@ public class InternalEngineTests extends ESTestCase { .build()); try (Store store = createStore(); Engine engine = createEngine(oldIndexSettings, store, createTempDir(), NoMergePolicy.INSTANCE)) { - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 84, VersionType.FORCE, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 84, VersionType.FORCE, PRIMARY, 0, -1, false); Engine.IndexResult result = engine.index(index); assertTrue(result.hasFailure()); assertThat(result.getFailure(), instanceOf(IllegalArgumentException.class)); assertThat(result.getFailure().getMessage(), containsString("version type [FORCE] may not be used for non-translog operations")); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 84, VersionType.FORCE, + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 84, VersionType.FORCE, Engine.Operation.Origin.LOCAL_TRANSLOG_RECOVERY, 0, -1, false); result = engine.index(index); assertThat(result.getVersion(), equalTo(84L)); @@ -1243,42 +1243,42 @@ public class InternalEngineTests extends ESTestCase { } public void testVersioningIndexConflictWithFlush() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(2L)); engine.flush(); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); // future versions should not work as well - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testExternalVersioningIndexConflictWithFlush() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 12, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + 
Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 12, VersionType.EXTERNAL, PRIMARY, 0, -1, false); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(12L)); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 14, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 14, VersionType.EXTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(14L)); engine.flush(); - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 13, VersionType.EXTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 13, VersionType.EXTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); @@ -1290,8 +1290,8 @@ public class InternalEngineTests extends ESTestCase { new LogByteSizeMergePolicy(), IndexRequest.UNSET_AUTO_GENERATED_TIMESTAMP, null))) { // use log MP here we test some behavior in ESMP int numDocs = randomIntBetween(10, 100); for (int i = 0; i < numDocs; i++) { - ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid(Integer.toString(i)), doc); + ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); engine.index(index); engine.refresh("test"); } @@ -1301,8 +1301,8 @@ public class InternalEngineTests extends ESTestCase { engine.forceMerge(true, 1, false, false, false); assertEquals(engine.segments(true).size(), 1); - ParsedDocument doc = testParsedDocument(Integer.toString(0), Integer.toString(0), "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid(Integer.toString(0)), doc); + ParsedDocument doc = testParsedDocument(Integer.toString(0), "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); engine.delete(new Engine.Delete(index.type(), index.id(), index.uid())); engine.forceMerge(true, 10, true, false, false); //expunge deletes @@ -1312,8 +1312,8 @@ public class InternalEngineTests extends ESTestCase { assertEquals(engine.config().getMergePolicy().toString(), numDocs - 1, test.reader().maxDoc()); } - doc = testParsedDocument(Integer.toString(1), Integer.toString(1), "test", null, testDocument(), B_1, null); - index = new Engine.Index(newUid(Integer.toString(1)), doc); + doc = testParsedDocument(Integer.toString(1), "test", null, testDocument(), B_1, null); + index = indexForDoc(doc); engine.delete(new Engine.Delete(index.type(), index.id(), index.uid())); engine.forceMerge(true, 10, false, false, false); //expunge deletes @@ -1347,8 +1347,8 @@ public class InternalEngineTests extends ESTestCase { int numDocs = randomIntBetween(1, 20); for (int j = 0; j < numDocs; j++) { i++; - ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid(Integer.toString(i)), doc); + ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); engine.index(index); } 
engine.refresh("test"); @@ -1359,7 +1359,7 @@ public class InternalEngineTests extends ESTestCase { return; } } - } catch (AlreadyClosedException | EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // fine } } @@ -1380,57 +1380,57 @@ public class InternalEngineTests extends ESTestCase { } public void testVersioningDeleteConflict() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(2L)); - Engine.Delete delete = new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, PRIMARY, 0); + Engine.Delete delete = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, PRIMARY, 0); Engine.DeleteResult result = engine.delete(delete); assertTrue(result.hasFailure()); assertThat(result.getFailure(), instanceOf(VersionConflictEngineException.class)); // future versions should not work as well - delete = new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0); + delete = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0); result = engine.delete(delete); assertTrue(result.hasFailure()); assertThat(result.getFailure(), instanceOf(VersionConflictEngineException.class)); // now actually delete - delete = new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0); + delete = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0); result = engine.delete(delete); assertThat(result.getVersion(), equalTo(3L)); // now check if we can index to a delete doc with version - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testVersioningDeleteConflictWithFlush() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(2L)); engine.flush(); - Engine.Delete delete = new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, PRIMARY, 0); + Engine.Delete delete = new Engine.Delete("test", "1", 
newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1L, VersionType.INTERNAL, PRIMARY, 0); Engine.DeleteResult deleteResult = engine.delete(delete); assertTrue(deleteResult.hasFailure()); assertThat(deleteResult.getFailure(), instanceOf(VersionConflictEngineException.class)); // future versions should not work as well - delete = new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0); + delete = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 3L, VersionType.INTERNAL, PRIMARY, 0); deleteResult = engine.delete(delete); assertTrue(deleteResult.hasFailure()); assertThat(deleteResult.getFailure(), instanceOf(VersionConflictEngineException.class)); @@ -1438,58 +1438,58 @@ public class InternalEngineTests extends ESTestCase { engine.flush(); // now actually delete - delete = new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0); + delete = new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0); deleteResult = engine.delete(delete); assertThat(deleteResult.getVersion(), equalTo(3L)); engine.flush(); // now check if we can index to a delete doc with version - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0, -1, false); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2L, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testVersioningCreateExistsException() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index create = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index create = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); Engine.IndexResult indexResult = engine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); - create = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); + create = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(create); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testVersioningCreateExistsExceptionWithFlush() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index create = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index create = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); 
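Note for the create/exists tests around this point: the engine has no dedicated create operation. A "create" is an ordinary index operation whose expected version is `Versions.MATCH_DELETED` under `VersionType.INTERNAL`, so a second such operation on a live uid fails with `VersionConflictEngineException`. A minimal sketch of the pattern these tests repeat (the `createForDoc` helper name is ours, not part of this patch):

---------------------------------------------------------------------------
// Hypothetical helper mirroring the constructor calls in these tests.
// MATCH_DELETED means "only succeed if the uid is absent or deleted".
private Engine.Index createForDoc(ParsedDocument doc) {
    return new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0,
            Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false);
}
---------------------------------------------------------------------------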
Engine.IndexResult indexResult = engine.index(create); assertThat(indexResult.getVersion(), equalTo(1L)); engine.flush(); - create = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); + create = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, 0, -1, false); indexResult = engine.index(create); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); } public void testVersioningReplicaConflict1() { - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - final Engine.Index v1Index = new Engine.Index(newUid("1"), doc); + final ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + final Engine.Index v1Index = indexForDoc(doc); final Engine.IndexResult v1Result = engine.index(v1Index); assertThat(v1Result.getVersion(), equalTo(1L)); - final Engine.Index v2Index = new Engine.Index(newUid("1"), doc); + final Engine.Index v2Index = indexForDoc(doc); final Engine.IndexResult v2Result = engine.index(v2Index); assertThat(v2Result.getVersion(), equalTo(2L)); // apply the second index to the replica, should work fine final Engine.Index replicaV2Index = new Engine.Index( - newUid("1"), + newUid(doc), doc, v2Result.getSeqNo(), v2Index.primaryTerm(), @@ -1504,7 +1504,7 @@ public class InternalEngineTests extends ESTestCase { // now, the old one should produce an indexing result final Engine.Index replicaV1Index = new Engine.Index( - newUid("1"), + newUid(doc), doc, v1Result.getSeqNo(), v1Index.primaryTerm(), @@ -1527,14 +1527,14 @@ public class InternalEngineTests extends ESTestCase { } public void testVersioningReplicaConflict2() { - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - final Engine.Index v1Index = new Engine.Index(newUid("1"), doc); + final ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + final Engine.Index v1Index = indexForDoc(doc); final Engine.IndexResult v1Result = engine.index(v1Index); assertThat(v1Result.getVersion(), equalTo(1L)); // apply the first index to the replica, should work fine final Engine.Index replicaV1Index = new Engine.Index( - newUid("1"), + newUid(doc), doc, v1Result.getSeqNo(), v1Index.primaryTerm(), @@ -1548,12 +1548,12 @@ public class InternalEngineTests extends ESTestCase { assertThat(replicaV1Result.getVersion(), equalTo(1L)); // index it again - final Engine.Index v2Index = new Engine.Index(newUid("1"), doc); + final Engine.Index v2Index = indexForDoc(doc); final Engine.IndexResult v2Result = engine.index(v2Index); assertThat(v2Result.getVersion(), equalTo(2L)); // now delete it - final Engine.Delete delete = new Engine.Delete("test", "1", newUid("1")); + final Engine.Delete delete = new Engine.Delete("test", "1", newUid(doc)); final Engine.DeleteResult deleteResult = engine.delete(delete); assertThat(deleteResult.getVersion(), equalTo(3L)); @@ -1561,7 +1561,7 @@ public class InternalEngineTests extends ESTestCase { final Engine.Delete replicaDelete = new Engine.Delete( "test", "1", - newUid("1"), + newUid(doc), deleteResult.getSeqNo(), delete.primaryTerm(), deleteResult.getVersion(), @@ -1579,7 +1579,7 @@ public class InternalEngineTests extends ESTestCase { // now do the second index on the replica, it should result in the current version 
final Engine.Index replicaV2Index = new Engine.Index( - newUid("1"), + newUid(doc), doc, v2Result.getSeqNo(), v2Index.primaryTerm(), @@ -1596,33 +1596,33 @@ public class InternalEngineTests extends ESTestCase { } public void testBasicCreatedFlag() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertTrue(indexResult.isCreated()); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertFalse(indexResult.isCreated()); - engine.delete(new Engine.Delete(null, "1", newUid("1"))); + engine.delete(new Engine.Delete(null, "1", newUid(doc))); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertTrue(indexResult.isCreated()); } public void testCreatedFlagAfterFlush() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocument(), B_1, null); - Engine.Index index = new Engine.Index(newUid("1"), doc); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocument(), B_1, null); + Engine.Index index = indexForDoc(doc); Engine.IndexResult indexResult = engine.index(index); assertTrue(indexResult.isCreated()); - engine.delete(new Engine.Delete(null, "1", newUid("1"))); + engine.delete(new Engine.Delete(null, "1", newUid(doc))); engine.flush(); - index = new Engine.Index(newUid("1"), doc); + index = indexForDoc(doc); indexResult = engine.index(index); assertTrue(indexResult.isCreated()); } @@ -1667,14 +1667,14 @@ public class InternalEngineTests extends ESTestCase { try { // First, with DEBUG, which should NOT log IndexWriter output: - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); engine.flush(); assertFalse(mockAppender.sawIndexWriterMessage); // Again, with TRACE, which should log IndexWriter output: Loggers.setLevel(rootLogger, Level.TRACE); - engine.index(new Engine.Index(newUid("2"), doc)); + engine.index(indexForDoc(doc)); engine.flush(); assertTrue(mockAppender.sawIndexWriterMessage); @@ -1723,8 +1723,8 @@ public class InternalEngineTests extends ESTestCase { } else { // index a document id = randomFrom(ids); - ParsedDocument doc = testParsedDocument("test#" + id, id, "test", null, testDocumentWithTextField(), SOURCE, null); - final Engine.Index index = new Engine.Index(newUid("test#" + id), doc, + ParsedDocument doc = testParsedDocument(id, "test", null, testDocumentWithTextField(), SOURCE, null); + final Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, rarely() ? 
100 : Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, 0, -1, false); @@ -1820,22 +1820,19 @@ public class InternalEngineTests extends ESTestCase { // create N indexing threads to index documents simultaneously for (int threadNum = 0; threadNum < numIndexingThreads; threadNum++) { final int threadIdx = threadNum; - Thread indexingThread = new Thread() { - @Override - public void run() { - try { - barrier.await(); // wait for all threads to start at the same time - // index random number of docs - for (int i = 0; i < numDocsPerThread; i++) { - final String id = "thread" + threadIdx + "#" + i; - ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocument(), B_1, null); - engine.index(new Engine.Index(newUid(id), doc)); - } - } catch (Exception e) { - throw new RuntimeException(e); + Thread indexingThread = new Thread(() -> { + try { + barrier.await(); // wait for all threads to start at the same time + // index random number of docs + for (int i = 0; i < numDocsPerThread; i++) { + final String id = "thread" + threadIdx + "#" + i; + ParsedDocument doc = testParsedDocument(id, "test", null, testDocument(), B_1, null); + engine.index(indexForDoc(doc)); } + } catch (Exception e) { + throw new RuntimeException(e); } - }; + }); indexingThreads.add(indexingThread); } @@ -1930,15 +1927,15 @@ public class InternalEngineTests extends ESTestCase { try { // First, with DEBUG, which should NOT log IndexWriter output: - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + engine.index(indexForDoc(doc)); engine.flush(); assertFalse(mockAppender.sawIndexWriterMessage); assertFalse(mockAppender.sawIndexWriterIFDMessage); // Again, with TRACE, which should only log IndexWriter IFD output: Loggers.setLevel(iwIFDLogger, Level.TRACE); - engine.index(new Engine.Index(newUid("2"), doc)); + engine.index(indexForDoc(doc)); engine.flush(); assertFalse(mockAppender.sawIndexWriterMessage); assertTrue(mockAppender.sawIndexWriterIFDMessage); @@ -1959,14 +1956,14 @@ public class InternalEngineTests extends ESTestCase { Document document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_2, null); - engine.index(new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false)); + ParsedDocument doc = testParsedDocument("1", "test", null, document, B_2, null); + engine.index(new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false)); // Delete document we just added: - engine.delete(new Engine.Delete("test", "1", newUid("1"), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime())); + engine.delete(new Engine.Delete("test", "1", newUid(doc), SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 10, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime())); // Get should not find the document - Engine.GetResult getResult = engine.get(new Engine.Get(true, newUid("1"))); + Engine.GetResult getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); // Give the gc pruning 
logic a chance to kick in @@ -1984,23 +1981,23 @@ public class InternalEngineTests extends ESTestCase { assertThat(getResult.exists(), equalTo(false)); // Try to index uid=1 with a too-old version, should fail: - Engine.Index index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(index); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); // Get should still not find the document - getResult = engine.get(new Engine.Get(true, newUid("1"))); + getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); // Try to index uid=2 with a too-old version, should fail: - Engine.Index index1 = new Engine.Index(newUid("2"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); + Engine.Index index1 = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false); indexResult = engine.index(index1); assertTrue(indexResult.hasFailure()); assertThat(indexResult.getFailure(), instanceOf(VersionConflictEngineException.class)); // Get should not find the document - getResult = engine.get(new Engine.Get(true, newUid("2"))); + getResult = engine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); } } @@ -2009,6 +2006,14 @@ public class InternalEngineTests extends ESTestCase { return new Term("_uid", id); } + protected Term newUid(ParsedDocument doc) { + return new Term("_uid", doc.uid()); + } + + private Engine.Index indexForDoc(ParsedDocument doc) { + return new Engine.Index(newUid(doc), doc); + } + public void testExtractShardId() { try (Engine.Searcher test = this.engine.acquireSearcher("test")) { ShardId shardId = ShardUtils.extractShardId(test.getDirectoryReader()); @@ -2091,8 +2096,8 @@ public class InternalEngineTests extends ESTestCase { public void testTranslogReplayWithFailure() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { - ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(i)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2141,8 +2146,8 @@ public class InternalEngineTests extends ESTestCase { public void testSkipTranslogReplay() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { - ParsedDocument doc = 
testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(i)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2234,8 +2239,8 @@ public class InternalEngineTests extends ESTestCase { } final int numExtraDocs = randomIntBetween(1, 10); for (int i = 0; i < numExtraDocs; i++) { - ParsedDocument doc = testParsedDocument("extra" + Integer.toString(i), "extra" + Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(i)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument("extra" + Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2263,8 +2268,8 @@ public class InternalEngineTests extends ESTestCase { public void testTranslogReplay() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { - ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(i)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); } @@ -2305,17 +2310,16 @@ public class InternalEngineTests extends ESTestCase { final boolean flush = randomBoolean(); int randomId = randomIntBetween(numDocs + 1, numDocs + 10); - String uuidValue = "test#" + Integer.toString(randomId); - ParsedDocument doc = testParsedDocument(uuidValue, Integer.toString(randomId), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(uuidValue), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument(Integer.toString(randomId), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, 
SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 1, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult indexResult = engine.index(firstIndexRequest); assertThat(indexResult.getVersion(), equalTo(1L)); if (flush) { engine.flush(); } - doc = testParsedDocument(uuidValue, Integer.toString(randomId), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index idxRequest = new Engine.Index(newUid(uuidValue), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); + doc = testParsedDocument(Integer.toString(randomId), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index idxRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, 2, VersionType.EXTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult result = engine.index(idxRequest); engine.refresh("test"); assertThat(result.getVersion(), equalTo(2L)); @@ -2332,7 +2336,7 @@ public class InternalEngineTests extends ESTestCase { } parser = (TranslogHandler) engine.config().getTranslogRecoveryPerformer(); assertEquals(flush ? 1 : 2, parser.recoveredOps.get()); - engine.delete(new Engine.Delete("test", Integer.toString(randomId), newUid(uuidValue))); + engine.delete(new Engine.Delete("test", Integer.toString(randomId), newUid(doc))); if (randomBoolean()) { engine.refresh("test"); } else { @@ -2381,8 +2385,8 @@ public class InternalEngineTests extends ESTestCase { public void testRecoverFromForeignTranslog() throws IOException { final int numDocs = randomIntBetween(1, 10); for (int i = 0; i < numDocs; i++) { - ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(i)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); Engine.IndexResult index = engine.index(firstIndexRequest); assertThat(index.getVersion(), equalTo(1L)); } @@ -2469,8 +2473,8 @@ public class InternalEngineTests extends ESTestCase { // create { - ParsedDocument doc = testParsedDocument(Integer.toString(0), Integer.toString(0), "test", null, testDocument(), new BytesArray("{}"), null); - Engine.Index firstIndexRequest = new Engine.Index(newUid(Integer.toString(0)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); + ParsedDocument doc = testParsedDocument(Integer.toString(0), "test", null, testDocument(), new BytesArray("{}"), null); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_DELETED, VersionType.INTERNAL, PRIMARY, System.nanoTime(), -1, false); try (InternalEngine engine = new InternalEngine(copy(config, EngineConfig.OpenMode.CREATE_INDEX_AND_TRANSLOG))){ assertFalse(engine.isRecovering()); @@ -2529,11 +2533,11 @@ public class InternalEngineTests extends ESTestCase { } public void testCheckDocumentFailure() throws Exception { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, 
testDocumentWithTextField(), B_1, null); - Exception documentFailure = engine.checkIfDocumentFailureOrThrow(new Engine.Index(newUid("1"), doc), new IOException("simulated document failure")); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + Exception documentFailure = engine.checkIfDocumentFailureOrThrow(indexForDoc(doc), new IOException("simulated document failure")); assertThat(documentFailure, instanceOf(IOException.class)); try { - engine.checkIfDocumentFailureOrThrow(new Engine.Index(newUid("1"), doc), new CorruptIndexException("simulated environment failure", "")); + engine.checkIfDocumentFailureOrThrow(indexForDoc(doc), new CorruptIndexException("simulated environment failure", "")); fail("expected exception to be thrown"); } catch (Exception environmentException) { assertThat(environmentException.getMessage(), containsString("simulated environment failure")); @@ -2541,7 +2545,7 @@ } private static class ThrowingIndexWriter extends IndexWriter { - private boolean throwDocumentFailure; + private AtomicReference<Exception> failureToThrow = new AtomicReference<>(); public ThrowingIndexWriter(Directory d, IndexWriterConfig conf) throws IOException { super(d, conf); @@ -2549,54 +2553,109 @@ @Override public long addDocument(Iterable<? extends IndexableField> doc) throws IOException { - if (throwDocumentFailure) { - throw new IOException("simulated"); + maybeThrowFailure(); + return super.addDocument(doc); + } + + private void maybeThrowFailure() throws IOException { + Exception failure = failureToThrow.get(); + if (failure instanceof RuntimeException) { + throw (RuntimeException)failure; + } else if (failure instanceof IOException) { + throw (IOException)failure; } else { - return super.addDocument(doc); + assert failure == null : "unsupported failure class: " + failure.getClass().getCanonicalName(); } } @Override public long deleteDocuments(Term... 
terms) throws IOException { - if (throwDocumentFailure) { - throw new IOException("simulated"); - } else { - return super.deleteDocuments(terms); - } + maybeThrowFailure(); + return super.deleteDocuments(terms); } - public void setThrowDocumentFailure(boolean throwDocumentFailure) { - this.throwDocumentFailure = throwDocumentFailure; + public void setThrowFailure(IOException documentFailure) { + Objects.requireNonNull(documentFailure); + failureToThrow.set(documentFailure); + } + + public void setThrowFailure(RuntimeException runtimeFailure) { + Objects.requireNonNull(runtimeFailure); + failureToThrow.set(runtimeFailure); + } + + public void clearFailure() { + failureToThrow.set(null); } } public void testHandleDocumentFailure() throws Exception { try (Store store = createStore()) { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); + final ParsedDocument doc1 = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + final ParsedDocument doc2 = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_1, null); + final ParsedDocument doc3 = testParsedDocument("3", "test", null, testDocumentWithTextField(), B_1, null); ThrowingIndexWriter throwingIndexWriter = new ThrowingIndexWriter(store.directory(), new IndexWriterConfig()); try (Engine engine = createEngine(defaultSettings, store, createTempDir(), NoMergePolicy.INSTANCE, () -> throwingIndexWriter)) { // test document failure while indexing - throwingIndexWriter.setThrowDocumentFailure(true); - Engine.IndexResult indexResult = engine.index(randomAppendOnly(1, doc, false)); + if (randomBoolean()) { + throwingIndexWriter.setThrowFailure(new IOException("simulated")); + } else { + throwingIndexWriter.setThrowFailure(new IllegalArgumentException("simulated max token length")); + } + Engine.IndexResult indexResult = engine.index(indexForDoc(doc1)); assertNotNull(indexResult.getFailure()); - throwingIndexWriter.setThrowDocumentFailure(false); - indexResult = engine.index(randomAppendOnly(1, doc, false)); + throwingIndexWriter.clearFailure(); + indexResult = engine.index(indexForDoc(doc1)); assertNull(indexResult.getFailure()); + engine.index(indexForDoc(doc2)); // test document failure while deleting - throwingIndexWriter.setThrowDocumentFailure(true); - Engine.DeleteResult deleteResult = engine.delete(new Engine.Delete("test", "", newUid("1"))); + if (randomBoolean()) { + throwingIndexWriter.setThrowFailure(new IOException("simulated")); + } else { + throwingIndexWriter.setThrowFailure(new IllegalArgumentException("simulated max token length")); + } + Engine.DeleteResult deleteResult = engine.delete(new Engine.Delete("test", "1", newUid(doc1))); assertNotNull(deleteResult.getFailure()); + + // test non document level failure is thrown + if (randomBoolean()) { + // simulate close by corruption + throwingIndexWriter.setThrowFailure(new CorruptIndexException("simulated", "hello")); + try { + if (randomBoolean()) { + engine.index(indexForDoc(doc3)); + } else { + engine.delete(new Engine.Delete("test", "2", newUid(doc2))); + } + fail("corruption should throw exceptions"); + } catch (Exception e) { + assertThat(e, instanceOf(CorruptIndexException.class)); + } + } else { + // normal close + engine.close(); + } + // now the engine is closed check we respond correctly + try { + if (randomBoolean()) { + engine.index(indexForDoc(doc1)); + } else { + engine.delete(new Engine.Delete("test", "", newUid(doc1))); + } + fail("engine should be closed"); + } catch 
(Exception e) { + assertThat(e, instanceOf(AlreadyClosedException.class)); + } } } - } public void testDoubleDelivery() throws IOException { - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); - Engine.Index operation = randomAppendOnly(1, doc, false); - Engine.Index retry = randomAppendOnly(1, doc, true); + final ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); + Engine.Index operation = randomAppendOnly(doc, false, 1); + Engine.Index retry = randomAppendOnly(doc, true, 1); if (randomBoolean()) { Engine.IndexResult indexResult = engine.index(operation); assertFalse(engine.indexWriterHasDeletions()); @@ -2624,8 +2683,8 @@ public class InternalEngineTests extends ESTestCase { TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), 10); assertEquals(1, topDocs.totalHits); } - operation = randomAppendOnly(1, doc, false); - retry = randomAppendOnly(1, doc, true); + operation = randomAppendOnly(doc, false, 1); + retry = randomAppendOnly(doc, true, 1); if (randomBoolean()) { Engine.IndexResult indexResult = engine.index(operation); assertNotNull(indexResult.getTranslogLocation()); @@ -2650,20 +2709,20 @@ public class InternalEngineTests extends ESTestCase { public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOException { - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); + final ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); boolean isRetry = false; long autoGeneratedIdTimestamp = 0; - Engine.Index index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); - index = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + index = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); indexResult = replicaEngine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); isRetry = true; - index = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + index = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); indexResult = engine.index(index); assertThat(indexResult.getVersion(), equalTo(1L)); engine.refresh("test"); @@ -2672,7 +2731,7 @@ public class InternalEngineTests extends 
ESTestCase { assertEquals(1, topDocs.totalHits); } - index = new Engine.Index(newUid("1"), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + index = new Engine.Index(newUid(doc), doc, indexResult.getSeqNo(), index.primaryTerm(), indexResult.getVersion(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); indexResult = replicaEngine.index(index); assertThat(indexResult.hasFailure(), equalTo(false)); replicaEngine.refresh("test"); @@ -2684,20 +2743,20 @@ public class InternalEngineTests extends ESTestCase { public void testRetryWithAutogeneratedIdsAndWrongOrderWorksAndNoDuplicateDocs() throws IOException { - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); + final ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); boolean isRetry = true; long autoGeneratedIdTimestamp = 0; - Engine.Index firstIndexRequest = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index firstIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult result = engine.index(firstIndexRequest); assertThat(result.getVersion(), equalTo(1L)); - Engine.Index firstIndexRequestReplica = new Engine.Index(newUid("1"), doc, result.getSeqNo(), firstIndexRequest.primaryTerm(), result.getVersion(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index firstIndexRequestReplica = new Engine.Index(newUid(doc), doc, result.getSeqNo(), firstIndexRequest.primaryTerm(), result.getVersion(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult indexReplicaResult = replicaEngine.index(firstIndexRequestReplica); assertThat(indexReplicaResult.getVersion(), equalTo(1L)); isRetry = false; - Engine.Index secondIndexRequest = new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index secondIndexRequest = new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); Engine.IndexResult indexResult = engine.index(secondIndexRequest); assertTrue(indexResult.isCreated()); engine.refresh("test"); @@ -2706,7 +2765,7 @@ public class InternalEngineTests extends ESTestCase { assertEquals(1, topDocs.totalHits); } - Engine.Index secondIndexRequestReplica = new Engine.Index(newUid("1"), doc, result.getSeqNo(), secondIndexRequest.primaryTerm(), result.getVersion(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); + Engine.Index secondIndexRequestReplica = new Engine.Index(newUid(doc), 
doc, result.getSeqNo(), secondIndexRequest.primaryTerm(), result.getVersion(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, isRetry); replicaEngine.index(secondIndexRequestReplica); replicaEngine.refresh("test"); try (Engine.Searcher searcher = replicaEngine.acquireSearcher("test")) { @@ -2715,11 +2774,13 @@ public class InternalEngineTests extends ESTestCase { } } - public Engine.Index randomAppendOnly(int docId, ParsedDocument doc, boolean retry) { + public Engine.Index randomAppendOnly(ParsedDocument doc, boolean retry, final long autoGeneratedIdTimestamp) { if (randomBoolean()) { - return new Engine.Index(newUid(Integer.toString(docId)), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), docId, retry); + return new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, Versions.MATCH_ANY, + VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), autoGeneratedIdTimestamp, retry); } - return new Engine.Index(newUid(Integer.toString(docId)), doc, 0, 0, 1, VersionType.EXTERNAL, Engine.Operation.Origin.REPLICA, System.nanoTime(), docId, retry); + return new Engine.Index(newUid(doc), doc, 0, 0, 1, VersionType.EXTERNAL, + Engine.Operation.Origin.REPLICA, System.nanoTime(), autoGeneratedIdTimestamp, retry); } public void testRetryConcurrently() throws InterruptedException, IOException { @@ -2727,9 +2788,9 @@ public class InternalEngineTests extends ESTestCase { int numDocs = randomIntBetween(1000, 10000); List<Engine.Index> docs = new ArrayList<>(); for (int i = 0; i < numDocs; i++) { - final ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); - Engine.Index originalIndex = randomAppendOnly(i, doc, false); - Engine.Index retryIndex = randomAppendOnly(i, doc, true); + final ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); + Engine.Index originalIndex = randomAppendOnly(doc, false, i); + Engine.Index retryIndex = randomAppendOnly(doc, true, i); docs.add(originalIndex); docs.add(retryIndex); } @@ -2737,21 +2798,18 @@ public class InternalEngineTests extends ESTestCase { CountDownLatch startGun = new CountDownLatch(thread.length); AtomicInteger offset = new AtomicInteger(-1); for (int i = 0; i < thread.length; i++) { - thread[i] = new Thread() { - @Override - public void run() { - startGun.countDown(); - try { - startGun.await(); - } catch (InterruptedException e) { - throw new AssertionError(e); - } - int docOffset; - while ((docOffset = offset.incrementAndGet()) < docs.size()) { - engine.index(docs.get(docOffset)); - } + thread[i] = new Thread(() -> { + startGun.countDown(); + try { + startGun.await(); + } catch (InterruptedException e) { + throw new AssertionError(e); } - }; + int docOffset; + while ((docOffset = offset.incrementAndGet()) < docs.size()) { + engine.index(docs.get(docOffset)); + } + }); thread[i].start(); } for (int i = 0; i < thread.length; i++) { @@ -2790,8 +2848,8 @@ public class InternalEngineTests extends ESTestCase { assertEquals(0, engine.getNumIndexVersionsLookups()); List<Engine.Index> docs = new ArrayList<>(); for (int i = 0; i < numDocs; i++) { - final ParsedDocument doc = testParsedDocument(Integer.toString(i), Integer.toString(i), 
"test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); - Engine.Index index = randomAppendOnly(i, doc, false); + final ParsedDocument doc = testParsedDocument(Integer.toString(i), "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); + Engine.Index index = randomAppendOnly(doc, false, i); docs.add(index); } Collections.shuffle(docs, random()); @@ -2862,16 +2920,16 @@ public class InternalEngineTests extends ESTestCase { } catch (InterruptedException e) { throw new AssertionError(e); } - throw new AlreadyClosedException("boom"); + throw new ElasticsearchException("something completely different"); } } }); InternalEngine internalEngine = new InternalEngine(config); int docId = 0; - final ParsedDocument doc = testParsedDocument(Integer.toString(docId), Integer.toString(docId), "test", null, + final ParsedDocument doc = testParsedDocument(Integer.toString(docId), "test", null, testDocumentWithTextField(), new BytesArray("{}".getBytes(Charset.defaultCharset())), null); - Engine.Index index = randomAppendOnly(docId, doc, false); + Engine.Index index = randomBoolean() ? indexForDoc(doc) : randomAppendOnly(doc, false, docId); internalEngine.index(index); Runnable r = () -> { try { @@ -2882,11 +2940,11 @@ public class InternalEngineTests extends ESTestCase { try { internalEngine.refresh("test"); fail(); - } catch (EngineClosedException ex) { - // we can't guarantee that we are entering the refresh call before it's fully - // closed so we also expecting ECE here - assertTrue(ex.toString(), ex.getCause() instanceof MockDirectoryWrapper.FakeIOException); - } catch (RefreshFailedEngineException | AlreadyClosedException ex) { + } catch (AlreadyClosedException ex) { + if (ex.getCause() != null) { + assertTrue(ex.toString(), ex.getCause() instanceof MockDirectoryWrapper.FakeIOException); + } + } catch (RefreshFailedEngineException ex) { // fine } finally { start.countDown(); @@ -2913,11 +2971,11 @@ public class InternalEngineTests extends ESTestCase { // create a document Document document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, document, B_1, null); + engine.index(indexForDoc(doc)); engine.refresh("test"); - seqID = getSequenceID(engine, new Engine.Get(false, newUid("1"))); + seqID = getSequenceID(engine, new Engine.Get(false, newUid(doc))); logger.info("--> got seqID: {}", seqID); assertThat(seqID.v1(), equalTo(0L)); assertThat(seqID.v2(), equalTo(0L)); @@ -2925,11 +2983,11 @@ public class InternalEngineTests extends ESTestCase { // Index the same document again document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - engine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_1, null); + engine.index(indexForDoc(doc)); engine.refresh("test"); - seqID = getSequenceID(engine, new Engine.Get(false, newUid("1"))); + seqID = getSequenceID(engine, new Engine.Get(false, newUid(doc))); logger.info("--> got seqID: {}", seqID); assertThat(seqID.v1(), equalTo(1L)); assertThat(seqID.v2(), 
equalTo(0L)); @@ -2937,13 +2995,13 @@ public class InternalEngineTests extends ESTestCase { // Index the same document for the third time, this time changing the primary term document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - engine.index(new Engine.Index(newUid("1"), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 1, + doc = testParsedDocument("1", "test", null, document, B_1, null); + engine.index(new Engine.Index(newUid(doc), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 1, Versions.MATCH_ANY, VersionType.INTERNAL, Engine.Operation.Origin.PRIMARY, System.nanoTime(), -1, false)); engine.refresh("test"); - seqID = getSequenceID(engine, new Engine.Get(false, newUid("1"))); + seqID = getSequenceID(engine, new Engine.Get(false, newUid(doc))); logger.info("--> got seqID: {}", seqID); assertThat(seqID.v1(), equalTo(2L)); assertThat(seqID.v2(), equalTo(1L)); @@ -2994,11 +3052,10 @@ public class InternalEngineTests extends ESTestCase { final InternalEngine finalInitialEngine = initialEngine; for (int i = 0; i < docs; i++) { final String id = Integer.toString(i); - final Term uid = newUid(id); - final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); + final ParsedDocument doc = testParsedDocument(id, "test", null, testDocumentWithTextField(), SOURCE, null); skip.set(randomBoolean()); - final Thread thread = new Thread(() -> finalInitialEngine.index(new Engine.Index(uid, doc))); + final Thread thread = new Thread(() -> finalInitialEngine.index(indexForDoc(doc))); thread.start(); if (skip.get()) { threads.add(thread); @@ -3036,8 +3093,8 @@ public class InternalEngineTests extends ESTestCase { initialEngine = engine; for (int i = 0; i < docs; i++) { final String id = Integer.toString(i); - final Term uid = newUid(id); - final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); + final ParsedDocument doc = testParsedDocument(id, "test", null, testDocumentWithTextField(), SOURCE, null); + final Term uid = newUid(doc); // create a gap at sequence number 3 * i + 1 initialEngine.index(new Engine.Index(uid, doc, 3 * i, 1, v, t, REPLICA, System.nanoTime(), ts, false)); initialEngine.delete(new Engine.Delete("type", id, uid, 3 * i + 2, 1, v, t, REPLICA, System.nanoTime())); @@ -3050,8 +3107,8 @@ public class InternalEngineTests extends ESTestCase { for (int i = 0; i < docs; i++) { final String id = Integer.toString(i); - final Term uid = newUid(id); - final ParsedDocument doc = testParsedDocument(id, id, "test", null, testDocumentWithTextField(), SOURCE, null); + final ParsedDocument doc = testParsedDocument(id, "test", null, testDocumentWithTextField(), SOURCE, null); + final Term uid = newUid(doc); initialEngine.index(new Engine.Index(uid, doc, 3 * i + 1, 1, v, t, REPLICA, System.nanoTime(), ts, false)); } } finally { @@ -3068,14 +3125,14 @@ public class InternalEngineTests extends ESTestCase { final List<Engine.Operation> operations = new ArrayList<>(); final int numberOfOperations = randomIntBetween(16, 32); - final Term uid = newUid("1"); final Document document = testDocumentWithTextField(); final AtomicLong sequenceNumber = new AtomicLong(); final Engine.Operation.Origin origin = randomFrom(LOCAL_TRANSLOG_RECOVERY, PEER_RECOVERY, PRIMARY, REPLICA); final LongSupplier sequenceNumberSupplier = origin == PRIMARY ? 
() -> SequenceNumbersService.UNASSIGNED_SEQ_NO : sequenceNumber::getAndIncrement; document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - final ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); + final ParsedDocument doc = testParsedDocument("1", "test", null, document, B_1, null); + final Term uid = newUid(doc); for (int i = 0; i < numberOfOperations; i++) { if (randomBoolean()) { final Engine.Index index = new Engine.Index( diff --git a/core/src/test/java/org/elasticsearch/index/engine/ShadowEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/ShadowEngineTests.java index a7470666d63..cc92d9bd9c2 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/ShadowEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/ShadowEngineTests.java @@ -33,6 +33,7 @@ import org.apache.lucene.index.SnapshotDeletionPolicy; import org.apache.lucene.index.Term; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.TermQuery; +import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.store.Directory; import org.apache.lucene.store.MockDirectoryWrapper; import org.apache.lucene.util.IOUtils; @@ -44,21 +45,18 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.bytes.BytesArray; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.lucene.Lucene; -import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.BigArrays; import org.elasticsearch.index.IndexSettings; -import org.elasticsearch.index.VersionType; import org.elasticsearch.index.codec.CodecService; import org.elasticsearch.index.mapper.Mapping; import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.SeqNoFieldMapper; import org.elasticsearch.index.mapper.SourceFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; -import org.elasticsearch.index.seqno.SequenceNumbersService; -import org.elasticsearch.index.shard.DocsStats; import org.elasticsearch.index.shard.RefreshListeners; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.shard.ShardUtils; @@ -83,7 +81,6 @@ import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicBoolean; -import static org.elasticsearch.index.engine.Engine.Operation.Origin.PRIMARY; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.hasKey; @@ -172,8 +169,8 @@ public class ShadowEngineTests extends ESTestCase { } - private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, ParseContext.Document document, BytesReference source, Mapping mappingsUpdate) { - Field uidField = new Field("_uid", uid, UidFieldMapper.Defaults.FIELD_TYPE); + private ParsedDocument testParsedDocument(String id, String type, String routing, ParseContext.Document document, BytesReference source, Mapping mappingsUpdate) { + Field uidField = new Field("_uid", Uid.createUid(type, id), UidFieldMapper.Defaults.FIELD_TYPE); Field versionField = new NumericDocValuesField("_version", 0); SeqNoFieldMapper.SequenceID seqID = SeqNoFieldMapper.SequenceID.emptySeqID(); 
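The switch above from a caller-supplied uid string to `Uid.createUid(type, id)` is what lets `newUid(ParsedDocument)` rebuild the lookup term from the document itself: `_uid` is the composite `type#id` key. Roughly:

---------------------------------------------------------------------------
// Assuming the classic _uid encoding (type + '#' + id):
String uid = Uid.createUid("test", "1");          // "test#1"
Term term = new Term(UidFieldMapper.NAME, uid);   // same term newUid(doc) builds
---------------------------------------------------------------------------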
document.add(uidField); @@ -254,8 +251,12 @@ public class ShadowEngineTests extends ESTestCase { return config; } - protected Term newUid(String id) { - return new Term("_uid", id); - } + protected Term newUid(ParsedDocument doc) { + return new Term("_uid", doc.uid()); + } + + private Engine.Index indexForDoc(ParsedDocument doc) { + return new Engine.Index(newUid(doc), doc); + } protected static final BytesReference B_1 = new BytesArray(new byte[]{1}); @@ -264,8 +269,8 @@ public class ShadowEngineTests extends ESTestCase { public void testCommitStats() { // create a doc and refresh - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + primaryEngine.index(indexForDoc(doc)); CommitStats stats1 = replicaEngine.commitStats(); assertThat(stats1.getGeneration(), greaterThan(0L)); @@ -296,11 +301,11 @@ public class ShadowEngineTests extends ESTestCase { assertThat(primaryEngine.segmentsStats(false).getMemoryInBytes(), equalTo(0L)); // create a doc and refresh - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + primaryEngine.index(indexForDoc(doc)); - ParsedDocument doc2 = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), B_2, null); - primaryEngine.index(new Engine.Index(newUid("2"), doc2)); + ParsedDocument doc2 = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_2, null); + primaryEngine.index(indexForDoc(doc2)); primaryEngine.refresh("test"); segments = primaryEngine.segments(false); @@ -358,8 +363,8 @@ public class ShadowEngineTests extends ESTestCase { assertThat(segments.get(0).isCompound(), equalTo(true)); - ParsedDocument doc3 = testParsedDocument("3", "3", "test", null, testDocumentWithTextField(), B_3, null); - primaryEngine.index(new Engine.Index(newUid("3"), doc3)); + ParsedDocument doc3 = testParsedDocument("3", "test", null, testDocumentWithTextField(), B_3, null); + primaryEngine.index(indexForDoc(doc3)); primaryEngine.refresh("test"); segments = primaryEngine.segments(false); @@ -408,7 +413,7 @@ public class ShadowEngineTests extends ESTestCase { assertThat(segments.get(1).getDeletedDocs(), equalTo(0)); assertThat(segments.get(1).isCompound(), equalTo(true)); - primaryEngine.delete(new Engine.Delete("test", "1", newUid("1"))); + primaryEngine.delete(new Engine.Delete("test", "1", newUid(doc))); primaryEngine.refresh("test"); segments = primaryEngine.segments(false); @@ -430,8 +435,8 @@ public class ShadowEngineTests extends ESTestCase { primaryEngine.flush(); replicaEngine.refresh("test"); - ParsedDocument doc4 = testParsedDocument("4", "4", "test", null, testDocumentWithTextField(), B_3, null); - primaryEngine.index(new Engine.Index(newUid("4"), doc4)); + ParsedDocument doc4 = testParsedDocument("4", "test", null, testDocumentWithTextField(), B_3, null); + primaryEngine.index(indexForDoc(doc4)); primaryEngine.refresh("test"); segments = primaryEngine.segments(false); @@ -463,19 +468,19 @@ public class ShadowEngineTests extends ESTestCase { List<Segment> segments = primaryEngine.segments(true); assertThat(segments.isEmpty(), equalTo(true)); - 
ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + primaryEngine.index(indexForDoc(doc)); primaryEngine.refresh("test"); segments = primaryEngine.segments(true); assertThat(segments.size(), equalTo(1)); assertThat(segments.get(0).ramTree, notNullValue()); - ParsedDocument doc2 = testParsedDocument("2", "2", "test", null, testDocumentWithTextField(), B_2, null); - primaryEngine.index(new Engine.Index(newUid("2"), doc2)); + ParsedDocument doc2 = testParsedDocument("2", "test", null, testDocumentWithTextField(), B_2, null); + primaryEngine.index(indexForDoc(doc2)); primaryEngine.refresh("test"); - ParsedDocument doc3 = testParsedDocument("3", "3", "test", null, testDocumentWithTextField(), B_3, null); - primaryEngine.index(new Engine.Index(newUid("3"), doc3)); + ParsedDocument doc3 = testParsedDocument("3", "test", null, testDocumentWithTextField(), B_3, null); + primaryEngine.index(indexForDoc(doc3)); primaryEngine.refresh("test"); segments = primaryEngine.segments(true); @@ -500,9 +505,9 @@ public class ShadowEngineTests extends ESTestCase { // create a document ParseContext.Document document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); + ParsedDocument doc = testParsedDocument("1", "test", null, document, B_1, null); try { - replicaEngine.index(new Engine.Index(newUid("1"), doc)); + replicaEngine.index(indexForDoc(doc)); fail("should have thrown an exception"); } catch (UnsupportedOperationException e) {} replicaEngine.refresh("test"); @@ -512,16 +517,16 @@ public class ShadowEngineTests extends ESTestCase { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(0)); MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test")), 0)); searchResult.close(); - Engine.GetResult getResult = replicaEngine.get(new Engine.Get(true, newUid("1"))); + Engine.GetResult getResult = replicaEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); getResult.release(); // index a document document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); + doc = testParsedDocument("1", "test", null, document, B_1, null); try { - replicaEngine.index(new Engine.Index(newUid("1"), doc)); + replicaEngine.index(indexForDoc(doc)); fail("should have thrown an exception"); } catch (UnsupportedOperationException e) {} replicaEngine.refresh("test"); @@ -531,15 +536,15 @@ public class ShadowEngineTests extends ESTestCase { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(0)); MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test")), 0)); searchResult.close(); - getResult = replicaEngine.get(new Engine.Get(true, newUid("1"))); + getResult = replicaEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); getResult.release(); // Now, add a document to the primary so we can test shadow engine deletes document = 
testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_1, null); + primaryEngine.index(indexForDoc(doc)); primaryEngine.flush(); replicaEngine.refresh("test"); @@ -550,14 +555,14 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // And the replica can retrieve it - getResult = replicaEngine.get(new Engine.Get(false, newUid("1"))); + getResult = replicaEngine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); // try to delete it on the replica try { - replicaEngine.delete(new Engine.Delete("test", "1", newUid("1"))); + replicaEngine.delete(new Engine.Delete("test", "1", newUid(doc))); fail("should have thrown an exception"); } catch (UnsupportedOperationException e) {} replicaEngine.flush(); @@ -569,7 +574,7 @@ public class ShadowEngineTests extends ESTestCase { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(1)); MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test")), 1)); searchResult.close(); - getResult = replicaEngine.get(new Engine.Get(false, newUid("1"))); + getResult = replicaEngine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -579,7 +584,7 @@ public class ShadowEngineTests extends ESTestCase { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(1)); MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test")), 1)); searchResult.close(); - getResult = primaryEngine.get(new Engine.Get(false, newUid("1"))); + getResult = primaryEngine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -593,8 +598,8 @@ public class ShadowEngineTests extends ESTestCase { // create a document ParseContext.Document document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, document, B_1, null); + primaryEngine.index(indexForDoc(doc)); // its not there... 
searchResult = primaryEngine.acquireSearcher("test"); @@ -609,18 +614,18 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // but, we can still get it (in realtime) - Engine.GetResult getResult = primaryEngine.get(new Engine.Get(true, newUid("1"))); + Engine.GetResult getResult = primaryEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); // can't get it from the replica, because it's not in the translog for a shadow replica - getResult = replicaEngine.get(new Engine.Get(true, newUid("1"))); + getResult = replicaEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); getResult.release(); // but, not there non realtime - getResult = primaryEngine.get(new Engine.Get(false, newUid("1"))); + getResult = primaryEngine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); getResult.release(); @@ -631,7 +636,7 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // also in non realtime - getResult = primaryEngine.get(new Engine.Get(false, newUid("1"))); + getResult = primaryEngine.get(new Engine.Get(false, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -646,8 +651,8 @@ public class ShadowEngineTests extends ESTestCase { document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_2), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_2, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_2, null); + primaryEngine.index(indexForDoc(doc)); // its not updated yet... 
searchResult = primaryEngine.acquireSearcher("test"); @@ -657,7 +662,7 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // but, we can still get it (in realtime) - getResult = primaryEngine.get(new Engine.Get(true, newUid("1"))); + getResult = primaryEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -690,7 +695,7 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // now delete - primaryEngine.delete(new Engine.Delete("test", "1", newUid("1"))); + primaryEngine.delete(new Engine.Delete("test", "1", newUid(doc))); // its not deleted yet searchResult = primaryEngine.acquireSearcher("test"); @@ -700,7 +705,7 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // but, get should not see it (in realtime) - getResult = primaryEngine.get(new Engine.Get(true, newUid("1"))); + getResult = primaryEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(false)); getResult.release(); @@ -716,8 +721,8 @@ public class ShadowEngineTests extends ESTestCase { // add it back document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_1, null); + primaryEngine.index(indexForDoc(doc)); // its not there... searchResult = primaryEngine.acquireSearcher("test"); @@ -740,7 +745,7 @@ public class ShadowEngineTests extends ESTestCase { primaryEngine.flush(); // and, verify get (in real time) - getResult = primaryEngine.get(new Engine.Get(true, newUid("1"))); + getResult = primaryEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -752,7 +757,7 @@ public class ShadowEngineTests extends ESTestCase { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test")), 1)); MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test1")), 0)); searchResult.close(); - getResult = replicaEngine.get(new Engine.Get(true, newUid("1"))); + getResult = replicaEngine.get(new Engine.Get(true, newUid(doc))); assertThat(getResult.exists(), equalTo(true)); assertThat(getResult.docIdAndVersion(), notNullValue()); getResult.release(); @@ -761,8 +766,8 @@ public class ShadowEngineTests extends ESTestCase { // now do an update document = testDocument(); document.add(new TextField("value", "test1", Field.Store.YES)); - doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + doc = testParsedDocument("1", "test", null, document, B_1, null); + primaryEngine.index(indexForDoc(doc)); // its not updated yet... 
searchResult = primaryEngine.acquireSearcher("test"); @@ -797,8 +802,8 @@ public class ShadowEngineTests extends ESTestCase { searchResult.close(); // create a document - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + primaryEngine.index(indexForDoc(doc)); // its not there... searchResult = primaryEngine.acquireSearcher("test"); @@ -827,7 +832,7 @@ public class ShadowEngineTests extends ESTestCase { // don't release the replica search result yet... // delete, refresh and do a new search, it should not be there - primaryEngine.delete(new Engine.Delete("test", "1", newUid("1"))); + primaryEngine.delete(new Engine.Delete(doc.type(), doc.id(), newUid(doc))); primaryEngine.flush(); primaryEngine.refresh("test"); replicaEngine.refresh("test"); @@ -842,8 +847,8 @@ public class ShadowEngineTests extends ESTestCase { } public void testFailEngineOnCorruption() { - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + primaryEngine.index(indexForDoc(doc)); primaryEngine.flush(); MockDirectoryWrapper leaf = DirectoryUtils.getLeaf(replicaEngine.config().getStore().directory(), MockDirectoryWrapper.class); leaf.setRandomIOExceptionRate(1.0); @@ -860,7 +865,7 @@ public class ShadowEngineTests extends ESTestCase { MatcherAssert.assertThat(searchResult, EngineSearcherTotalHitsMatcher.engineSearcherTotalHits(new TermQuery(new Term("value", "test")), 1)); searchResult.close(); fail("exception expected"); - } catch (EngineClosedException ex) { + } catch (AlreadyClosedException ex) { // all is well } } @@ -879,8 +884,8 @@ public class ShadowEngineTests extends ESTestCase { */ public void testFailStart() throws IOException { // Need a commit point for this - ParsedDocument doc = testParsedDocument("1", "1", "test", null, testDocumentWithTextField(), B_1, null); - primaryEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, testDocumentWithTextField(), B_1, null); + primaryEngine.index(indexForDoc(doc)); primaryEngine.flush(); // this test fails if any reader, searcher or directory is not closed - MDW FTW @@ -965,8 +970,8 @@ public class ShadowEngineTests extends ESTestCase { // create a document ParseContext.Document document = testDocumentWithTextField(); document.add(new Field(SourceFieldMapper.NAME, BytesReference.toBytes(B_1), SourceFieldMapper.Defaults.FIELD_TYPE)); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, document, B_1, null); - pEngine.index(new Engine.Index(newUid("1"), doc)); + ParsedDocument doc = testParsedDocument("1", "test", null, document, B_1, null); + pEngine.index(indexForDoc(doc)); pEngine.flush(true, true); t.join(); diff --git a/core/src/test/java/org/elasticsearch/index/mapper/TextFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/TextFieldMapperTests.java index e234beb7904..4df4361db6a 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/TextFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/TextFieldMapperTests.java @@ -214,7 +214,7 @@ public class TextFieldMapperTests extends ESSingleNodeTestCase { assertEquals("b", 
fields[1].stringValue()); IndexShard shard = indexService.getShard(0); - shard.index(new Engine.Index(new Term("_uid", "1"), doc)); + shard.index(new Engine.Index(new Term("_uid", doc.uid()), doc)); shard.refresh("test"); try (Engine.Searcher searcher = shard.acquireSearcher("test")) { LeafReader leaf = searcher.getDirectoryReader().leaves().get(0).reader(); @@ -253,7 +253,7 @@ public class TextFieldMapperTests extends ESSingleNodeTestCase { assertEquals("b", fields[1].stringValue()); IndexShard shard = indexService.getShard(0); - shard.index(new Engine.Index(new Term("_uid", "1"), doc)); + shard.index(new Engine.Index(new Term("_uid", doc.uid()), doc)); shard.refresh("test"); try (Engine.Searcher searcher = shard.acquireSearcher("test")) { LeafReader leaf = searcher.getDirectoryReader().leaves().get(0).reader(); diff --git a/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java b/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java index c17935375e8..3e35ed357ff 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java +++ b/core/src/test/java/org/elasticsearch/index/shard/IndexShardIT.java @@ -56,6 +56,7 @@ import org.elasticsearch.index.mapper.Mapping; import org.elasticsearch.index.mapper.ParseContext; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.SeqNoFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.translog.Translog; @@ -100,9 +101,9 @@ public class IndexShardIT extends ESSingleNodeTestCase { return pluginList(InternalSettingsPlugin.class); } - private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, long seqNo, + private ParsedDocument testParsedDocument(String id, String type, String routing, long seqNo, ParseContext.Document document, BytesReference source, Mapping mappingUpdate) { - Field uidField = new Field("_uid", uid, UidFieldMapper.Defaults.FIELD_TYPE); + Field uidField = new Field("_uid", Uid.createUid(type, id), UidFieldMapper.Defaults.FIELD_TYPE); Field versionField = new NumericDocValuesField("_version", 0); SeqNoFieldMapper.SequenceID seqID = SeqNoFieldMapper.SequenceID.emptySeqID(); document.add(uidField); @@ -325,14 +326,13 @@ public class IndexShardIT extends ESSingleNodeTestCase { client().prepareIndex("test", "test", "0").setSource("{}").setRefreshPolicy(randomBoolean() ?
IMMEDIATE : NONE).get(); assertFalse(shard.shouldFlush()); ParsedDocument doc = testParsedDocument( - "1", "1", "test", null, SequenceNumbersService.UNASSIGNED_SEQ_NO, new ParseContext.Document(), new BytesArray(new byte[]{1}), null); - Engine.Index index = new Engine.Index(new Term("_uid", "1"), doc); + "1", "test", null, SequenceNumbersService.UNASSIGNED_SEQ_NO, new ParseContext.Document(), new BytesArray(new byte[]{1}), null); + Engine.Index index = new Engine.Index(new Term("_uid", doc.uid()), doc); shard.index(index); assertTrue(shard.shouldFlush()); assertEquals(2, shard.getEngine().getTranslog().totalOperations()); diff --git a/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java index d4e27e857de..79e3868da46 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/IndexShardTests.java @@ -29,6 +29,7 @@ import org.apache.lucene.index.Term; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.TermQuery; import org.apache.lucene.search.TopDocs; +import org.apache.lucene.store.AlreadyClosedException; import org.apache.lucene.store.IOContext; import org.apache.lucene.util.Constants; import org.elasticsearch.Version; @@ -547,9 +548,9 @@ public class IndexShardTests extends IndexShardTestCase { closeShards(shard); } - private ParsedDocument testParsedDocument(String uid, String id, String type, String routing, + private ParsedDocument testParsedDocument(String id, String type, String routing, ParseContext.Document document, BytesReference source, Mapping mappingUpdate) { - Field uidField = new Field("_uid", uid, UidFieldMapper.Defaults.FIELD_TYPE); + Field uidField = new Field("_uid", Uid.createUid(type, id), UidFieldMapper.Defaults.FIELD_TYPE); Field versionField = new NumericDocValuesField("_version", 0); SeqNoFieldMapper.SequenceID seqID = SeqNoFieldMapper.SequenceID.emptySeqID(); document.add(uidField); @@ -619,9 +620,9 @@ public class IndexShardTests extends IndexShardTestCase { }); recoveryShardFromStore(shard); - ParsedDocument doc = testParsedDocument("1", "1", "test", null, new ParseContext.Document(), + ParsedDocument doc = testParsedDocument("1", "test", null, new ParseContext.Document(), new BytesArray(new byte[]{1}), null); - Engine.Index index = new Engine.Index(new Term("_uid", "1"), doc); + Engine.Index index = new Engine.Index(new Term("_uid", doc.uid()), doc); shard.index(index); assertEquals(1, preIndex.get()); assertEquals(1, postIndexCreate.get()); @@ -640,7 +641,7 @@ public class IndexShardTests extends IndexShardTestCase { assertEquals(0, postDelete.get()); assertEquals(0, postDeleteException.get()); - Engine.Delete delete = new Engine.Delete("test", "1", new Term("_uid", "1")); + Engine.Delete delete = new Engine.Delete("test", "1", new Term("_uid", doc.uid())); shard.delete(delete); assertEquals(2, preIndex.get()); @@ -657,7 +658,7 @@ try { shard.index(index); fail(); - } catch (IllegalIndexShardStateException e) { + } catch (AlreadyClosedException e) { } @@ -671,7 +672,7 @@ try { shard.delete(delete); fail(); - } catch (IllegalIndexShardStateException e) { + } catch (AlreadyClosedException e) { } @@ -1376,10 +1377,10 @@ for (int i = 0; i < numDocs; i++) { final String id = Integer.toString(i); final ParsedDocument doc = - testParsedDocument(id, id, "test", null, new ParseContext.Document(), new BytesArray("{}"), null); +
testParsedDocument(id, "test", null, new ParseContext.Document(), new BytesArray("{}"), null); final Engine.Index index = new Engine.Index( - new Term("_uid", id), + new Term("_uid", doc.uid()), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, @@ -1406,10 +1407,10 @@ for (final Integer i : ids) { final String id = Integer.toString(i); final ParsedDocument doc = - testParsedDocument(id, id, "test", null, new ParseContext.Document(), new BytesArray("{}"), null); + testParsedDocument(id, "test", null, new ParseContext.Document(), new BytesArray("{}"), null); final Engine.Index index = new Engine.Index( - new Term("_uid", id), + new Term("_uid", doc.uid()), doc, SequenceNumbersService.UNASSIGNED_SEQ_NO, 0, diff --git a/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java b/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java index 3d5a9fdf137..2eb91a16d80 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/IndexingOperationListenerTests.java @@ -21,6 +21,8 @@ package org.elasticsearch.index.shard; import org.apache.lucene.index.Term; import org.elasticsearch.index.Index; import org.elasticsearch.index.engine.Engine; +import org.elasticsearch.index.engine.InternalEngineTests; +import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.test.ESTestCase; @@ -131,9 +133,10 @@ public class IndexingOperationListenerTests extends ESTestCase{ } Collections.shuffle(indexingOperationListeners, random()); IndexingOperationListener.CompositeListener compositeListener = - new IndexingOperationListener.CompositeListener(indexingOperationListeners, logger); - Engine.Delete delete = new Engine.Delete("test", "1", new Term("_uid", "1")); - Engine.Index index = new Engine.Index(new Term("_uid", "1"), null); + new IndexingOperationListener.CompositeListener(indexingOperationListeners, logger); + ParsedDocument doc = InternalEngineTests.createParsedDoc("1", "test", null); + Engine.Delete delete = new Engine.Delete("test", "1", new Term("_uid", doc.uid())); + Engine.Index index = new Engine.Index(new Term("_uid", doc.uid()), doc); compositeListener.postDelete(randomShardId, delete, new Engine.DeleteResult(1, SequenceNumbersService.UNASSIGNED_SEQ_NO, true)); assertEquals(0, preIndex.get()); assertEquals(0, postIndex.get()); diff --git a/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java b/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java index c1e2605ec21..e95d7ace10b 100644 --- a/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java +++ b/core/src/test/java/org/elasticsearch/index/shard/RefreshListenersTests.java @@ -48,6 +48,7 @@ import org.elasticsearch.index.fieldvisitor.SingleFieldsVisitor; import org.elasticsearch.index.mapper.ParseContext.Document; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.SeqNoFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.store.DirectoryService; import org.elasticsearch.index.store.Store; @@ -297,7 +298,7 @@ public class RefreshListenersTests extends ESTestCase { } listener.assertNoError(); - Engine.Get get = new Engine.Get(false, new Term("_uid", "test:" + threadId)); +
Engine.Get get = new Engine.Get(false, new Term("_uid", Uid.createUid("test", threadId))); try (Engine.GetResult getResult = engine.get(get)) { assertTrue("document not found", getResult.exists()); assertEquals(iteration, getResult.version()); @@ -328,7 +329,7 @@ public class RefreshListenersTests extends ESTestCase { String uid = type + ":" + id; Document document = new Document(); document.add(new TextField("test", testFieldValue, Field.Store.YES)); - Field uidField = new Field("_uid", type + ":" + id, UidFieldMapper.Defaults.FIELD_TYPE); + Field uidField = new Field("_uid", Uid.createUid(type, id), UidFieldMapper.Defaults.FIELD_TYPE); Field versionField = new NumericDocValuesField("_version", Versions.MATCH_ANY); SeqNoFieldMapper.SequenceID seqID = SeqNoFieldMapper.SequenceID.emptySeqID(); document.add(uidField); @@ -338,7 +339,7 @@ public class RefreshListenersTests extends ESTestCase { document.add(seqID.primaryTerm); BytesReference source = new BytesArray(new byte[] { 1 }); ParsedDocument doc = new ParsedDocument(versionField, seqID, id, type, null, Arrays.asList(document), source, null); - Engine.Index index = new Engine.Index(new Term("_uid", uid), doc); + Engine.Index index = new Engine.Index(new Term("_uid", doc.uid()), doc); return engine.index(index); } diff --git a/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java b/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java index a3e3f611b21..1a9d7c97dc3 100644 --- a/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java +++ b/core/src/test/java/org/elasticsearch/index/translog/TranslogTests.java @@ -56,6 +56,7 @@ import org.elasticsearch.index.engine.Engine.Operation.Origin; import org.elasticsearch.index.mapper.ParseContext.Document; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.index.mapper.SeqNoFieldMapper; +import org.elasticsearch.index.mapper.Uid; import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.ShardId; @@ -625,8 +626,12 @@ public class TranslogTests extends ESTestCase { } } - private Term newUid(String id) { - return new Term("_uid", id); + private Term newUid(ParsedDocument doc) { + return new Term("_uid", doc.uid()); + } + + private Term newUid(String uid) { + return new Term("_uid", uid); } public void testVerifyTranslogIsNotDeleted() throws IOException { @@ -2014,7 +2019,7 @@ public class TranslogTests extends ESTestCase { seqID.seqNo.setLongValue(randomSeqNum); seqID.seqNoDocValue.setLongValue(randomSeqNum); seqID.primaryTerm.setLongValue(randomPrimaryTerm); - Field uidField = new Field("_uid", "1", UidFieldMapper.Defaults.FIELD_TYPE); + Field uidField = new Field("_uid", Uid.createUid("test", "1"), UidFieldMapper.Defaults.FIELD_TYPE); Field versionField = new NumericDocValuesField("_version", 1); Document document = new Document(); document.add(new TextField("value", "test", Field.Store.YES)); @@ -2025,7 +2030,7 @@ public class TranslogTests extends ESTestCase { document.add(seqID.primaryTerm); ParsedDocument doc = new ParsedDocument(versionField, seqID, "1", "type", null, Arrays.asList(document), B_1, null); - Engine.Index eIndex = new Engine.Index(newUid("1"), doc, randomSeqNum, randomPrimaryTerm, + Engine.Index eIndex = new Engine.Index(newUid(doc), doc, randomSeqNum, randomPrimaryTerm, 1, VersionType.INTERNAL, Origin.PRIMARY, 0, 0, false); Engine.IndexResult eIndexResult = new Engine.IndexResult(1, randomSeqNum, true); 
Translog.Index index = new Translog.Index(eIndex, eIndexResult); @@ -2036,7 +2041,7 @@ public class TranslogTests extends ESTestCase { Translog.Index serializedIndex = new Translog.Index(in); assertEquals(index, serializedIndex); - Engine.Delete eDelete = new Engine.Delete("type", "1", newUid("1"), randomSeqNum, randomPrimaryTerm, + Engine.Delete eDelete = new Engine.Delete(doc.type(), doc.id(), newUid(doc), randomSeqNum, randomPrimaryTerm, 2, VersionType.INTERNAL, Origin.PRIMARY, 0); Engine.DeleteResult eDeleteResult = new Engine.DeleteResult(2, randomSeqNum, true); Translog.Delete delete = new Translog.Delete(eDelete, eDeleteResult); From ebd38e2a6ab4375368ea372707552f3218a6e1c8 Mon Sep 17 00:00:00 2001 From: Michael McCandless Date: Mon, 16 Jan 2017 16:53:32 -0500 Subject: [PATCH 25/28] Expose FlattenGraphTokenFilter (#22643) FlattenGraphTokenFilter is necessary for using graph-based token streams (e.g. the new SynonymGraphFilter) during indexing. --- .../FlattenGraphTokenFilterFactory.java | 38 ++++++++++ .../FlattenGraphTokenFilterFactoryTests.java | 73 +++++++++++++++++++ .../AnalysisFactoryTestCase.java | 4 +- 3 files changed, 113 insertions(+), 2 deletions(-) create mode 100644 core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java create mode 100644 core/src/test/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactoryTests.java diff --git a/core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java b/core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java new file mode 100644 index 00000000000..3af472f54bc --- /dev/null +++ b/core/src/main/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactory.java @@ -0,0 +1,38 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.analysis; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.synonym.FlattenGraphFilter; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.env.Environment; +import org.elasticsearch.index.IndexSettings; + +public class FlattenGraphTokenFilterFactory extends AbstractTokenFilterFactory { + + public FlattenGraphTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { + super(indexSettings, name, settings); + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return new FlattenGraphFilter(tokenStream); + } +} diff --git a/core/src/test/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactoryTests.java b/core/src/test/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactoryTests.java new file mode 100644 index 00000000000..259da010daa --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/analysis/FlattenGraphTokenFilterFactoryTests.java @@ -0,0 +1,73 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.analysis; + +import java.io.IOException; + +import org.apache.lucene.analysis.CannedTokenStream; +import org.apache.lucene.analysis.Token; +import org.apache.lucene.analysis.TokenStream; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.IndexSettings; +import org.elasticsearch.test.ESTokenStreamTestCase; +import org.elasticsearch.test.IndexSettingsModule; + +public class FlattenGraphTokenFilterFactoryTests extends ESTokenStreamTestCase { + + public void testBasic() throws IOException { + + Index index = new Index("test", "_na_"); + String name = "ngr"; + Settings indexSettings = newAnalysisSettingsBuilder().build(); + IndexSettings indexProperties = IndexSettingsModule.newIndexSettings(index, indexSettings); + Settings settings = newAnalysisSettingsBuilder().build(); + + // "wow that's funny" and "what the fudge" are separate side paths, in parallel with "wtf", on input: + TokenStream in = new CannedTokenStream(0, 12, new Token[] { + token("wtf", 1, 5, 0, 3), + token("what", 0, 1, 0, 3), + token("wow", 0, 3, 0, 3), + token("the", 1, 1, 0, 3), + token("fudge", 1, 3, 0, 3), + token("that's", 1, 1, 0, 3), + token("funny", 1, 1, 0, 3), + token("happened", 1, 1, 4, 12) + }); + + TokenStream tokens = new FlattenGraphTokenFilterFactory(indexProperties, null, name, settings).create(in); + + // ... 
but on output, it's flattened to wtf/what/wow that's/the fudge/funny happened: + assertTokenStreamContents(tokens, + new String[] {"wtf", "what", "wow", "the", "that's", "fudge", "funny", "happened"}, + new int[] {0, 0, 0, 0, 0, 0, 0, 4}, + new int[] {3, 3, 3, 3, 3, 3, 3, 12}, + new int[] {1, 0, 0, 1, 0, 1, 0, 1}, + new int[] {3, 1, 1, 1, 1, 1, 1, 1}, + 12); + } + + private static Token token(String term, int posInc, int posLength, int startOffset, int endOffset) { + final Token t = new Token(term, startOffset, endOffset); + t.setPositionIncrement(posInc); + t.setPositionLength(posLength); + return t; + } +} diff --git a/test/framework/src/main/java/org/elasticsearch/AnalysisFactoryTestCase.java b/test/framework/src/main/java/org/elasticsearch/AnalysisFactoryTestCase.java index 1634f049392..bdf36bbb85a 100644 --- a/test/framework/src/main/java/org/elasticsearch/AnalysisFactoryTestCase.java +++ b/test/framework/src/main/java/org/elasticsearch/AnalysisFactoryTestCase.java @@ -42,6 +42,7 @@ import org.elasticsearch.index.analysis.DelimitedPayloadTokenFilterFactory; import org.elasticsearch.index.analysis.EdgeNGramTokenFilterFactory; import org.elasticsearch.index.analysis.EdgeNGramTokenizerFactory; import org.elasticsearch.index.analysis.ElisionTokenFilterFactory; +import org.elasticsearch.index.analysis.FlattenGraphTokenFilterFactory; import org.elasticsearch.index.analysis.GermanNormalizationFilterFactory; import org.elasticsearch.index.analysis.GermanStemTokenFilterFactory; import org.elasticsearch.index.analysis.HindiNormalizationFilterFactory; @@ -248,6 +249,7 @@ public class AnalysisFactoryTestCase extends ESTestCase { .put("type", KeepTypesFilterFactory.class) .put("uppercase", UpperCaseTokenFilterFactory.class) .put("worddelimiter", WordDelimiterTokenFilterFactory.class) + .put("flattengraph", FlattenGraphTokenFilterFactory.class) // TODO: these tokenfilters are not yet exposed: useful? @@ -277,8 +279,6 @@ public class AnalysisFactoryTestCase extends ESTestCase { .put("fingerprint", Void.class) // for tee-sinks .put("daterecognizer", Void.class) - // to flatten graphs created by the synonym graph filter - .put("flattengraph", Void.class) .immutableMap(); From 16a76d9bc0ac8031662b708ed49d01e1b204ec0a Mon Sep 17 00:00:00 2001 From: Tim Brooks Date: Mon, 16 Jan 2017 18:38:51 -0600 Subject: [PATCH 26/28] Remove blocking TCP clients and servers (#22639) This commit removes the option to use the blocking variants of the TCP transport server, TCP transport client, or http server. 
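For operators, the practical consequence is that any node configuration still carrying the removed keys must drop them before upgrading, since the node will no longer recognize them. A minimal illustrative `elasticsearch.yml` fragment built from the settings deleted in this change (the values shown are examples only):

---------------------------------------------------------------------------
# elasticsearch.yml: every key below is removed by this change and must be deleted
network.tcp.blocking: false
network.tcp.blocking_server: false
network.tcp.blocking_client: false
transport.tcp.blocking_client: false
transport.tcp.blocking_server: false
http.tcp.blocking_server: false
---------------------------------------------------------------------------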
--- .../common/network/NetworkService.java | 6 --- .../common/settings/ClusterSettings.java | 5 -- .../http/HttpServerTransport.java | 1 - .../elasticsearch/transport/TcpTransport.java | 8 --- .../elasticsearch/transport/Transports.java | 3 -- .../migration/migrate_6_0/settings.asciidoc | 7 +++ .../netty4/Netty4HttpServerTransport.java | 20 ++----- .../elasticsearch/transport/Netty4Plugin.java | 1 - .../transport/netty4/Netty4Transport.java | 21 ++------ .../PrivilegedOioServerSocketChannel.java | 50 ----------------- .../channel/PrivilegedOioSocketChannel.java | 54 ------------------- 11 files changed, 15 insertions(+), 161 deletions(-) delete mode 100644 modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioServerSocketChannel.java delete mode 100644 modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioSocketChannel.java diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkService.java b/core/src/main/java/org/elasticsearch/common/network/NetworkService.java index a469de03208..a9d3dc4a336 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkService.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkService.java @@ -60,12 +60,6 @@ public class NetworkService extends AbstractComponent { Setting.byteSizeSetting("network.tcp.send_buffer_size", new ByteSizeValue(-1), Property.NodeScope); public static final Setting TCP_RECEIVE_BUFFER_SIZE = Setting.byteSizeSetting("network.tcp.receive_buffer_size", new ByteSizeValue(-1), Property.NodeScope); - public static final Setting TCP_BLOCKING = - Setting.boolSetting("network.tcp.blocking", false, Property.NodeScope); - public static final Setting TCP_BLOCKING_SERVER = - Setting.boolSetting("network.tcp.blocking_server", TCP_BLOCKING, Property.NodeScope); - public static final Setting TCP_BLOCKING_CLIENT = - Setting.boolSetting("network.tcp.blocking_client", TCP_BLOCKING, Property.NodeScope); public static final Setting TCP_CONNECT_TIMEOUT = Setting.timeSetting("network.tcp.connect_timeout", new TimeValue(30, TimeUnit.SECONDS), Property.NodeScope); } diff --git a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java index c9106b8cdba..34a574077b2 100644 --- a/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java +++ b/core/src/main/java/org/elasticsearch/common/settings/ClusterSettings.java @@ -273,7 +273,6 @@ public final class ClusterSettings extends AbstractScopedSettings { TcpTransport.CONNECTIONS_PER_NODE_STATE, TcpTransport.CONNECTIONS_PER_NODE_PING, TcpTransport.PING_SCHEDULE, - TcpTransport.TCP_BLOCKING_CLIENT, TcpTransport.TCP_CONNECT_TIMEOUT, NetworkService.NETWORK_SERVER, TcpTransport.TCP_NO_DELAY, @@ -281,7 +280,6 @@ public final class ClusterSettings extends AbstractScopedSettings { TcpTransport.TCP_REUSE_ADDRESS, TcpTransport.TCP_SEND_BUFFER_SIZE, TcpTransport.TCP_RECEIVE_BUFFER_SIZE, - TcpTransport.TCP_BLOCKING_SERVER, NetworkService.GLOBAL_NETWORK_HOST_SETTING, NetworkService.GLOBAL_NETWORK_BINDHOST_SETTING, NetworkService.GLOBAL_NETWORK_PUBLISHHOST_SETTING, @@ -290,9 +288,6 @@ public final class ClusterSettings extends AbstractScopedSettings { NetworkService.TcpSettings.TCP_REUSE_ADDRESS, NetworkService.TcpSettings.TCP_SEND_BUFFER_SIZE, NetworkService.TcpSettings.TCP_RECEIVE_BUFFER_SIZE, - NetworkService.TcpSettings.TCP_BLOCKING, - NetworkService.TcpSettings.TCP_BLOCKING_SERVER, - 
NetworkService.TcpSettings.TCP_BLOCKING_CLIENT, NetworkService.TcpSettings.TCP_CONNECT_TIMEOUT, IndexSettings.QUERY_STRING_ANALYZE_WILDCARD, IndexSettings.QUERY_STRING_ALLOW_LEADING_WILDCARD, diff --git a/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java b/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java index 89c04198e7f..134557a28ad 100644 --- a/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java +++ b/core/src/main/java/org/elasticsearch/http/HttpServerTransport.java @@ -28,7 +28,6 @@ import org.elasticsearch.rest.RestRequest; public interface HttpServerTransport extends LifecycleComponent { String HTTP_SERVER_WORKER_THREAD_NAME_PREFIX = "http_server_worker"; - String HTTP_SERVER_BOSS_THREAD_NAME_PREFIX = "http_server_boss"; BoundTransportAddress boundAddress(); diff --git a/core/src/main/java/org/elasticsearch/transport/TcpTransport.java b/core/src/main/java/org/elasticsearch/transport/TcpTransport.java index d18cdc85e74..8e3e62fe65a 100644 --- a/core/src/main/java/org/elasticsearch/transport/TcpTransport.java +++ b/core/src/main/java/org/elasticsearch/transport/TcpTransport.java @@ -112,8 +112,6 @@ import static org.elasticsearch.common.util.concurrent.ConcurrentCollections.new public abstract class TcpTransport extends AbstractLifecycleComponent implements Transport { public static final String TRANSPORT_SERVER_WORKER_THREAD_NAME_PREFIX = "transport_server_worker"; - public static final String TRANSPORT_SERVER_BOSS_THREAD_NAME_PREFIX = "transport_server_boss"; - public static final String TRANSPORT_CLIENT_WORKER_THREAD_NAME_PREFIX = "transport_client_worker"; public static final String TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX = "transport_client_boss"; // the scheduled internal ping interval setting, defaults to disabled (-1) @@ -137,10 +135,6 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i boolSetting("transport.tcp.keep_alive", NetworkService.TcpSettings.TCP_KEEP_ALIVE, Setting.Property.NodeScope); public static final Setting TCP_REUSE_ADDRESS = boolSetting("transport.tcp.reuse_address", NetworkService.TcpSettings.TCP_REUSE_ADDRESS, Setting.Property.NodeScope); - public static final Setting TCP_BLOCKING_CLIENT = - boolSetting("transport.tcp.blocking_client", NetworkService.TcpSettings.TCP_BLOCKING_CLIENT, Setting.Property.NodeScope); - public static final Setting TCP_BLOCKING_SERVER = - boolSetting("transport.tcp.blocking_server", NetworkService.TcpSettings.TCP_BLOCKING_SERVER, Setting.Property.NodeScope); public static final Setting TCP_SEND_BUFFER_SIZE = Setting.byteSizeSetting("transport.tcp.send_buffer_size", NetworkService.TcpSettings.TCP_SEND_BUFFER_SIZE, Setting.Property.NodeScope); @@ -150,7 +144,6 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i private static final long NINETY_PER_HEAP_SIZE = (long) (JvmInfo.jvmInfo().getMem().getHeapMax().getBytes() * 0.9); private static final int PING_DATA_SIZE = -1; - protected final boolean blockingClient; private final CircuitBreakerService circuitBreakerService; // package visibility for tests protected final ScheduledPing scheduledPing; @@ -194,7 +187,6 @@ public abstract class TcpTransport extends AbstractLifecycleComponent i this.compress = Transport.TRANSPORT_TCP_COMPRESS.get(settings); this.networkService = networkService; this.transportName = transportName; - this.blockingClient = TCP_BLOCKING_CLIENT.get(settings); defaultConnectionProfile = buildDefaultConnectionProfile(settings); } diff --git 
a/core/src/main/java/org/elasticsearch/transport/Transports.java b/core/src/main/java/org/elasticsearch/transport/Transports.java index c187e3baf23..c7a0fe4d4f5 100644 --- a/core/src/main/java/org/elasticsearch/transport/Transports.java +++ b/core/src/main/java/org/elasticsearch/transport/Transports.java @@ -37,11 +37,8 @@ public enum Transports { public static final boolean isTransportThread(Thread t) { final String threadName = t.getName(); for (String s : Arrays.asList( - HttpServerTransport.HTTP_SERVER_BOSS_THREAD_NAME_PREFIX, HttpServerTransport.HTTP_SERVER_WORKER_THREAD_NAME_PREFIX, - TcpTransport.TRANSPORT_SERVER_BOSS_THREAD_NAME_PREFIX, TcpTransport.TRANSPORT_SERVER_WORKER_THREAD_NAME_PREFIX, - TcpTransport.TRANSPORT_CLIENT_WORKER_THREAD_NAME_PREFIX, TcpTransport.TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX, TEST_MOCK_TRANSPORT_THREAD_PREFIX)) { if (threadName.contains(s)) { diff --git a/docs/reference/migration/migrate_6_0/settings.asciidoc b/docs/reference/migration/migrate_6_0/settings.asciidoc index c2f20de7d21..65f242c078b 100644 --- a/docs/reference/migration/migrate_6_0/settings.asciidoc +++ b/docs/reference/migration/migrate_6_0/settings.asciidoc @@ -20,3 +20,10 @@ recognized anymore. The `default` `index.store.type` has been removed. If you were using it, we advise that you simply remove it from your index settings and Elasticsearch will use the best `store` implementation for your operating system. + +==== Network settings + +The blocking TCP client, blocking TCP server, and blocking HTTP server have been removed. +As a consequence, the `network.tcp.blocking_server`, `network.tcp.blocking_client`, +`network.tcp.blocking`, `transport.tcp.blocking_client`, `transport.tcp.blocking_server`, +and `http.tcp.blocking_server` settings are not recognized anymore.
\ No newline at end of file diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java index 138ed0a67be..10a0a159c23 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/http/netty4/Netty4HttpServerTransport.java @@ -32,7 +32,6 @@ import io.netty.channel.ChannelOption; import io.netty.channel.FixedRecvByteBufAllocator; import io.netty.channel.RecvByteBufAllocator; import io.netty.channel.nio.NioEventLoopGroup; -import io.netty.channel.oio.OioEventLoopGroup; import io.netty.handler.codec.ByteToMessageDecoder; import io.netty.handler.codec.http.HttpContentCompressor; import io.netty.handler.codec.http.HttpContentDecompressor; @@ -78,7 +77,6 @@ import org.elasticsearch.transport.BindTransportException; import org.elasticsearch.transport.netty4.Netty4OpenChannelsHandler; import org.elasticsearch.transport.netty4.Netty4Utils; import org.elasticsearch.transport.netty4.channel.PrivilegedNioServerSocketChannel; -import org.elasticsearch.transport.netty4.channel.PrivilegedOioServerSocketChannel; import java.io.IOException; import java.net.InetAddress; @@ -135,8 +133,6 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem boolSetting("http.tcp_no_delay", NetworkService.TcpSettings.TCP_NO_DELAY, Property.NodeScope, Property.Shared); public static final Setting SETTING_HTTP_TCP_KEEP_ALIVE = boolSetting("http.tcp.keep_alive", NetworkService.TcpSettings.TCP_KEEP_ALIVE, Property.NodeScope, Property.Shared); - public static final Setting SETTING_HTTP_TCP_BLOCKING_SERVER = - boolSetting("http.tcp.blocking_server", NetworkService.TcpSettings.TCP_BLOCKING_SERVER, Property.NodeScope, Property.Shared); public static final Setting SETTING_HTTP_TCP_REUSE_ADDRESS = boolSetting("http.tcp.reuse_address", NetworkService.TcpSettings.TCP_REUSE_ADDRESS, Property.NodeScope, Property.Shared); @@ -174,8 +170,6 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem protected final int workerCount; - protected final boolean blockingServer; - protected final boolean pipelining; protected final int pipeliningMaxEvents; @@ -240,7 +234,6 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem this.maxCumulationBufferCapacity = SETTING_HTTP_NETTY_MAX_CUMULATION_BUFFER_CAPACITY.get(settings); this.maxCompositeBufferComponents = SETTING_HTTP_NETTY_MAX_COMPOSITE_BUFFER_COMPONENTS.get(settings); this.workerCount = SETTING_HTTP_WORKER_COUNT.get(settings); - this.blockingServer = SETTING_HTTP_TCP_BLOCKING_SERVER.get(settings); this.port = SETTING_HTTP_PORT.get(settings); this.bindHosts = SETTING_HTTP_BIND_HOST.get(settings).toArray(Strings.EMPTY_ARRAY); this.publishHosts = SETTING_HTTP_PUBLISH_HOST.get(settings).toArray(Strings.EMPTY_ARRAY); @@ -293,15 +286,10 @@ public class Netty4HttpServerTransport extends AbstractLifecycleComponent implem this.serverOpenChannels = new Netty4OpenChannelsHandler(logger); serverBootstrap = new ServerBootstrap(); - if (blockingServer) { - serverBootstrap.group(new OioEventLoopGroup(workerCount, daemonThreadFactory(settings, - HTTP_SERVER_WORKER_THREAD_NAME_PREFIX))); - serverBootstrap.channel(PrivilegedOioServerSocketChannel.class); - } else { - serverBootstrap.group(new NioEventLoopGroup(workerCount, daemonThreadFactory(settings, - 
HTTP_SERVER_WORKER_THREAD_NAME_PREFIX))); - serverBootstrap.channel(PrivilegedNioServerSocketChannel.class); - } + + serverBootstrap.group(new NioEventLoopGroup(workerCount, daemonThreadFactory(settings, + HTTP_SERVER_WORKER_THREAD_NAME_PREFIX))); + serverBootstrap.channel(PrivilegedNioServerSocketChannel.class); serverBootstrap.childHandler(configureServerChannelHandler()); diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java index 0516a449629..59f9447d61c 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/Netty4Plugin.java @@ -58,7 +58,6 @@ public class Netty4Plugin extends Plugin implements NetworkPlugin { Netty4HttpServerTransport.SETTING_HTTP_WORKER_COUNT, Netty4HttpServerTransport.SETTING_HTTP_TCP_NO_DELAY, Netty4HttpServerTransport.SETTING_HTTP_TCP_KEEP_ALIVE, - Netty4HttpServerTransport.SETTING_HTTP_TCP_BLOCKING_SERVER, Netty4HttpServerTransport.SETTING_HTTP_TCP_REUSE_ADDRESS, Netty4HttpServerTransport.SETTING_HTTP_TCP_SEND_BUFFER_SIZE, Netty4HttpServerTransport.SETTING_HTTP_TCP_RECEIVE_BUFFER_SIZE, diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java index 7dd0a65dfc4..bb0fda63642 100644 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java +++ b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/Netty4Transport.java @@ -32,7 +32,6 @@ import io.netty.channel.ChannelOption; import io.netty.channel.FixedRecvByteBufAllocator; import io.netty.channel.RecvByteBufAllocator; import io.netty.channel.nio.NioEventLoopGroup; -import io.netty.channel.oio.OioEventLoopGroup; import io.netty.util.concurrent.Future; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; @@ -65,8 +64,6 @@ import org.elasticsearch.transport.TransportServiceAdapter; import org.elasticsearch.transport.TransportSettings; import org.elasticsearch.transport.netty4.channel.PrivilegedNioServerSocketChannel; import org.elasticsearch.transport.netty4.channel.PrivilegedNioSocketChannel; -import org.elasticsearch.transport.netty4.channel.PrivilegedOioServerSocketChannel; -import org.elasticsearch.transport.netty4.channel.PrivilegedOioSocketChannel; import java.io.IOException; import java.net.InetSocketAddress; @@ -193,13 +190,8 @@ public class Netty4Transport extends TcpTransport { private Bootstrap createBootstrap() { final Bootstrap bootstrap = new Bootstrap(); - if (TCP_BLOCKING_CLIENT.get(settings)) { - bootstrap.group(new OioEventLoopGroup(1, daemonThreadFactory(settings, TRANSPORT_CLIENT_WORKER_THREAD_NAME_PREFIX))); - bootstrap.channel(PrivilegedOioSocketChannel.class); - } else { - bootstrap.group(new NioEventLoopGroup(workerCount, daemonThreadFactory(settings, TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX))); - bootstrap.channel(PrivilegedNioSocketChannel.class); - } + bootstrap.group(new NioEventLoopGroup(workerCount, daemonThreadFactory(settings, TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX))); + bootstrap.channel(PrivilegedNioSocketChannel.class); bootstrap.handler(getClientChannelInitializer()); @@ -282,13 +274,8 @@ public class Netty4Transport extends TcpTransport { final ServerBootstrap serverBootstrap 
= new ServerBootstrap(); - if (TCP_BLOCKING_SERVER.get(settings)) { - serverBootstrap.group(new OioEventLoopGroup(workerCount, workerFactory)); - serverBootstrap.channel(PrivilegedOioServerSocketChannel.class); - } else { - serverBootstrap.group(new NioEventLoopGroup(workerCount, workerFactory)); - serverBootstrap.channel(PrivilegedNioServerSocketChannel.class); - } + serverBootstrap.group(new NioEventLoopGroup(workerCount, workerFactory)); + serverBootstrap.channel(PrivilegedNioServerSocketChannel.class); serverBootstrap.childHandler(getServerChannelInitializer(name, settings)); diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioServerSocketChannel.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioServerSocketChannel.java deleted file mode 100644 index c0643adfb16..00000000000 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioServerSocketChannel.java +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.transport.netty4.channel; - -import io.netty.channel.socket.oio.OioServerSocketChannel; -import org.elasticsearch.SpecialPermission; - -import java.net.ServerSocket; -import java.security.AccessController; -import java.security.PrivilegedAction; -import java.security.PrivilegedActionException; -import java.security.PrivilegedExceptionAction; -import java.util.List; - -/** - * Wraps netty calls to {@link ServerSocket#accept()} in {@link AccessController#doPrivileged(PrivilegedAction)} blocks. - * This is necessary to limit {@link java.net.SocketPermission} to the transport module. - */ -public class PrivilegedOioServerSocketChannel extends OioServerSocketChannel { - - @Override - protected int doReadMessages(List buf) throws Exception { - SecurityManager sm = System.getSecurityManager(); - if (sm != null) { - sm.checkPermission(new SpecialPermission()); - } - try { - return AccessController.doPrivileged((PrivilegedExceptionAction) () -> super.doReadMessages(buf)); - } catch (PrivilegedActionException e) { - throw (Exception) e.getCause(); - } - } -} diff --git a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioSocketChannel.java b/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioSocketChannel.java deleted file mode 100644 index e5a169e7a24..00000000000 --- a/modules/transport-netty4/src/main/java/org/elasticsearch/transport/netty4/channel/PrivilegedOioSocketChannel.java +++ /dev/null @@ -1,54 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.transport.netty4.channel; - -import io.netty.channel.socket.oio.OioSocketChannel; -import org.elasticsearch.SpecialPermission; - -import java.net.SocketAddress; -import java.security.AccessController; -import java.security.PrivilegedAction; -import java.security.PrivilegedActionException; -import java.security.PrivilegedExceptionAction; - -/** - * Wraps netty calls to {@link java.net.Socket#connect(SocketAddress)} in - * {@link AccessController#doPrivileged(PrivilegedAction)} blocks. This is necessary to limit - * {@link java.net.SocketPermission} to the transport module. - */ -public class PrivilegedOioSocketChannel extends OioSocketChannel { - - @Override - protected void doConnect(SocketAddress remoteAddress, SocketAddress localAddress) throws Exception { - SecurityManager sm = System.getSecurityManager(); - if (sm != null) { - sm.checkPermission(new SpecialPermission()); - } - try { - AccessController.doPrivileged((PrivilegedExceptionAction) () -> { - super.doConnect(remoteAddress, localAddress); - return null; - }); - } catch (PrivilegedActionException e) { - throw (Exception) e.getCause(); - } - super.doConnect(remoteAddress, localAddress); - } -} From 519a9c469d93e983565387f810d87fcc6c5fe19e Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 17 Jan 2017 12:15:22 +0100 Subject: [PATCH 27/28] Update truncate token filter to not mention the keyword tokenizer The advice predates the existence of the keyword field Closes #22650 --- .../analysis/tokenfilters/truncate-tokenfilter.asciidoc | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc index 14652f46342..4c28ddba381 100644 --- a/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc +++ b/docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc @@ -2,9 +2,7 @@ === Truncate Token Filter The `truncate` token filter can be used to truncate tokens into a -specific length. This can come in handy with keyword (single token) -based mapped fields that are used for sorting in order to reduce memory -usage. +specific length. It accepts a `length` parameter which controls the number of characters to truncate to, defaults to `10`.
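Since the trimmed paragraph above now carries the whole description of the filter, a minimal sketch of how the `truncate` filter is typically wired into a custom analyzer may be useful for context; the index, filter, and analyzer names here are illustrative only and are not part of this change:

---------------------------------------------------------------------------
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "truncate_5": { "type": "truncate", "length": 5 }
      },
      "analyzer": {
        "truncating": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "truncate_5" ]
        }
      }
    }
  }
}
---------------------------------------------------------------------------

Omitting `length` falls back to the default of `10` described above.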
From 401438819ed65c92a27f147689774c03450f78b4 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 17 Jan 2017 12:20:03 +0100 Subject: [PATCH 28/28] Docs: Fix the first highlighting example to work Closes #22642 --- docs/reference/search/request/highlighting.asciidoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/reference/search/request/highlighting.asciidoc b/docs/reference/search/request/highlighting.asciidoc index 73ade7a47d6..dc2673cebb4 100644 --- a/docs/reference/search/request/highlighting.asciidoc +++ b/docs/reference/search/request/highlighting.asciidoc @@ -11,7 +11,7 @@ The following is an example of the search request body: GET /_search { "query" : { - "match": { "user": "kimchy" } + "match": { "content": "kimchy" } }, "highlight" : { "fields" : {
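The hunk continues beyond this excerpt. For context, a sketch of the corrected request in full, assuming the example highlights the same `content` field the fixed query now targets (the highlight field is inferred from the fix, not shown in the excerpt):

---------------------------------------------------------------------------
GET /_search
{
    "query" : {
        "match": { "content": "kimchy" }
    },
    "highlight" : {
        "fields" : {
            "content" : {}
        }
    }
}
---------------------------------------------------------------------------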