Merge branch 'master' into enhancement/shrink_request_parser

commit 1a59a8418a

@@ -7,7 +7,7 @@ attention.
-->

- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?
-- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/.github/CONTRIBUTING.md)?
+- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?
- If submitting code, have you built your formula locally prior to submission with `gradle check`?
- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?

@@ -296,7 +296,7 @@ gradle :distribution:integ-test-zip:integTest \
  -Dtests.method="test {p0=cat.shards/10_basic/Help}"
---------------------------------------------------------------------------

-`RestNIT` are the executable test classes that runs all the
+`RestIT` are the executable test classes that runs all the
yaml suites available within the `rest-api-spec` folder.

The REST tests support all the options provided by the randomized runner, plus the following:

@@ -0,0 +1,63 @@
# Elasticsearch Microbenchmark Suite

This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).

## Purpose

We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended for performance-critical components to spot performance
regressions. The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.

## Getting Started

Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the result.

## Running Microbenchmarks

Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.

Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).

If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark`, or have any other special requirements,
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```

JMH supports lots of command line parameters. Add `-h` to the command above for more information about the available command line options.
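
For example, to run only the allocation benchmark with fewer forks and iterations, you could pass JMH options through the uberjar (the trailing argument is a regular expression that selects benchmark names; `-f`, `-wi` and `-i` control forks, warmup iterations and measurement iterations):

```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar AllocationBenchmark -f 1 -wi 5 -i 5
```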

## Adding Microbenchmarks

Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).

In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`, as in the sketch below.
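
As an illustration, a minimal benchmark might look like the following sketch (a hypothetical class for demonstration only, not part of the suite):

```java
package org.elasticsearch.benchmark;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class MySampleBenchmark {
    // do not make this field final; vary the input size via @Param (JMH runs the benchmark once per value)
    @Param({"100", "10000", "1000000"})
    public int size;

    @Benchmark
    public long sumUpTo() {
        long sum = 0;
        for (int i = 0; i < size; i++) {
            sum += i;
        }
        // return the result so the JIT cannot eliminate the loop as dead code
        return sum;
    }
}
```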

## Tips and Best Practices

To get realistic results, you should exercise care when running your benchmarks. Here are a few tips:

### Do

* Ensure that the system executing your microbenchmarks has as little load as possible and shut down every process that can cause unnecessary
  runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Ensure to run enough warmup iterations to get into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset` (see the example after this list).
* Fix the CPU frequency to avoid Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
  `performance` CPU governor.
* Vary problem input size with `@Param`.
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
  * Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
    your results. If so, try to force a GC between runs (`-gc true`).
  * Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
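
For example, assuming a Linux machine and the uberjar built by `gradle :benchmarks:jmhJar`, a run pinned to two cores might look like this:

```
taskset -c 0,1 java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```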

### Don't

* Blindly believe the numbers that your microbenchmark produces but verify them by measuring, e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.

@@ -0,0 +1,96 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

buildscript {
    repositories {
        maven {
            url 'https://plugins.gradle.org/m2/'
        }
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
    }
}

apply plugin: 'elasticsearch.build'
// build an uberjar with all benchmarks
apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'

archivesBaseName = 'elasticsearch-benchmarks'
mainClassName = 'org.openjdk.jmh.Main'

// never try to invoke tests on the benchmark project - there aren't any
check.dependsOn.remove(test)
// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
task test(type: Test, overwrite: true)

dependencies {
    compile("org.elasticsearch:elasticsearch:${version}") {
        // JMH ships with the conflicting version 4.6 (JMH will not update this dependency as it is Java 6 compatible and joptsimple is one
        // of the most recent compatible versions). This prevents us from using jopt-simple in benchmarks (which should be ok) but allows us
        // to invoke the JMH uberjar as usual.
        exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
    }
    compile "org.openjdk.jmh:jmh-core:$versions.jmh"
    compile "org.openjdk.jmh:jmh-generator-annprocess:$versions.jmh"
    // Dependencies of JMH
    runtime 'net.sf.jopt-simple:jopt-simple:4.6'
    runtime 'org.apache.commons:commons-math3:3.2'
}

compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"

forbiddenApis {
    // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
    ignoreFailures = true
}

// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false

thirdPartyAudit.excludes = [
    // these classes intentionally use JDK internal API (and this is ok since the project is maintained by Oracle employees)
    'org.openjdk.jmh.profile.AbstractHotspotProfiler',
    'org.openjdk.jmh.profile.HotspotThreadProfiler',
    'org.openjdk.jmh.profile.HotspotClassloadingProfiler',
    'org.openjdk.jmh.profile.HotspotCompilationProfiler',
    'org.openjdk.jmh.profile.HotspotMemoryProfiler',
    'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
    'org.openjdk.jmh.util.Utils'
]

shadowJar {
    classifier = 'benchmarks'
}

// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
    dependsOn: shadowJar,
    description: 'Generates an uberjar with the microbenchmarks and all dependencies',
    group: 'Benchmark'
)

task jmh(
    dependsOn: runShadow,
    description: 'Runs all microbenchmarks',
    group: 'Benchmark'
)

@@ -0,0 +1,170 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.benchmark.routing.allocation;

import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.settings.Settings;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.util.Collections;
import java.util.concurrent.TimeUnit;

@Fork(3)
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@SuppressWarnings("unused") // invoked by benchmarking framework
public class AllocationBenchmark {
    // Do NOT make any field final (even if it is not annotated with @Param)! See also
    // http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_10_ConstantFold.java

    // we cannot use individual @Params as some will lead to invalid combinations which do not let the benchmark terminate. JMH offers no
    // support to constrain the combinations of benchmark parameters and we do not want to rely on OptionsBuilder as each benchmark would
    // need its own main method and we cannot execute more than one class with a main method per JAR.
    @Param({
        // indices, shards, replicas, nodes
        " 10, 1, 0, 1",
        " 10, 3, 0, 1",
        " 10, 10, 0, 1",
        " 100, 1, 0, 1",
        " 100, 3, 0, 1",
        " 100, 10, 0, 1",

        " 10, 1, 0, 10",
        " 10, 3, 0, 10",
        " 10, 10, 0, 10",
        " 100, 1, 0, 10",
        " 100, 3, 0, 10",
        " 100, 10, 0, 10",

        " 10, 1, 1, 10",
        " 10, 3, 1, 10",
        " 10, 10, 1, 10",
        " 100, 1, 1, 10",
        " 100, 3, 1, 10",
        " 100, 10, 1, 10",

        " 10, 1, 2, 10",
        " 10, 3, 2, 10",
        " 10, 10, 2, 10",
        " 100, 1, 2, 10",
        " 100, 3, 2, 10",
        " 100, 10, 2, 10",

        " 10, 1, 0, 50",
        " 10, 3, 0, 50",
        " 10, 10, 0, 50",
        " 100, 1, 0, 50",
        " 100, 3, 0, 50",
        " 100, 10, 0, 50",

        " 10, 1, 1, 50",
        " 10, 3, 1, 50",
        " 10, 10, 1, 50",
        " 100, 1, 1, 50",
        " 100, 3, 1, 50",
        " 100, 10, 1, 50",

        " 10, 1, 2, 50",
        " 10, 3, 2, 50",
        " 10, 10, 2, 50",
        " 100, 1, 2, 50",
        " 100, 3, 2, 50",
        " 100, 10, 2, 50"
    })
    public String indicesShardsReplicasNodes = "10,1,0,1";

    public int numTags = 2;

    private AllocationService strategy;
    private ClusterState initialClusterState;

    @Setup
    public void setUp() throws Exception {
        final String[] params = indicesShardsReplicasNodes.split(",");

        int numIndices = toInt(params[0]);
        int numShards = toInt(params[1]);
        int numReplicas = toInt(params[2]);
        int numNodes = toInt(params[3]);

        strategy = Allocators.createAllocationService(Settings.builder()
            .put("cluster.routing.allocation.awareness.attributes", "tag")
            .build());

        MetaData.Builder mb = MetaData.builder();
        for (int i = 1; i <= numIndices; i++) {
            mb.put(IndexMetaData.builder("test_" + i)
                .settings(Settings.builder().put("index.version.created", Version.CURRENT))
                .numberOfShards(numShards)
                .numberOfReplicas(numReplicas)
            );
        }
        MetaData metaData = mb.build();
        RoutingTable.Builder rb = RoutingTable.builder();
        for (int i = 1; i <= numIndices; i++) {
            rb.addAsNew(metaData.index("test_" + i));
        }
        RoutingTable routingTable = rb.build();
        DiscoveryNodes.Builder nb = DiscoveryNodes.builder();
        for (int i = 1; i <= numNodes; i++) {
            nb.put(Allocators.newNode("node" + i, Collections.singletonMap("tag", "tag_" + (i % numTags))));
        }
        initialClusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).nodes(nb).build();
    }

    private int toInt(String v) {
        return Integer.valueOf(v.trim());
    }

    @Benchmark
    public ClusterState measureAllocation() {
        ClusterState clusterState = initialClusterState;
        while (clusterState.getRoutingNodes().hasUnassignedShards()) {
            RoutingAllocation.Result result = strategy.applyStartedShards(clusterState,
                clusterState.getRoutingNodes().shardsWithState(ShardRoutingState.INITIALIZING));
            clusterState = ClusterState.builder(clusterState).routingResult(result).build();
            result = strategy.reroute(clusterState, "reroute");
            clusterState = ClusterState.builder(clusterState).routingResult(result).build();
        }
        return clusterState;
    }
}

@@ -0,0 +1,108 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.benchmark.routing.allocation;

import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterModule;
import org.elasticsearch.cluster.EmptyClusterInfoService;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;
import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.DummyTransportAddress;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.gateway.GatewayAllocator;

import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public final class Allocators {
    private static class NoopGatewayAllocator extends GatewayAllocator {
        public static final NoopGatewayAllocator INSTANCE = new NoopGatewayAllocator();

        protected NoopGatewayAllocator() {
            super(Settings.EMPTY, null, null);
        }

        @Override
        public void applyStartedShards(StartedRerouteAllocation allocation) {
            // noop
        }

        @Override
        public void applyFailedShards(FailedRerouteAllocation allocation) {
            // noop
        }

        @Override
        public boolean allocateUnassigned(RoutingAllocation allocation) {
            return false;
        }
    }

    private Allocators() {
        throw new AssertionError("Do not instantiate");
    }


    public static AllocationService createAllocationService(Settings settings) throws NoSuchMethodException, InstantiationException,
            IllegalAccessException, InvocationTargetException {
        return createAllocationService(settings, new ClusterSettings(Settings.Builder.EMPTY_SETTINGS,
            ClusterSettings.BUILT_IN_CLUSTER_SETTINGS));
    }

    public static AllocationService createAllocationService(Settings settings, ClusterSettings clusterSettings) throws
            InvocationTargetException, NoSuchMethodException, InstantiationException, IllegalAccessException {
        return new AllocationService(settings,
            defaultAllocationDeciders(settings, clusterSettings),
            NoopGatewayAllocator.INSTANCE, new BalancedShardsAllocator(settings), EmptyClusterInfoService.INSTANCE);
    }

    public static AllocationDeciders defaultAllocationDeciders(Settings settings, ClusterSettings clusterSettings) throws
            IllegalAccessException, InvocationTargetException, InstantiationException, NoSuchMethodException {
        List<AllocationDecider> list = new ArrayList<>();
        // Keep a deterministic order of allocation deciders for the benchmark
        for (Class<? extends AllocationDecider> deciderClass : ClusterModule.DEFAULT_ALLOCATION_DECIDERS) {
            try {
                Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class,
                    ClusterSettings.class);
                list.add(constructor.newInstance(settings, clusterSettings));
            } catch (NoSuchMethodException e) {
                Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class);
                list.add(constructor.newInstance(settings));
            }
        }
        return new AllocationDeciders(settings, list.toArray(new AllocationDecider[0]));
    }

    public static DiscoveryNode newNode(String nodeId, Map<String, String> attributes) {
        return new DiscoveryNode("", nodeId, DummyTransportAddress.INSTANCE, attributes, Sets.newHashSet(DiscoveryNode.Role.MASTER,
            DiscoveryNode.Role.DATA), Version.CURRENT);
    }
}

@@ -0,0 +1,8 @@
# Do not log at all if it is not really critical - we're in a benchmark
benchmarks.es.logger.level=ERROR
log4j.rootLogger=${benchmarks.es.logger.level}, out

log4j.appender.out=org.apache.log4j.ConsoleAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.conversionPattern=[%d{ISO8601}][%-5p][%-25c] %m%n

@@ -160,6 +160,7 @@ subprojects {
     them as external dependencies so the build plugin that we use can be used
     to build elasticsearch plugins outside of the elasticsearch source tree. */
  ext.projectSubstitutions = [
+   "org.elasticsearch.gradle:build-tools:${version}": ':build-tools',
    "org.elasticsearch:rest-api-spec:${version}": ':rest-api-spec',
    "org.elasticsearch:elasticsearch:${version}": ':core',
    "org.elasticsearch.test:framework:${version}": ':test:framework',

@@ -103,6 +103,7 @@ if (project == rootProject) {
      url "https://oss.sonatype.org/content/repositories/snapshots/"
    }
  }
+ test.exclude 'org/elasticsearch/test/NamingConventionsCheckBadClasses*'
}

/*****************************************************************************

@@ -122,8 +123,8 @@ if (project != rootProject) {
  // build-tools is not ready for primetime with these...
  dependencyLicenses.enabled = false
  forbiddenApisMain.enabled = false
  forbiddenApisTest.enabled = false
  jarHell.enabled = false
  loggerUsageCheck.enabled = false
  thirdPartyAudit.enabled = false

  // test for elasticsearch.build tries to run with ES...

@@ -137,4 +138,9 @@ if (project != rootProject) {
    // the file that actually defines nocommit
    exclude '**/ForbiddenPatternsTask.groovy'
  }
+
+ namingConventions {
+   testClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$UnitTestCase'
+   integTestClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$IntegTestCase'
+ }
}

@@ -21,6 +21,7 @@ package org.elasticsearch.gradle.precommit

import org.elasticsearch.gradle.LoggedExec
import org.elasticsearch.gradle.VersionProperties
+import org.gradle.api.artifacts.Dependency
import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.InputFiles

@@ -57,8 +58,27 @@ public class NamingConventionsTask extends LoggedExec {
    @Input
    boolean skipIntegTestInDisguise = false

+   /**
+    * Superclass for all tests.
+    */
+   @Input
+   String testClass = 'org.apache.lucene.util.LuceneTestCase'
+
+   /**
+    * Superclass for all integration tests.
+    */
+   @Input
+   String integTestClass = 'org.elasticsearch.test.ESIntegTestCase'
+
    public NamingConventionsTask() {
        dependsOn(classpath)
+       // Extra classpath contains the actual test
+       project.configurations.create('namingConventions')
+       Dependency buildToolsDep = project.dependencies.add('namingConventions',
+               "org.elasticsearch.gradle:build-tools:${VersionProperties.elasticsearch}")
+       buildToolsDep.transitive = false // We don't need gradle in the classpath. It conflicts.
+       FileCollection extraClasspath = project.configurations.namingConventions
+       dependsOn(extraClasspath)

        description = "Runs NamingConventionsCheck on ${classpath}"
        executable = new File(project.javaHome, 'bin/java')
        onlyIf { project.sourceSets.test.output.classesDir.exists() }

@@ -69,7 +89,8 @@ public class NamingConventionsTask extends LoggedExec {
        project.afterEvaluate {
            doFirst {
                args('-Djna.nosys=true')
-               args('-cp', classpath.asPath, 'org.elasticsearch.test.NamingConventionsCheck')
+               args('-cp', (classpath + extraClasspath).asPath, 'org.elasticsearch.test.NamingConventionsCheck')
+               args(testClass, integTestClass)
                if (skipIntegTestInDisguise) {
                    args('--skip-integ-tests-in-disguise')
                }

@@ -79,7 +100,7 @@ public class NamingConventionsTask extends LoggedExec {
             * process of ignoring them lets us validate that they were found so this ignore parameter acts
             * as the test for the NamingConventionsCheck.
             */
-           if (':test:framework'.equals(project.path)) {
+           if (':build-tools'.equals(project.path)) {
                args('--self-test')
            }
            args('--', project.sourceSets.test.output.classesDir.absolutePath)

@@ -34,7 +34,6 @@ class PrecommitTasks {
            configureForbiddenApis(project),
            configureCheckstyle(project),
            configureNamingConventions(project),
-           configureLoggerUsage(project),
            project.tasks.create('forbiddenPatterns', ForbiddenPatternsTask.class),
            project.tasks.create('licenseHeaders', LicenseHeadersTask.class),
            project.tasks.create('jarHell', JarHellTask.class),

@@ -49,6 +48,20 @@ class PrecommitTasks {
            UpdateShasTask updateShas = project.tasks.create('updateShas', UpdateShasTask.class)
            updateShas.parentTask = dependencyLicenses
        }
+       if (project.path != ':build-tools') {
+           /*
+            * Sadly, build-tools can't have logger-usage-check because that
+            * would create a circular project dependency between build-tools
+            * (which provides NamingConventionsCheck) and :test:logger-usage
+            * which provides the logger usage check. Since the build tools
+            * don't use the logger usage check because they don't have any
+            * of Elasticsearch's loggers and :test:logger-usage actually does
+            * use the NamingConventionsCheck, we break the circular dependency
+            * here.
+            */
+           precommitTasks.add(configureLoggerUsage(project))
+       }
+

        Map<String, Object> precommitOptions = [
            name: 'precommit',

@@ -291,9 +291,10 @@ class ClusterFormationTasks {
        File configDir = new File(node.homeDir, 'config')
        copyConfig.into(configDir) // copy must always have a general dest dir, even though we don't use it
        for (Map.Entry<String,Object> extraConfigFile : node.config.extraConfigFiles.entrySet()) {
+           Object extraConfigFileValue = extraConfigFile.getValue()
            copyConfig.doFirst {
                // make sure the copy won't be a no-op or act on a directory
-               File srcConfigFile = project.file(extraConfigFile.getValue())
+               File srcConfigFile = project.file(extraConfigFileValue)
                if (srcConfigFile.isDirectory()) {
                    throw new GradleException("Source for extraConfigFile must be a file: ${srcConfigFile}")
                }

@@ -303,7 +304,7 @@ class ClusterFormationTasks {
            }
            File destConfigFile = new File(node.homeDir, 'config/' + extraConfigFile.getKey())
            // wrap source file in closure to delay resolution to execution time
-           copyConfig.from({ extraConfigFile.getValue() }) {
+           copyConfig.from({ extraConfigFileValue }) {
                // this must be in a closure so it is only applied to the single file specified in from above
                into(configDir.toPath().relativize(destConfigFile.canonicalFile.parentFile.toPath()).toFile())
                rename { destConfigFile.name }

@@ -25,14 +25,11 @@ import java.nio.file.FileVisitResult;
import java.nio.file.FileVisitor;
import java.nio.file.Files;
import java.nio.file.Path;
+import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.HashSet;
import java.util.Set;

-import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.common.SuppressForbidden;
-import org.elasticsearch.common.io.PathUtils;

/**
 * Checks that all tests in a directory are named according to our naming conventions. This is important because tests that do not follow
 * our conventions aren't run by gradle. This was once a glorious unit test but now that Elasticsearch is a multi-module project it must be

@@ -46,11 +43,13 @@ import org.elasticsearch.common.io.PathUtils;
 * {@code --self-test} that is only run in the test:framework project.
 */
public class NamingConventionsCheck {
-   public static void main(String[] args) throws IOException, ClassNotFoundException {
-       NamingConventionsCheck check = new NamingConventionsCheck();
+   public static void main(String[] args) throws IOException {
+       int i = 0;
+       NamingConventionsCheck check = new NamingConventionsCheck(
+               loadClassWithoutInitializing(args[i++]),
+               loadClassWithoutInitializing(args[i++]));
        boolean skipIntegTestsInDisguise = false;
        boolean selfTest = false;
-       int i = 0;
        while (true) {
            switch (args[i]) {
                case "--skip-integ-tests-in-disguise":

@@ -69,7 +68,7 @@ public class NamingConventionsCheck {
            }
            break;
        }
-       check.check(PathUtils.get(args[i]));
+       check.check(Paths.get(args[i]));

        if (selfTest) {
            assertViolation("WrongName", check.missingSuffix);

@@ -82,14 +81,12 @@ public class NamingConventionsCheck {
        }

        // Now we should have no violations
-       assertNoViolations("Not all subclasses of " + ESTestCase.class.getSimpleName()
+       assertNoViolations("Not all subclasses of " + check.testClass.getSimpleName()
                + " match the naming convention. Concrete classes must end with [Tests]", check.missingSuffix);
        assertNoViolations("Classes ending with [Tests] are abstract or interfaces", check.notRunnable);
        assertNoViolations("Found inner classes that are tests, which are excluded from the test runner", check.innerClasses);
-       String classesToSubclass = String.join(",", ESTestCase.class.getSimpleName(), ESTestCase.class.getSimpleName(),
-               ESTokenStreamTestCase.class.getSimpleName(), LuceneTestCase.class.getSimpleName());
-       assertNoViolations("Pure Unit-Test found must subclass one of [" + classesToSubclass + "]", check.pureUnitTest);
-       assertNoViolations("Classes ending with [Tests] must subclass [" + classesToSubclass + "]", check.notImplementing);
+       assertNoViolations("Pure Unit-Test found must subclass [" + check.testClass.getSimpleName() + "]", check.pureUnitTest);
+       assertNoViolations("Classes ending with [Tests] must subclass [" + check.testClass.getSimpleName() + "]", check.notImplementing);
        if (!skipIntegTestsInDisguise) {
            assertNoViolations("Subclasses of ESIntegTestCase should end with IT as they are integration tests",
                    check.integTestsInDisguise);

@@ -103,6 +100,14 @@ public class NamingConventionsCheck {
    private final Set<Class<?>> notRunnable = new HashSet<>();
    private final Set<Class<?>> innerClasses = new HashSet<>();

+   private final Class<?> testClass;
+   private final Class<?> integTestClass;
+
+   public NamingConventionsCheck(Class<?> testClass, Class<?> integTestClass) {
+       this.testClass = testClass;
+       this.integTestClass = integTestClass;
+   }
+
    public void check(Path rootPath) throws IOException {
        Files.walkFileTree(rootPath, new FileVisitor<Path>() {
            /**

@@ -136,9 +141,9 @@ public class NamingConventionsCheck {
                String filename = file.getFileName().toString();
                if (filename.endsWith(".class")) {
                    String className = filename.substring(0, filename.length() - ".class".length());
-                   Class<?> clazz = loadClass(className);
+                   Class<?> clazz = loadClassWithoutInitializing(packageName + className);
                    if (clazz.getName().endsWith("Tests")) {
-                       if (ESIntegTestCase.class.isAssignableFrom(clazz)) {
+                       if (integTestClass.isAssignableFrom(clazz)) {
                            integTestsInDisguise.add(clazz);
                        }
                        if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {

@@ -164,15 +169,7 @@ public class NamingConventionsCheck {
            }

            private boolean isTestCase(Class<?> clazz) {
-               return LuceneTestCase.class.isAssignableFrom(clazz);
-           }
-
-           private Class<?> loadClass(String className) {
-               try {
-                   return Thread.currentThread().getContextClassLoader().loadClass(packageName + className);
-               } catch (ClassNotFoundException e) {
-                   throw new RuntimeException(e);
-               }
+               return testClass.isAssignableFrom(clazz);
            }

            @Override

@@ -186,7 +183,6 @@ public class NamingConventionsCheck {
     * Fail the process if there are any violations in the set. Named to look like a junit assertion even though it isn't because it is
     * similar enough.
     */
-   @SuppressForbidden(reason = "System.err/System.exit")
    private static void assertNoViolations(String message, Set<Class<?>> set) {
        if (false == set.isEmpty()) {
            System.err.println(message + ":");

@@ -201,10 +197,9 @@ public class NamingConventionsCheck {
     * Fail the process if we didn't detect a particular violation. Named to look like a junit assertion even though it isn't because it is
     * similar enough.
     */
-   @SuppressForbidden(reason = "System.err/System.exit")
-   private static void assertViolation(String className, Set<Class<?>> set) throws ClassNotFoundException {
-       className = "org.elasticsearch.test.test.NamingConventionsCheckBadClasses$" + className;
-       if (false == set.remove(Class.forName(className))) {
+   private static void assertViolation(String className, Set<Class<?>> set) {
+       className = "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className;
+       if (false == set.remove(loadClassWithoutInitializing(className))) {
            System.err.println("Error in NamingConventionsCheck! Expected [" + className + "] to be a violation but wasn't.");
            System.exit(1);
        }

@@ -213,9 +208,20 @@ public class NamingConventionsCheck {
    /**
     * Fail the process with the provided message.
     */
    @SuppressForbidden(reason = "System.err/System.exit")
    private static void fail(String reason) {
        System.err.println(reason);
        System.exit(1);
    }

+   static Class<?> loadClassWithoutInitializing(String name) {
+       try {
+           return Class.forName(name,
+                   // Don't initialize the class to save time. Not needed for this test and this doesn't share a VM with any other tests.
+                   false,
+                   // Use our classloader rather than the bootstrap class loader.
+                   NamingConventionsCheck.class.getClassLoader());
+       } catch (ClassNotFoundException e) {
+           throw new RuntimeException(e);
+       }
+   }
}

@@ -482,7 +482,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltCacheFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltTokenFilters.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]breaker[/\\]HierarchyCircuitBreakerService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]cluster[/\\]IndicesClusterStateService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]fielddata[/\\]cache[/\\]IndicesFieldDataCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]fielddata[/\\]cache[/\\]IndicesFieldDataCacheListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]flush[/\\]ShardsSyncedFlushResult.java" checks="LineLength" />

|
@ -31,5 +31,3 @@ org.apache.lucene.index.IndexReader#getCombinedCoreAndDeletesKey()
|
|||
|
||||
@defaultMessage Soon to be removed
|
||||
org.apache.lucene.document.FieldType#numericType()
|
||||
|
||||
org.apache.lucene.document.InetAddressPoint#newPrefixQuery(java.lang.String, java.net.InetAddress, int) @LUCENE-7232
|
||||
|
|
|
@@ -17,9 +17,7 @@
 * under the License.
 */

-package org.elasticsearch.test.test;
-
-import org.elasticsearch.test.ESTestCase;
+package org.elasticsearch.test;

import junit.framework.TestCase;

@@ -30,21 +28,35 @@ public class NamingConventionsCheckBadClasses {
    public static final class NotImplementingTests {
    }

-   public static final class WrongName extends ESTestCase {
+   public static final class WrongName extends UnitTestCase {
+       /*
+        * Dummy test so the tests pass. We do this *and* skip the tests so anyone who jumps back to a branch without these tests can still
+        * compile without a failure. That is because clean doesn't actually clean these....
+        */
+       public void testDummy() {}
    }

-   public static abstract class DummyAbstractTests extends ESTestCase {
+   public static abstract class DummyAbstractTests extends UnitTestCase {
    }

    public interface DummyInterfaceTests {
    }

-   public static final class InnerTests extends ESTestCase {
+   public static final class InnerTests extends UnitTestCase {
+       public void testDummy() {}
    }

-   public static final class WrongNameTheSecond extends ESTestCase {
+   public static final class WrongNameTheSecond extends UnitTestCase {
+       public void testDummy() {}
    }

    public static final class PlainUnit extends TestCase {
        public void testDummy() {}
    }

+   public abstract static class UnitTestCase extends TestCase {
+   }
+
+   public abstract static class IntegTestCase extends UnitTestCase {
+   }
}

@@ -1,5 +1,5 @@
-elasticsearch = 5.0.0
-lucene = 6.0.1
+elasticsearch = 5.0.0-alpha4
+lucene = 6.1.0-snapshot-3a57bea

# optional dependencies
spatial4j = 0.6

@@ -17,3 +17,6 @@ httpclient = 4.5.2
httpcore = 4.4.4
commonslogging = 1.1.3
commonscodec = 1.10
+
+# benchmark dependencies
+jmh = 1.12

@@ -1,117 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.lucene.document;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;

import org.apache.lucene.search.Query;
import org.apache.lucene.util.NumericUtils;
import org.elasticsearch.common.SuppressForbidden;

/**
 * Forked utility methods from Lucene's InetAddressPoint until LUCENE-7232 and
 * LUCENE-7234 are released.
 */
// TODO: remove me when we upgrade to Lucene 6.1
@SuppressForbidden(reason="uses InetAddress.getHostAddress")
public final class XInetAddressPoint {

    private XInetAddressPoint() {}

    /** The minimum value that an ip address can hold. */
    public static final InetAddress MIN_VALUE;
    /** The maximum value that an ip address can hold. */
    public static final InetAddress MAX_VALUE;
    static {
        MIN_VALUE = InetAddressPoint.decode(new byte[InetAddressPoint.BYTES]);
        byte[] maxValueBytes = new byte[InetAddressPoint.BYTES];
        Arrays.fill(maxValueBytes, (byte) 0xFF);
        MAX_VALUE = InetAddressPoint.decode(maxValueBytes);
    }

    /**
     * Return the {@link InetAddress} that compares immediately greater than
     * {@code address}.
     * @throws ArithmeticException if the provided address is the
     *         {@link #MAX_VALUE maximum ip address}
     */
    public static InetAddress nextUp(InetAddress address) {
        if (address.equals(MAX_VALUE)) {
            throw new ArithmeticException("Overflow: there is no greater InetAddress than "
                    + address.getHostAddress());
        }
        byte[] delta = new byte[InetAddressPoint.BYTES];
        delta[InetAddressPoint.BYTES-1] = 1;
        byte[] nextUpBytes = new byte[InetAddressPoint.BYTES];
        NumericUtils.add(InetAddressPoint.BYTES, 0, InetAddressPoint.encode(address), delta, nextUpBytes);
        return InetAddressPoint.decode(nextUpBytes);
    }

    /**
     * Return the {@link InetAddress} that compares immediately less than
     * {@code address}.
     * @throws ArithmeticException if the provided address is the
     *         {@link #MIN_VALUE minimum ip address}
     */
    public static InetAddress nextDown(InetAddress address) {
        if (address.equals(MIN_VALUE)) {
            throw new ArithmeticException("Underflow: there is no smaller InetAddress than "
                    + address.getHostAddress());
        }
        byte[] delta = new byte[InetAddressPoint.BYTES];
        delta[InetAddressPoint.BYTES-1] = 1;
        byte[] nextDownBytes = new byte[InetAddressPoint.BYTES];
        NumericUtils.subtract(InetAddressPoint.BYTES, 0, InetAddressPoint.encode(address), delta, nextDownBytes);
        return InetAddressPoint.decode(nextDownBytes);
    }

    /**
     * Create a prefix query for matching a CIDR network range.
     *
     * @param field field name. must not be {@code null}.
     * @param value any host address
     * @param prefixLength the network prefix length for this address. This is also known as the subnet mask in the context of IPv4
     *        addresses.
     * @throws IllegalArgumentException if {@code field} is null, or prefixLength is invalid.
     * @return a query matching documents with addresses contained within this network
     */
    // TODO: remove me when we upgrade to Lucene 6.0.1
    public static Query newPrefixQuery(String field, InetAddress value, int prefixLength) {
        if (value == null) {
            throw new IllegalArgumentException("InetAddress must not be null");
        }
        if (prefixLength < 0 || prefixLength > 8 * value.getAddress().length) {
            throw new IllegalArgumentException("illegal prefixLength '" + prefixLength
                    + "'. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges");
        }
        // create the lower value by zeroing out the host portion, upper value by filling it with all ones.
        byte lower[] = value.getAddress();
        byte upper[] = value.getAddress();
        for (int i = prefixLength; i < 8 * lower.length; i++) {
            int m = 1 << (7 - (i & 7));
            lower[i >> 3] &= ~m;
            upper[i >> 3] |= m;
        }
        try {
            return InetAddressPoint.newRangeQuery(field, InetAddress.getByAddress(lower), InetAddress.getByAddress(upper));
        } catch (UnknownHostException e) {
            throw new AssertionError(e); // values are coming from InetAddress
        }
    }
}

@@ -283,7 +283,7 @@ public abstract class BlendedTermQuery extends Query {
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
-       if (!super.equals(o)) return false;
+       if (sameClassAs(o) == false) return false;

        BlendedTermQuery that = (BlendedTermQuery) o;
        return Arrays.equals(equalsTerms(), that.equalsTerms());

@@ -291,7 +291,7 @@ public abstract class BlendedTermQuery extends Query {

    @Override
    public int hashCode() {
-       return Objects.hash(super.hashCode(), Arrays.hashCode(equalsTerms()));
+       return Objects.hash(classHash(), Arrays.hashCode(equalsTerms()));
    }

    public static BlendedTermQuery booleanBlendedQuery(Term[] terms, final boolean disableCoord) {

@@ -44,12 +44,12 @@ public final class MinDocQuery extends Query {

    @Override
    public int hashCode() {
-       return Objects.hash(super.hashCode(), minDoc);
+       return Objects.hash(classHash(), minDoc);
    }

    @Override
    public boolean equals(Object obj) {
-       if (super.equals(obj) == false) {
+       if (sameClassAs(obj) == false) {
            return false;
        }
        MinDocQuery that = (MinDocQuery) obj;

@@ -63,9 +63,6 @@ import org.elasticsearch.common.io.PathUtils;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;

@@ -622,8 +619,12 @@ public long ramBytesUsed() {
        Set<BytesRef> seenSurfaceForms = new HashSet<>();

        int dedup = 0;
-       while (reader.read(scratch)) {
-           input.reset(scratch.bytes(), 0, scratch.length());
+       while (true) {
+           BytesRef bytes = reader.next();
+           if (bytes == null) {
+               break;
+           }
+           input.reset(bytes.bytes, bytes.offset, bytes.length);
            short analyzedLength = input.readShort();
            analyzed.grow(analyzedLength+2);
            input.readBytes(analyzed.bytes(), 0, analyzedLength);

@@ -631,13 +632,13 @@ public long ramBytesUsed() {

            long cost = input.readInt();

-           surface.bytes = scratch.bytes();
+           surface.bytes = bytes.bytes;
            if (hasPayloads) {
                surface.length = input.readShort();
                surface.offset = input.getPosition();
            } else {
                surface.offset = input.getPosition();
-               surface.length = scratch.length() - surface.offset;
+               surface.length = bytes.length - surface.offset;
            }

            if (previousAnalyzed == null) {

@@ -679,11 +680,11 @@ public long ramBytesUsed() {
                builder.add(scratchInts.get(), outputs.newPair(cost, BytesRef.deepCopyOf(surface)));
            } else {
                int payloadOffset = input.getPosition() + surface.length;
-               int payloadLength = scratch.length() - payloadOffset;
+               int payloadLength = bytes.length - payloadOffset;
                BytesRef br = new BytesRef(surface.length + 1 + payloadLength);
                System.arraycopy(surface.bytes, surface.offset, br.bytes, 0, surface.length);
                br.bytes[surface.length] = (byte) payloadSep;
-               System.arraycopy(scratch.bytes(), payloadOffset, br.bytes, surface.length+1, payloadLength);
+               System.arraycopy(bytes.bytes, payloadOffset, br.bytes, surface.length+1, payloadLength);
                br.length = br.bytes.length;
                builder.add(scratchInts.get(), outputs.newPair(cost, br));
            }

@@ -32,7 +32,7 @@ public class ResourceNotFoundException extends ElasticsearchException {
        super(msg, args);
    }

-   protected ResourceNotFoundException(String msg, Throwable cause, Object... args) {
+   public ResourceNotFoundException(String msg, Throwable cause, Object... args) {
        super(msg, cause, args);
    }

@@ -76,9 +76,9 @@ public class Version {
    public static final Version V_5_0_0_alpha2 = new Version(V_5_0_0_alpha2_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
    public static final int V_5_0_0_alpha3_ID = 5000003;
    public static final Version V_5_0_0_alpha3 = new Version(V_5_0_0_alpha3_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
-   public static final int V_5_0_0_ID = 5000099;
-   public static final Version V_5_0_0 = new Version(V_5_0_0_ID, org.apache.lucene.util.Version.LUCENE_6_0_1);
-   public static final Version CURRENT = V_5_0_0;
+   public static final int V_5_0_0_alpha4_ID = 5000004;
+   public static final Version V_5_0_0_alpha4 = new Version(V_5_0_0_alpha4_ID, org.apache.lucene.util.Version.LUCENE_6_1_0);
+   public static final Version CURRENT = V_5_0_0_alpha4;

    static {
        assert CURRENT.luceneVersion.equals(org.apache.lucene.util.Version.LATEST) : "Version must be upgraded to ["

@@ -91,8 +91,8 @@ public class Version {

    public static Version fromId(int id) {
        switch (id) {
-           case V_5_0_0_ID:
-               return V_5_0_0;
+           case V_5_0_0_alpha4_ID:
+               return V_5_0_0_alpha4;
            case V_5_0_0_alpha3_ID:
                return V_5_0_0_alpha3;
            case V_5_0_0_alpha2_ID:

@@ -32,6 +32,8 @@ import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsAction;
import org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction;
import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksAction;
import org.elasticsearch.action.admin.cluster.node.tasks.cancel.TransportCancelTasksAction;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskAction;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.TransportGetTaskAction;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksAction;
import org.elasticsearch.action.admin.cluster.node.tasks.list.TransportListTasksAction;
import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryAction;

@@ -141,7 +143,7 @@ import org.elasticsearch.action.delete.TransportDeleteAction;
import org.elasticsearch.action.explain.ExplainAction;
import org.elasticsearch.action.explain.TransportExplainAction;
import org.elasticsearch.action.fieldstats.FieldStatsAction;
-import org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction;
+import org.elasticsearch.action.fieldstats.TransportFieldStatsAction;
import org.elasticsearch.action.get.GetAction;
import org.elasticsearch.action.get.MultiGetAction;
import org.elasticsearch.action.get.TransportGetAction;

@@ -264,6 +266,7 @@ public class ActionModule extends AbstractModule {
        registerAction(NodesStatsAction.INSTANCE, TransportNodesStatsAction.class);
        registerAction(NodesHotThreadsAction.INSTANCE, TransportNodesHotThreadsAction.class);
        registerAction(ListTasksAction.INSTANCE, TransportListTasksAction.class);
+       registerAction(GetTaskAction.INSTANCE, TransportGetTaskAction.class);
        registerAction(CancelTasksAction.INSTANCE, TransportCancelTasksAction.class);

        registerAction(ClusterAllocationExplainAction.INSTANCE, TransportClusterAllocationExplainAction.class);

@@ -341,7 +344,7 @@ public class ActionModule extends AbstractModule {
        registerAction(GetStoredScriptAction.INSTANCE, TransportGetStoredScriptAction.class);
        registerAction(DeleteStoredScriptAction.INSTANCE, TransportDeleteStoredScriptAction.class);

-       registerAction(FieldStatsAction.INSTANCE, TransportFieldStatsTransportAction.class);
+       registerAction(FieldStatsAction.INSTANCE, TransportFieldStatsAction.class);

        registerAction(PutPipelineAction.INSTANCE, PutPipelineTransportAction.class);
        registerAction(GetPipelineAction.INSTANCE, GetPipelineTransportAction.class);

@@ -39,6 +39,9 @@ public abstract class ActionRequest<Request extends ActionRequest<Request>> exte

    public abstract ActionRequestValidationException validate();

+   /**
+    * Should this task persist its result after it has finished?
+    */
    public boolean getShouldPersistResult() {
        return false;
    }

@@ -33,8 +33,6 @@ import org.elasticsearch.common.unit.TimeValue;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

-import static org.elasticsearch.common.unit.TimeValue.readTimeValue;
-
/**
 *
 */

@@ -160,7 +158,7 @@ public class ClusterHealthRequest extends MasterNodeReadRequest<ClusterHealthReq
                indices[i] = in.readString();
            }
        }
-       timeout = readTimeValue(in);
+       timeout = new TimeValue(in);
        if (in.readBoolean()) {
            waitForStatus = ClusterHealthStatus.fromValue(in.readByte());
        }

@@ -187,7 +187,7 @@ public class ClusterHealthResponse extends ActionResponse implements StatusToXCo
        timedOut = in.readBoolean();
        numberOfInFlightFetch = in.readInt();
        delayedUnassignedShards = in.readInt();
-       taskMaxWaitingTime = TimeValue.readTimeValue(in);
+       taskMaxWaitingTime = new TimeValue(in);
    }

    @Override

@@ -101,7 +101,7 @@ public class NodesHotThreadsRequest extends BaseNodesRequest<NodesHotThreadsRequ
        threads = in.readInt();
        ignoreIdleThreads = in.readBoolean();
        type = in.readString();
-       interval = TimeValue.readTimeValue(in);
+       interval = new TimeValue(in);
        snapshots = in.readInt();
    }

@@ -22,7 +22,7 @@ package org.elasticsearch.action.admin.cluster.node.tasks.cancel;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.TaskOperationFailure;
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
-import org.elasticsearch.action.admin.cluster.node.tasks.list.TaskInfo;
+import org.elasticsearch.tasks.TaskInfo;

import java.util.List;

@@ -22,7 +22,6 @@ package org.elasticsearch.action.admin.cluster.node.tasks.cancel;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.TaskOperationFailure;
-import org.elasticsearch.action.admin.cluster.node.tasks.list.TaskInfo;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.tasks.TransportTasksAction;
import org.elasticsearch.cluster.ClusterName;

@@ -36,6 +35,7 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.tasks.CancellableTask;
import org.elasticsearch.tasks.TaskId;
+import org.elasticsearch.tasks.TaskInfo;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.EmptyTransportResponseHandler;
import org.elasticsearch.transport.TransportChannel;

@@ -0,0 +1,46 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.action.admin.cluster.node.tasks.get;

import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;

/**
 * Action for retrieving a list of currently running tasks
 */
public class GetTaskAction extends Action<GetTaskRequest, GetTaskResponse, GetTaskRequestBuilder> {

    public static final GetTaskAction INSTANCE = new GetTaskAction();
    public static final String NAME = "cluster:monitor/task/get";

    private GetTaskAction() {
        super(NAME);
    }

    @Override
    public GetTaskResponse newResponse() {
        return new GetTaskResponse();
    }

    @Override
    public GetTaskRequestBuilder newRequestBuilder(ElasticsearchClient client) {
        return new GetTaskRequestBuilder(client, this);
    }
}

@@ -0,0 +1,119 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.action.admin.cluster.node.tasks.get;

import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.tasks.TaskId;

import java.io.IOException;

import static org.elasticsearch.action.ValidateActions.addValidationError;

/**
 * A request to get a single task
 */
public class GetTaskRequest extends ActionRequest<GetTaskRequest> {
    private TaskId taskId = TaskId.EMPTY_TASK_ID;
    private boolean waitForCompletion = false;
    private TimeValue timeout = null;

    /**
     * Get the TaskId to look up.
     */
    public TaskId getTaskId() {
        return taskId;
    }

    /**
     * Set the TaskId to look up. Required.
     */
    public GetTaskRequest setTaskId(TaskId taskId) {
        this.taskId = taskId;
        return this;
    }

    /**
     * Should this request wait for all found tasks to complete?
     */
    public boolean getWaitForCompletion() {
        return waitForCompletion;
    }

    /**
     * Should this request wait for all found tasks to complete?
     */
    public GetTaskRequest setWaitForCompletion(boolean waitForCompletion) {
        this.waitForCompletion = waitForCompletion;
        return this;
    }

    /**
     * Timeout to wait for any async actions this request must take.
     */
    public TimeValue getTimeout() {
        return timeout;
    }

    /**
     * Timeout to wait for any async actions this request must take.
     */
    public GetTaskRequest setTimeout(TimeValue timeout) {
        this.timeout = timeout;
        return this;
    }

    GetTaskRequest nodeRequest(String thisNodeId, long thisTaskId) {
        GetTaskRequest copy = new GetTaskRequest();
        copy.setParentTask(thisNodeId, thisTaskId);
        copy.setTaskId(taskId);
        copy.setTimeout(timeout);
        copy.setWaitForCompletion(waitForCompletion);
        return copy;
    }

    @Override
    public ActionRequestValidationException validate() {
        ActionRequestValidationException validationException = null;
        if (false == getTaskId().isSet()) {
            validationException = addValidationError("task id is required", validationException);
        }
        return validationException;
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);
        taskId = TaskId.readFromStream(in);
        timeout = in.readOptionalWriteable(TimeValue::new);
        waitForCompletion = in.readBoolean();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        taskId.writeTo(out);
        out.writeOptionalWriteable(timeout);
        out.writeBoolean(waitForCompletion);
    }
}

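A short usage sketch for the new request; the node id and task id values are made up for illustration:

```java
GetTaskRequest request = new GetTaskRequest()
        .setTaskId(new TaskId("node-id-123", 456))     // hypothetical task to look up
        .setWaitForCompletion(true)
        .setTimeout(TimeValue.timeValueSeconds(10));

// validate() returns null for a well-formed request; without a task id it
// would collect the "task id is required" validation error instead.
assert request.validate() == null;
```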
@@ -0,0 +1,58 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.action.admin.cluster.node.tasks.get;

import org.elasticsearch.action.ActionRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.tasks.TaskId;

/**
 * Builder for the request to retrieve a single task
 */
public class GetTaskRequestBuilder extends ActionRequestBuilder<GetTaskRequest, GetTaskResponse, GetTaskRequestBuilder> {
    public GetTaskRequestBuilder(ElasticsearchClient client, GetTaskAction action) {
        super(client, action, new GetTaskRequest());
    }

    /**
     * Set the TaskId to look up. Required.
     */
    public final GetTaskRequestBuilder setTaskId(TaskId taskId) {
        request.setTaskId(taskId);
        return this;
    }

    /**
     * Should this request wait for all found tasks to complete?
     */
    public final GetTaskRequestBuilder setWaitForCompletion(boolean waitForCompletion) {
        request.setWaitForCompletion(waitForCompletion);
        return this;
    }

    /**
     * Timeout to wait for any async actions this request must take.
     */
    public final GetTaskRequestBuilder setTimeout(TimeValue timeout) {
        request.setTimeout(timeout);
        return this;
    }
}

@@ -0,0 +1,75 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.action.admin.cluster.node.tasks.get;

import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.tasks.PersistedTaskInfo;

import java.io.IOException;

import static java.util.Objects.requireNonNull;

/**
 * Response for the get task action, carrying the single task that was looked up
 */
public class GetTaskResponse extends ActionResponse implements ToXContent {
    private PersistedTaskInfo task;

    public GetTaskResponse() {
    }

    public GetTaskResponse(PersistedTaskInfo task) {
        this.task = requireNonNull(task, "task is required");
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);
        task = in.readOptionalWriteable(PersistedTaskInfo::new);
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeOptionalWriteable(task);
    }

    /**
     * Get the actual result of the fetch.
     */
    public PersistedTaskInfo getTask() {
        return task;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        return task.innerToXContent(builder, params);
    }

    @Override
    public String toString() {
        return Strings.toString(this);
    }
}

@@ -0,0 +1,216 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.action.admin.cluster.node.tasks.get;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.HandledTransportAction;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.ParseFieldMatcher;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.tasks.TaskId;
import org.elasticsearch.tasks.PersistedTaskInfo;
import org.elasticsearch.tasks.TaskPersistenceService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.BaseTransportResponseHandler;
import org.elasticsearch.transport.TransportException;
import org.elasticsearch.transport.TransportRequestOptions;
import org.elasticsearch.transport.TransportService;

import java.io.IOException;

import static org.elasticsearch.action.admin.cluster.node.tasks.list.TransportListTasksAction.waitForCompletionTimeout;
import static org.elasticsearch.action.admin.cluster.node.tasks.list.TransportListTasksAction.waitForTaskCompletion;

/**
 * Action to get a single task. If the task isn't running then it'll try to request the status from the results index.
 *
 * The general flow is:
 * <ul>
 * <li>If this isn't being executed on the node to which the requested TaskId belongs then move to that node.
 * <li>Look up the task and return it if it exists
 * <li>If it doesn't then look up the task from the results index
 * </ul>
 */
public class TransportGetTaskAction extends HandledTransportAction<GetTaskRequest, GetTaskResponse> {
    private final ClusterService clusterService;
    private final TransportService transportService;
    private final Client client;

    @Inject
    public TransportGetTaskAction(Settings settings, ThreadPool threadPool, TransportService transportService, ActionFilters actionFilters,
            IndexNameExpressionResolver indexNameExpressionResolver, ClusterService clusterService, Client client) {
        super(settings, GetTaskAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, GetTaskRequest::new);
        this.clusterService = clusterService;
        this.transportService = transportService;
        this.client = client;
    }

    @Override
    protected void doExecute(GetTaskRequest request, ActionListener<GetTaskResponse> listener) {
        throw new UnsupportedOperationException("Task is required");
    }

    @Override
    protected void doExecute(Task thisTask, GetTaskRequest request, ActionListener<GetTaskResponse> listener) {
        if (clusterService.localNode().getId().equals(request.getTaskId().getNodeId())) {
            getRunningTaskFromNode(thisTask, request, listener);
        } else {
            runOnNodeWithTaskIfPossible(thisTask, request, listener);
        }
    }

    /**
     * Executed on the coordinating node to forward execution of the remaining work to the node that matches that requested
     * {@link TaskId#getNodeId()}. If the node isn't in the cluster then this will just proceed to
     * {@link #getFinishedTaskFromIndex(Task, GetTaskRequest, ActionListener)} on this node.
     */
    private void runOnNodeWithTaskIfPossible(Task thisTask, GetTaskRequest request, ActionListener<GetTaskResponse> listener) {
        TransportRequestOptions.Builder builder = TransportRequestOptions.builder();
        if (request.getTimeout() != null) {
            builder.withTimeout(request.getTimeout());
        }
        builder.withCompress(false);
        DiscoveryNode node = clusterService.state().nodes().get(request.getTaskId().getNodeId());
        if (node == null) {
            // Node is no longer part of the cluster! Try and look the task up from the results index.
            getFinishedTaskFromIndex(thisTask, request, listener);
            return;
        }
        GetTaskRequest nodeRequest = request.nodeRequest(clusterService.localNode().getId(), thisTask.getId());
        taskManager.registerChildTask(thisTask, node.getId());
        transportService.sendRequest(node, GetTaskAction.NAME, nodeRequest, builder.build(),
                new BaseTransportResponseHandler<GetTaskResponse>() {
                    @Override
                    public GetTaskResponse newInstance() {
                        return new GetTaskResponse();
                    }

                    @Override
                    public void handleResponse(GetTaskResponse response) {
                        listener.onResponse(response);
                    }

                    @Override
                    public void handleException(TransportException exp) {
                        listener.onFailure(exp);
                    }

                    @Override
                    public String executor() {
                        return ThreadPool.Names.SAME;
                    }
                });
    }

    /**
     * Executed on the node that should be running the task to find and return the running task. Falls back to
     * {@link #getFinishedTaskFromIndex(Task, GetTaskRequest, ActionListener)} if the task isn't still running.
     */
    void getRunningTaskFromNode(Task thisTask, GetTaskRequest request, ActionListener<GetTaskResponse> listener) {
        Task runningTask = taskManager.getTask(request.getTaskId().getId());
        if (runningTask == null) {
            getFinishedTaskFromIndex(thisTask, request, listener);
        } else {
            if (request.getWaitForCompletion()) {
                // Shift to the generic thread pool and let it wait for the task to complete so we don't block any important threads.
                threadPool.generic().execute(new AbstractRunnable() {
                    @Override
                    protected void doRun() throws Exception {
                        waitForTaskCompletion(taskManager, runningTask, waitForCompletionTimeout(request.getTimeout()));
                        // TODO look up the task's result from the .tasks index now that it is done
                        listener.onResponse(
                                new GetTaskResponse(new PersistedTaskInfo(runningTask.taskInfo(clusterService.localNode(), true))));
                    }

                    @Override
                    public void onFailure(Throwable t) {
                        listener.onFailure(t);
                    }
                });
            } else {
                listener.onResponse(new GetTaskResponse(new PersistedTaskInfo(runningTask.taskInfo(clusterService.localNode(), true))));
            }
        }
    }

    /**
     * Send a {@link GetRequest} to the results index looking for the results of the task. It'll only be found if the task's result was
     * persisted. Called on the node that once had the task if that node is part of the cluster or on the coordinating node if the node
     * wasn't part of the cluster.
     */
    void getFinishedTaskFromIndex(Task thisTask, GetTaskRequest request, ActionListener<GetTaskResponse> listener) {
        GetRequest get = new GetRequest(TaskPersistenceService.TASK_INDEX, TaskPersistenceService.TASK_TYPE,
                request.getTaskId().toString());
        get.setParentTask(clusterService.localNode().getId(), thisTask.getId());
        client.get(get, new ActionListener<GetResponse>() {
            @Override
            public void onResponse(GetResponse getResponse) {
                try {
                    onGetFinishedTaskFromIndex(getResponse, listener);
                } catch (Throwable e) {
                    listener.onFailure(e);
                }
            }

            @Override
            public void onFailure(Throwable e) {
                if (ExceptionsHelper.unwrap(e, IndexNotFoundException.class) != null) {
                    // We haven't yet created the index for the task results so it can't be found.
                    listener.onFailure(new ResourceNotFoundException("task [{}] isn't running or persisted", e, request.getTaskId()));
                } else {
                    listener.onFailure(e);
                }
            }
        });
    }

    /**
     * Called with the {@linkplain GetResponse} from loading the task from the results index. Called on the node that once had the task if
     * that node is part of the cluster or on the coordinating node if the node wasn't part of the cluster.
     */
    void onGetFinishedTaskFromIndex(GetResponse response, ActionListener<GetTaskResponse> listener) throws IOException {
        if (false == response.isExists()) {
            listener.onFailure(new ResourceNotFoundException("task [{}] isn't running or persisted", response.getId()));
        }
        if (response.isSourceEmpty()) {
            listener.onFailure(new ElasticsearchException("Stored task status for [{}] didn't contain any source!", response.getId()));
            return;
        }
        try (XContentParser parser = XContentHelper.createParser(response.getSourceAsBytesRef())) {
            PersistedTaskInfo result = PersistedTaskInfo.PARSER.apply(parser, () -> ParseFieldMatcher.STRICT);
            listener.onResponse(new GetTaskResponse(result));
        }
    }
}

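The `onFailure` handler above depends on `ExceptionsHelper.unwrap` finding an `IndexNotFoundException` anywhere in the cause chain, since the failure usually arrives wrapped in transport-layer exceptions. A minimal sketch of that pattern; the wrapper here is purely illustrative:

```java
Exception failure = new RemoteTransportException("wrapper around the real cause",
        new IndexNotFoundException(".tasks"));

// unwrap walks getCause() until it hits one of the given types, or returns null.
if (ExceptionsHelper.unwrap(failure, IndexNotFoundException.class) != null) {
    // The results index simply doesn't exist yet: report "not found",
    // not an internal error.
}
```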
@@ -23,21 +23,21 @@ import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.TaskOperationFailure;
import org.elasticsearch.action.support.tasks.BaseTasksResponse;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.tasks.TaskId;
import org.elasticsearch.tasks.TaskInfo;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

/**

@@ -47,10 +47,12 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {

    private List<TaskInfo> tasks;

    private Map<DiscoveryNode, List<TaskInfo>> nodes;
    private Map<String, List<TaskInfo>> perNodeTasks;

    private List<TaskGroup> groups;

    private DiscoveryNodes discoveryNodes;

    public ListTasksResponse() {
    }

@@ -75,28 +77,11 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
    /**
     * Returns the list of tasks by node
     */
    public Map<DiscoveryNode, List<TaskInfo>> getPerNodeTasks() {
        if (nodes != null) {
            return nodes;
    public Map<String, List<TaskInfo>> getPerNodeTasks() {
        if (perNodeTasks == null) {
            perNodeTasks = tasks.stream().collect(Collectors.groupingBy(t -> t.getTaskId().getNodeId()));
        }
        Map<DiscoveryNode, List<TaskInfo>> nodeTasks = new HashMap<>();

        Set<DiscoveryNode> nodes = new HashSet<>();
        for (TaskInfo shard : tasks) {
            nodes.add(shard.getNode());
        }

        for (DiscoveryNode node : nodes) {
            List<TaskInfo> tasks = new ArrayList<>();
            for (TaskInfo taskInfo : this.tasks) {
                if (taskInfo.getNode().equals(node)) {
                    tasks.add(taskInfo);
                }
            }
            nodeTasks.put(node, tasks);
        }
        this.nodes = nodeTasks;
        return nodeTasks;
        return perNodeTasks;
    }

    public List<TaskGroup> getTaskGroups() {

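The rewritten `getPerNodeTasks()` above replaces the two-pass hand-rolled grouping with a single `Collectors.groupingBy` pass, and keys the map by node id string so tasks from nodes that have since left the cluster still group cleanly. The stream idiom in isolation, with made-up ids:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Group a flat list by a derived key in one pass.
List<String> taskIds = Arrays.asList("nodeA:1", "nodeA:2", "nodeB:7");
Map<String, List<String>> byNode = taskIds.stream()
        .collect(Collectors.groupingBy(id -> id.substring(0, id.indexOf(':'))));
// byNode == {nodeA=[nodeA:1, nodeA:2], nodeB=[nodeB:7]}
```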
@@ -138,6 +123,14 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
        return tasks;
    }

    /**
     * Set a reference to the {@linkplain DiscoveryNodes}. Used for calling {@link #toXContent(XContentBuilder, ToXContent.Params)} with
     * {@code group_by=nodes}.
     */
    public void setDiscoveryNodes(DiscoveryNodes discoveryNodes) {
        this.discoveryNodes = discoveryNodes;
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        if (getTaskFailures() != null && getTaskFailures().size() > 0) {

@@ -161,33 +154,38 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
        }
        String groupBy = params.param("group_by", "nodes");
        if ("nodes".equals(groupBy)) {
            if (discoveryNodes == null) {
                throw new IllegalStateException("discoveryNodes must be set before calling toXContent with group_by=nodes");
            }
            builder.startObject("nodes");
            for (Map.Entry<DiscoveryNode, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {
                DiscoveryNode node = entry.getKey();
                builder.startObject(node.getId());
                builder.field("name", node.getName());
                builder.field("transport_address", node.getAddress().toString());
                builder.field("host", node.getHostName());
                builder.field("ip", node.getAddress());
            for (Map.Entry<String, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {
                DiscoveryNode node = discoveryNodes.get(entry.getKey());
                builder.startObject(entry.getKey());
                if (node != null) {
                    // If the node is no longer part of the cluster, oh well, we'll just skip its useful information.
                    builder.field("name", node.getName());
                    builder.field("transport_address", node.getAddress().toString());
                    builder.field("host", node.getHostName());
                    builder.field("ip", node.getAddress());

                builder.startArray("roles");
                for (DiscoveryNode.Role role : node.getRoles()) {
                    builder.value(role.getRoleName());
                }
                builder.endArray();

                if (!node.getAttributes().isEmpty()) {
                    builder.startObject("attributes");
                    for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {
                        builder.field(attrEntry.getKey(), attrEntry.getValue());
                    builder.startArray("roles");
                    for (DiscoveryNode.Role role : node.getRoles()) {
                        builder.value(role.getRoleName());
                    }
                    builder.endArray();

                    if (!node.getAttributes().isEmpty()) {
                        builder.startObject("attributes");
                        for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {
                            builder.field(attrEntry.getKey(), attrEntry.getValue());
                        }
                        builder.endObject();
                    }
                    builder.endObject();
                }
                builder.startObject("tasks");
                for (TaskInfo task : entry.getValue()) {
                    builder.startObject(task.getTaskId().toString());
                    builder.field(task.getTaskId().toString());
                    task.toXContent(builder, params);
                    builder.endObject();
                }
                builder.endObject();
                builder.endObject();

@@ -196,9 +194,8 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
        } else if ("parents".equals(groupBy)) {
            builder.startObject("tasks");
            for (TaskGroup group : getTaskGroups()) {
                builder.startObject(group.getTaskInfo().getTaskId().toString());
                builder.field(group.getTaskInfo().getTaskId().toString());
                group.toXContent(builder, params);
                builder.endObject();
            }
            builder.endObject();
        }

@@ -21,6 +21,7 @@ package org.elasticsearch.action.admin.cluster.node.tasks.list;

import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.tasks.TaskInfo;

import java.io.IOException;
import java.util.ArrayList;

@@ -79,16 +80,15 @@ public class TaskGroup implements ToXContent {

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        task.toXContent(builder, params);
        builder.startObject();
        task.innerToXContent(builder, params);
        if (childTasks.isEmpty() == false) {
            builder.startArray("children");
            for (TaskGroup taskGroup : childTasks) {
                builder.startObject();
                taskGroup.toXContent(builder, params);
                builder.endObject();
            }
            builder.endArray();
        }
        return builder;
        return builder.endObject();
    }
}

@@ -33,6 +33,8 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.tasks.TaskInfo;
import org.elasticsearch.tasks.TaskManager;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;

@@ -47,6 +49,26 @@ import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;
 *
 */
public class TransportListTasksAction extends TransportTasksAction<Task, ListTasksRequest, ListTasksResponse, TaskInfo> {
    public static void waitForTaskCompletion(TaskManager taskManager, Task task, long untilInNanos) {
        while (System.nanoTime() - untilInNanos < 0) {
            if (taskManager.getTask(task.getId()) == null) {
                return;
            }
            try {
                Thread.sleep(WAIT_FOR_COMPLETION_POLL.millis());
            } catch (InterruptedException e) {
                throw new ElasticsearchException("Interrupted waiting for completion of [{}]", e, task);
            }
        }
        throw new ElasticsearchTimeoutException("Timed out waiting for completion of [{}]", task);
    }

    public static long waitForCompletionTimeout(TimeValue timeout) {
        if (timeout == null) {
            timeout = DEFAULT_WAIT_FOR_COMPLETION_TIMEOUT;
        }
        return System.nanoTime() + timeout.nanos();
    }

    private static final TimeValue WAIT_FOR_COMPLETION_POLL = timeValueMillis(100);
    private static final TimeValue DEFAULT_WAIT_FOR_COMPLETION_TIMEOUT = timeValueSeconds(30);

@@ -75,35 +97,18 @@ public class TransportListTasksAction extends TransportTasksAction<Task, ListTas

    @Override
    protected void processTasks(ListTasksRequest request, Consumer<Task> operation) {
        if (false == request.getWaitForCompletion()) {
            super.processTasks(request, operation);
            return;
        }
        // If we should wait for completion then we have to intercept every found task and wait for it to leave the manager.
        TimeValue timeout = request.getTimeout();
        if (timeout == null) {
            timeout = DEFAULT_WAIT_FOR_COMPLETION_TIMEOUT;
        }
        long timeoutTime = System.nanoTime() + timeout.nanos();
        super.processTasks(request, operation.andThen((Task t) -> {
            while (System.nanoTime() - timeoutTime < 0) {
                Task task = taskManager.getTask(t.getId());
                if (task == null) {
                    return;
                }
        if (request.getWaitForCompletion()) {
            long timeoutNanos = waitForCompletionTimeout(request.getTimeout());
            operation = operation.andThen(task -> {
                if (task.getAction().startsWith(ListTasksAction.NAME)) {
                    // It doesn't make sense to wait for List Tasks and it can cause an infinite loop of the task waiting
                    // for itself of one of its child tasks
                    // for itself or one of its child tasks
                    return;
                }
                try {
                    Thread.sleep(WAIT_FOR_COMPLETION_POLL.millis());
                } catch (InterruptedException e) {
                    throw new ElasticsearchException("Interrupted waiting for completion of [{}]", e, t);
                }
            }
            throw new ElasticsearchTimeoutException("Timed out waiting for completion of [{}]", t);
        }));
                waitForTaskCompletion(taskManager, task, timeoutNanos);
            });
        }
        super.processTasks(request, operation);
    }

    @Override

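The extracted helpers make the wait logic reusable from `TransportGetTaskAction`: `waitForCompletionTimeout` turns an optional timeout into an absolute `System.nanoTime()` deadline (defaulting to 30 seconds), and `waitForTaskCompletion` polls the task manager every 100ms until the task is gone or the deadline passes. Two details worth noting, illustrated below:

```java
// A null timeout falls back to the 30 second default:
long deadline = waitForCompletionTimeout(null);
// ... which is equivalent to:
long sameDeadline = System.nanoTime() + TimeValue.timeValueSeconds(30).nanos();

// The loop condition is written as (System.nanoTime() - deadline < 0) rather
// than (System.nanoTime() < deadline): nanoTime values can wrap, so only
// differences between them are meaningful, as the System.nanoTime docs advise.
```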
@@ -141,7 +141,7 @@ public class SnapshotIndexShardStatus extends BroadcastShardResponse implements

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject(Integer.toString(getShardId()));
        builder.startObject(Integer.toString(getShardId().getId()));
        builder.field(Fields.STAGE, getStage());
        stats.toXContent(builder, params);
        if (getNodeId() != null) {

@@ -49,7 +49,7 @@ public class SnapshotIndexStatus implements Iterable<SnapshotIndexShardStatus>,
        Map<Integer, SnapshotIndexShardStatus> indexShards = new HashMap<>();
        stats = new SnapshotStats();
        for (SnapshotIndexShardStatus shard : shards) {
            indexShards.put(shard.getShardId(), shard);
            indexShards.put(shard.getShardId().getId(), shard);
            stats.add(shard.getStats());
        }
        shardsStats = new SnapshotShardsStats(shards);

@@ -544,7 +544,7 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite
            }
        }
        refreshPolicy = RefreshPolicy.readFrom(in);
        timeout = TimeValue.readTimeValue(in);
        timeout = new TimeValue(in);
    }

    @Override

@@ -304,7 +304,7 @@ public class TransportBulkAction extends HandledTransportAction<BulkRequest, Bul
        if (request instanceof IndexRequest) {
            IndexRequest indexRequest = (IndexRequest) request;
            String concreteIndex = concreteIndices.getConcreteIndex(indexRequest.index()).getName();
            ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, indexRequest.type(), indexRequest.id(), indexRequest.routing()).shardId();
            ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, indexRequest.id(), indexRequest.routing()).shardId();
            List<BulkItemRequest> list = requestsByShard.get(shardId);
            if (list == null) {
                list = new ArrayList<>();

@@ -314,7 +314,7 @@ public class TransportBulkAction extends HandledTransportAction<BulkRequest, Bul
        } else if (request instanceof DeleteRequest) {
            DeleteRequest deleteRequest = (DeleteRequest) request;
            String concreteIndex = concreteIndices.getConcreteIndex(deleteRequest.index()).getName();
            ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, deleteRequest.type(), deleteRequest.id(), deleteRequest.routing()).shardId();
            ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, deleteRequest.id(), deleteRequest.routing()).shardId();
            List<BulkItemRequest> list = requestsByShard.get(shardId);
            if (list == null) {
                list = new ArrayList<>();

@@ -324,7 +324,7 @@ public class TransportBulkAction extends HandledTransportAction<BulkRequest, Bul
        } else if (request instanceof UpdateRequest) {
            UpdateRequest updateRequest = (UpdateRequest) request;
            String concreteIndex = concreteIndices.getConcreteIndex(updateRequest.index()).getName();
            ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, updateRequest.type(), updateRequest.id(), updateRequest.routing()).shardId();
            ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, updateRequest.id(), updateRequest.routing()).shardId();
            List<BulkItemRequest> list = requestsByShard.get(shardId);
            if (list == null) {
                list = new ArrayList<>();

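All three branches repeat the same get-or-create pattern for the per-shard lists; on Java 8 the equivalent can be written in one call with `Map.computeIfAbsent`, shown here purely as an illustrative alternative, not what the commit does:

```java
// Equivalent get-or-create for the per-shard request list in one call:
List<BulkItemRequest> list =
        requestsByShard.computeIfAbsent(shardId, id -> new ArrayList<>());
```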
@@ -152,7 +152,7 @@ public class TransportExplainAction extends TransportSingleShardAction<ExplainRe
    @Override
    protected ShardIterator shards(ClusterState state, InternalRequest request) {
        return clusterService.operationRouting().getShards(
            clusterService.state(), request.concreteIndex(), request.request().type(), request.request().id(), request.request().routing(), request.request().preference()
            clusterService.state(), request.concreteIndex(), request.request().id(), request.request().routing(), request.request().preference()
        );
    }
}

@@ -44,6 +44,7 @@ public class FieldStatsRequest extends BroadcastRequest<FieldStatsRequest> {
    private String[] fields = Strings.EMPTY_ARRAY;
    private String level = DEFAULT_LEVEL;
    private IndexConstraint[] indexConstraints = new IndexConstraint[0];
    private boolean useCache = true;

    public String[] getFields() {
        return fields;

@@ -56,6 +57,14 @@ public class FieldStatsRequest extends BroadcastRequest<FieldStatsRequest> {
        this.fields = fields;
    }

    public void setUseCache(boolean useCache) {
        this.useCache = useCache;
    }

    public boolean shouldUseCache() {
        return useCache;
    }

    public IndexConstraint[] getIndexConstraints() {
        return indexConstraints;
    }

@@ -184,6 +193,7 @@ public class FieldStatsRequest extends BroadcastRequest<FieldStatsRequest> {
            indexConstraints[i] = new IndexConstraint(in);
        }
        level = in.readString();
        useCache = in.readBoolean();
    }

    @Override

@@ -201,6 +211,7 @@ public class FieldStatsRequest extends BroadcastRequest<FieldStatsRequest> {
            }
        }
        out.writeString(level);
        out.writeBoolean(useCache);
    }

}

@@ -45,4 +45,9 @@ public class FieldStatsRequestBuilder extends
        request().level(level);
        return this;
    }

    public FieldStatsRequestBuilder setUseCache(boolean useCache) {
        request().setUseCache(useCache);
        return this;
    }
}

@@ -34,6 +34,7 @@ import java.util.Set;
public class FieldStatsShardRequest extends BroadcastShardRequest {

    private String[] fields;
    private boolean useCache;

    public FieldStatsShardRequest() {
    }

@@ -46,22 +47,29 @@ public class FieldStatsShardRequest extends BroadcastShardRequest {
            fields.add(indexConstraint.getField());
        }
        this.fields = fields.toArray(new String[fields.size()]);
        useCache = request.shouldUseCache();
    }

    public String[] getFields() {
        return fields;
    }

    public boolean shouldUseCache() {
        return useCache;
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);
        fields = in.readStringArray();
        useCache = in.readBoolean();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeStringArrayNullable(fields);
        out.writeBoolean(useCache);
    }

}

@@ -46,7 +46,6 @@ public class FieldStatsShardResponse extends BroadcastShardResponse {
        return fieldStats;
    }


    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);

@@ -33,7 +33,6 @@ import org.elasticsearch.cluster.routing.GroupShardsIterator;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.engine.Engine;

@@ -45,27 +44,23 @@ import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;

import java.io.IOException;

import java.util.Map;
import java.util.HashMap;
import java.util.List;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.Set;
import java.util.HashSet;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class TransportFieldStatsTransportAction extends
public class TransportFieldStatsAction extends
        TransportBroadcastAction<FieldStatsRequest, FieldStatsResponse, FieldStatsShardRequest, FieldStatsShardResponse> {

    private final IndicesService indicesService;

    @Inject
    public TransportFieldStatsTransportAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,
    public TransportFieldStatsAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,
            TransportService transportService, ActionFilters actionFilters,
            IndexNameExpressionResolver indexNameExpressionResolver,
            IndicesService indicesService) {

@@ -192,29 +187,20 @@ public class TransportFieldStatsTransportAction extends
        ShardId shardId = request.shardId();
        Map<String, FieldStats<?>> fieldStats = new HashMap<>();
        IndexService indexServices = indicesService.indexServiceSafe(shardId.getIndex());
        MapperService mapperService = indexServices.mapperService();
        IndexShard shard = indexServices.getShard(shardId.id());
        try (Engine.Searcher searcher = shard.acquireSearcher("fieldstats")) {
            // Resolve patterns and deduplicate
            Set<String> fieldNames = new HashSet<>();
            for (String field : request.getFields()) {
                Collection<String> matchFields;
                if (Regex.isSimpleMatchPattern(field)) {
                    matchFields = mapperService.simpleMatchToIndexNames(field);
                } else {
                    matchFields = Collections.singleton(field);
                }
                for (String matchField : matchFields) {
                    MappedFieldType fieldType = mapperService.fullName(matchField);
                    if (fieldType == null) {
                        // ignore.
                        continue;
                    }
                    FieldStats<?> stats = fieldType.stats(searcher.reader());
                    if (stats != null) {
                        fieldStats.put(matchField, stats);
                    }
                fieldNames.addAll(shard.mapperService().simpleMatchToIndexNames(field));
            }
            for (String field : fieldNames) {
                FieldStats<?> stats = indicesService.getFieldStats(shard, searcher, field, request.shouldUseCache());
                if (stats != null) {
                    fieldStats.put(field, stats);
                }
            }
        } catch (IOException e) {
        } catch (Exception e) {
            throw ExceptionsHelper.convertToElastic(e);
        }
        return new FieldStatsShardResponse(shardId, fieldStats);

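With `useCache` threaded from `FieldStatsRequest` down to the shard request, per-field stats can now be answered from an indices-level cache, and callers can opt out when they want freshly computed values. A usage sketch, assuming a connected `client` with the field stats builder available:

```java
FieldStatsResponse response = client.prepareFieldStats()
        .setFields("timestamp")
        .setUseCache(false)   // bypass the cache and recompute the stats
        .get();
```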
@@ -62,7 +62,7 @@ public class TransportGetAction extends TransportSingleShardAction<GetRequest, G
    @Override
    protected ShardIterator shards(ClusterState state, InternalRequest request) {
        return clusterService.operationRouting()
                .getShards(clusterService.state(), request.concreteIndex(), request.request().type(), request.request().id(), request.request().routing(), request.request().preference());
                .getShards(clusterService.state(), request.concreteIndex(), request.request().id(), request.request().routing(), request.request().preference());
    }

    @Override

@@ -76,7 +76,7 @@ public class TransportMultiGetAction extends HandledTransportAction<MultiGetRequ
            continue;
        }
        ShardId shardId = clusterService.operationRouting()
                .getShards(clusterState, concreteSingleIndex, item.type(), item.id(), item.routing(), null).shardId();
                .getShards(clusterState, concreteSingleIndex, item.id(), item.routing(), null).shardId();
        MultiGetShardRequest shardRequest = shardRequests.get(shardId);
        if (shardRequest == null) {
            shardRequest = new MultiGetShardRequest(request, shardId.getIndexName(), shardId.id());

@@ -633,9 +633,8 @@ public class IndexRequest extends ReplicatedWriteRequest<IndexRequest> implement
        routing = in.readOptionalString();
        parent = in.readOptionalString();
        timestamp = in.readOptionalString();
        ttl = in.readBoolean() ? TimeValue.readTimeValue(in) : null;
        ttl = in.readOptionalWriteable(TimeValue::new);
        source = in.readBytesReference();

        opType = OpType.fromId(in.readByte());
        version = in.readLong();
        versionType = VersionType.fromValue(in.readByte());

@@ -650,12 +649,7 @@ public class IndexRequest extends ReplicatedWriteRequest<IndexRequest> implement
        out.writeOptionalString(routing);
        out.writeOptionalString(parent);
        out.writeOptionalString(timestamp);
        if (ttl == null) {
            out.writeBoolean(false);
        } else {
            out.writeBoolean(true);
            ttl.writeTo(out);
        }
        out.writeOptionalWriteable(ttl);
        out.writeBytesReference(source);
        out.writeByte(opType.id());
        out.writeLong(version);

@@ -38,6 +38,7 @@ import static org.elasticsearch.action.ValidateActions.addValidationError;
 */
public class MultiSearchRequest extends ActionRequest<MultiSearchRequest> implements CompositeIndicesRequest {

    private int maxConcurrentSearchRequests = 0;
    private List<SearchRequest> requests = new ArrayList<>();

    private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpenAndForbidClosed();

@@ -60,6 +61,25 @@ public class MultiSearchRequest extends ActionRequest<MultiSearchRequest> implem
        return this;
    }

    /**
     * Returns how many of the search requests in this multi search request are allowed to run concurrently.
     */
    public int maxConcurrentSearchRequests() {
        return maxConcurrentSearchRequests;
    }

    /**
     * Sets how many of the search requests in this multi search request are allowed to run concurrently.
     */
    public MultiSearchRequest maxConcurrentSearchRequests(int maxConcurrentSearchRequests) {
        if (maxConcurrentSearchRequests < 1) {
            throw new IllegalArgumentException("maxConcurrentSearchRequests must be positive");
        }

        this.maxConcurrentSearchRequests = maxConcurrentSearchRequests;
        return this;
    }

    public List<SearchRequest> requests() {
        return this.requests;
    }

@@ -100,6 +120,7 @@ public class MultiSearchRequest extends ActionRequest<MultiSearchRequest> implem
    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);
        maxConcurrentSearchRequests = in.readVInt();
        int size = in.readVInt();
        for (int i = 0; i < size; i++) {
            SearchRequest request = new SearchRequest();

@@ -111,6 +132,7 @@ public class MultiSearchRequest extends ActionRequest<MultiSearchRequest> implem
    @Override
    public void writeTo(StreamOutput out) throws IOException {
        super.writeTo(out);
        out.writeVInt(maxConcurrentSearchRequests);
        out.writeVInt(requests.size());
        for (SearchRequest request : requests) {
            request.writeTo(out);

@@ -71,4 +71,12 @@ public class MultiSearchRequestBuilder extends ActionRequestBuilder<MultiSearchR
        request().indicesOptions(indicesOptions);
        return this;
    }

    /**
     * Sets how many of the search requests in this multi search request are allowed to run concurrently.
     */
    public MultiSearchRequestBuilder setMaxConcurrentSearchRequests(int maxConcurrentSearchRequests) {
        request().maxConcurrentSearchRequests(maxConcurrentSearchRequests);
        return this;
    }
}

@@ -36,7 +36,6 @@ import org.elasticsearch.search.profile.ProfileShardResult;
import org.elasticsearch.search.suggest.Suggest;

import java.io.IOException;
import java.util.List;
import java.util.Map;

import static org.elasticsearch.action.search.ShardSearchFailure.readShardSearchFailure;

@@ -169,7 +168,7 @@ public class SearchResponse extends ActionResponse implements StatusToXContent {
     *
     * @return The profile results or an empty map
     */
    public @Nullable Map<String, List<ProfileShardResult>> getProfileResults() {
    public @Nullable Map<String, ProfileShardResult> getProfileResults() {
        return internalResponse.profile();
    }

@ -22,6 +22,7 @@ package org.elasticsearch.action.search;
|
|||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.HandledTransportAction;
|
||||
import org.elasticsearch.action.support.TransportAction;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockLevel;
|
||||
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
||||
|
@ -29,57 +30,118 @@ import org.elasticsearch.cluster.service.ClusterService;
|
|||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.AtomicArray;
|
||||
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
|
||||
import org.elasticsearch.common.util.concurrent.EsExecutors;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
|
||||
import java.util.Queue;
|
||||
import java.util.concurrent.ConcurrentLinkedQueue;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
|
||||
/**
|
||||
*/
|
||||
public class TransportMultiSearchAction extends HandledTransportAction<MultiSearchRequest, MultiSearchResponse> {
|
||||
|
||||
private final int availableProcessors;
|
||||
private final ClusterService clusterService;
|
||||
private final TransportSearchAction searchAction;
|
||||
private final TransportAction<SearchRequest, SearchResponse> searchAction;
|
||||
|
||||
@Inject
|
||||
public TransportMultiSearchAction(Settings settings, ThreadPool threadPool, TransportService transportService,
|
||||
ClusterService clusterService, TransportSearchAction searchAction,
|
||||
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
|
||||
ClusterService clusterService, TransportSearchAction searchAction,
|
||||
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
|
||||
super(settings, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new);
|
||||
this.clusterService = clusterService;
|
||||
this.searchAction = searchAction;
|
||||
this.availableProcessors = EsExecutors.boundedNumberOfProcessors(settings);
|
||||
}
|
||||
|
||||
// For testing only:
|
||||
TransportMultiSearchAction(ThreadPool threadPool, ActionFilters actionFilters, TransportService transportService,
|
||||
ClusterService clusterService, TransportAction<SearchRequest, SearchResponse> searchAction,
|
||||
IndexNameExpressionResolver indexNameExpressionResolver, int availableProcessors) {
|
||||
super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new);
|
||||
this.clusterService = clusterService;
|
||||
this.searchAction = searchAction;
|
||||
this.availableProcessors = availableProcessors;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void doExecute(final MultiSearchRequest request, final ActionListener<MultiSearchResponse> listener) {
|
||||
protected void doExecute(MultiSearchRequest request, ActionListener<MultiSearchResponse> listener) {
|
||||
ClusterState clusterState = clusterService.state();
|
||||
clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ);
|
||||
|
||||
final AtomicArray<MultiSearchResponse.Item> responses = new AtomicArray<>(request.requests().size());
|
||||
final AtomicInteger counter = new AtomicInteger(responses.length());
|
||||
for (int i = 0; i < responses.length(); i++) {
|
||||
final int index = i;
|
||||
searchAction.execute(request.requests().get(i), new ActionListener<SearchResponse>() {
|
||||
@Override
|
||||
public void onResponse(SearchResponse searchResponse) {
|
||||
responses.set(index, new MultiSearchResponse.Item(searchResponse, null));
|
||||
if (counter.decrementAndGet() == 0) {
|
||||
finishHim();
|
||||
}
|
||||
}
|
||||
+        int maxConcurrentSearches = request.maxConcurrentSearchRequests();
+        if (maxConcurrentSearches == 0) {
+            maxConcurrentSearches = defaultMaxConcurrentSearches(availableProcessors, clusterState);
+        }
+
-            @Override
-            public void onFailure(Throwable e) {
-                responses.set(index, new MultiSearchResponse.Item(null, e));
-                if (counter.decrementAndGet() == 0) {
-                    finishHim();
-                }
-            }
+        Queue<SearchRequestSlot> searchRequestSlots = new ConcurrentLinkedQueue<>();
+        for (int i = 0; i < request.requests().size(); i++) {
+            SearchRequest searchRequest = request.requests().get(i);
+            searchRequestSlots.add(new SearchRequestSlot(searchRequest, i));
+        }
+
-            private void finishHim() {
+        int numRequests = request.requests().size();
+        final AtomicArray<MultiSearchResponse.Item> responses = new AtomicArray<>(numRequests);
+        final AtomicInteger responseCounter = new AtomicInteger(numRequests);
+        int numConcurrentSearches = Math.min(numRequests, maxConcurrentSearches);
+        for (int i = 0; i < numConcurrentSearches; i++) {
+            executeSearch(searchRequestSlots, responses, responseCounter, listener);
+        }
     }
 
+    /*
+     * This is not perfect and makes a big assumption, namely that all nodes have the same thread pool size /
+     * number of processors, and that the shards of the indices the search requests go to are more or less
+     * evenly distributed across all nodes in the cluster. But it is a good enough default for most cases;
+     * if not, the default should be overwritten in the request itself.
+     */
+    static int defaultMaxConcurrentSearches(int availableProcessors, ClusterState state) {
+        int numDataNodes = state.getNodes().getDataNodes().size();
+        // availableProcessors will never be larger than 32, so max defaultMaxConcurrentSearches will never
+        // be larger than 49, but we don't know about other search requests that are being executed,
+        // so let's cap it at 10 per node
+        int defaultSearchThreadPoolSize = Math.min(ThreadPool.searchThreadPoolSize(availableProcessors), 10);
+        return Math.max(1, numDataNodes * defaultSearchThreadPoolSize);
+    }
+
+    void executeSearch(Queue<SearchRequestSlot> requests, AtomicArray<MultiSearchResponse.Item> responses,
+                       AtomicInteger responseCounter, ActionListener<MultiSearchResponse> listener) {
+        SearchRequestSlot request = requests.poll();
+        if (request == null) {
+            // No more requests to start; we're just waiting for the running requests to complete.
+            return;
+        }
+        searchAction.execute(request.request, new ActionListener<SearchResponse>() {
+            @Override
+            public void onResponse(SearchResponse searchResponse) {
+                responses.set(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null));
+                handleResponse();
+            }
+
+            @Override
+            public void onFailure(Throwable e) {
+                responses.set(request.responseSlot, new MultiSearchResponse.Item(null, e));
+                handleResponse();
+            }
+
+            private void handleResponse() {
+                if (responseCounter.decrementAndGet() == 0) {
+                    listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()])));
+                } else {
+                    executeSearch(requests, responses, responseCounter, listener);
+                }
+            }
+        });
+    }
+
+    static final class SearchRequestSlot {
+
+        final SearchRequest request;
+        final int responseSlot;
+
+        SearchRequestSlot(SearchRequest request, int responseSlot) {
+            this.request = request;
+            this.responseSlot = responseSlot;
+        }
+    }
 }
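The change above throttles a multi search by self-scheduling: only `min(numRequests, maxConcurrentSearches)` searches start immediately, and each completed search polls the shared queue for its next slot. A minimal standalone sketch of the same pattern using plain JDK types (all names here are illustrative, not part of the change):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedFanOut {
    public static void main(String[] args) throws InterruptedException {
        int numRequests = 20;
        int maxConcurrent = 4;

        Queue<Integer> slots = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < numRequests; i++) {
            slots.add(i);
        }

        String[] responses = new String[numRequests];
        CountDownLatch done = new CountDownLatch(numRequests);
        ExecutorService pool = Executors.newCachedThreadPool();

        // Start only maxConcurrent "searches"; each completion schedules the next one.
        for (int i = 0; i < Math.min(numRequests, maxConcurrent); i++) {
            execute(slots, responses, done, pool);
        }

        done.await();
        pool.shutdown();
        System.out.println(String.join(", ", responses));
    }

    static void execute(Queue<Integer> slots, String[] responses, CountDownLatch done, ExecutorService pool) {
        Integer slot = slots.poll();
        if (slot == null) {
            return; // nothing left to start; in-flight work finishes on its own
        }
        pool.execute(() -> {
            responses[slot] = "result-" + slot; // stand-in for a search response
            done.countDown();
            execute(slots, responses, done, pool); // pull the next slot
        });
    }
}
```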
@@ -45,8 +45,8 @@ public abstract class BroadcastShardResponse extends TransportResponse {
         return this.shardId.getIndexName();
     }
 
-    public int getShardId() {
-        return this.shardId.id();
+    public ShardId getShardId() {
+        return this.shardId;
     }
 
     @Override
@@ -25,7 +25,6 @@ import org.elasticsearch.common.unit.TimeValue;
 
 import java.io.IOException;
 
-import static org.elasticsearch.common.unit.TimeValue.readTimeValue;
 import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;
 
 /**

@@ -75,7 +74,7 @@ public abstract class AcknowledgedRequest<Request extends MasterNodeRequest<Requ
      * Reads the timeout value
      */
     protected void readTimeout(StreamInput in) throws IOException {
-        timeout = readTimeValue(in);
+        timeout = new TimeValue(in);
     }
 
     /**

@@ -61,7 +61,7 @@ public abstract class MasterNodeRequest<Request extends MasterNodeRequest<Reques
     @Override
     public void readFrom(StreamInput in) throws IOException {
         super.readFrom(in);
-        masterNodeTimeout = TimeValue.readTimeValue(in);
+        masterNodeTimeout = new TimeValue(in);
     }
 
     @Override

@@ -82,20 +82,13 @@ public abstract class BaseNodesRequest<Request extends BaseNodesRequest<Request>
     public void readFrom(StreamInput in) throws IOException {
         super.readFrom(in);
         nodesIds = in.readStringArray();
-        if (in.readBoolean()) {
-            timeout = TimeValue.readTimeValue(in);
-        }
+        timeout = in.readOptionalWriteable(TimeValue::new);
     }
 
     @Override
     public void writeTo(StreamOutput out) throws IOException {
         super.writeTo(out);
         out.writeStringArrayNullable(nodesIds);
-        if (timeout == null) {
-            out.writeBoolean(false);
-        } else {
-            out.writeBoolean(true);
-            timeout.writeTo(out);
-        }
+        out.writeOptionalWriteable(timeout);
     }
 }
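`readOptionalWriteable`/`writeOptionalWriteable` replace the hand-rolled presence flag: a nullable value is framed as one boolean plus, if present, the value itself. A self-contained sketch of that framing with `java.io` streams (a stand-in for Elasticsearch's `StreamInput`/`StreamOutput`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class OptionalFraming {
    // writeOptionalWriteable: a presence byte, then the value only if present
    static void writeOptionalLong(DataOutputStream out, Long value) throws IOException {
        out.writeBoolean(value != null);
        if (value != null) {
            out.writeLong(value);
        }
    }

    static Long readOptionalLong(DataInputStream in) throws IOException {
        return in.readBoolean() ? in.readLong() : null;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        writeOptionalLong(out, 30_000L); // e.g. a timeout in millis
        writeOptionalLong(out, null);    // absent timeout

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(readOptionalLong(in)); // 30000
        System.out.println(readOptionalLong(in)); // null
    }
}
```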
@@ -181,7 +181,7 @@ public abstract class ReplicationRequest<Request extends ReplicationRequest<Requ
             shardId = null;
         }
         consistencyLevel = WriteConsistencyLevel.fromId(in.readByte());
-        timeout = TimeValue.readTimeValue(in);
+        timeout = new TimeValue(in);
         index = in.readString();
         routedBasedOnClusterVersion = in.readVLong();
         primaryTerm = in.readVLong();

@@ -19,6 +19,7 @@
 
 package org.elasticsearch.action.support.replication;
 
+import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.xcontent.XContentBuilder;

@@ -88,5 +89,25 @@ public class ReplicationTask extends Task {
         public void writeTo(StreamOutput out) throws IOException {
             out.writeString(phase);
         }
+
+        @Override
+        public String toString() {
+            return Strings.toString(this);
+        }
+
+        // Implements equals and hashCode for testing
+        @Override
+        public boolean equals(Object obj) {
+            if (obj == null || obj.getClass() != ReplicationTask.Status.class) {
+                return false;
+            }
+            ReplicationTask.Status other = (Status) obj;
+            return phase.equals(other.phase);
+        }
+
+        @Override
+        public int hashCode() {
+            return phase.hashCode();
+        }
     }
 }

@@ -121,7 +121,7 @@ public abstract class InstanceShardOperationRequest<Request extends InstanceShar
         } else {
             shardId = null;
         }
-        timeout = TimeValue.readTimeValue(in);
+        timeout = new TimeValue(in);
         concreteIndex = in.readOptionalString();
     }
 

@@ -144,9 +144,7 @@ public class BaseTasksRequest<Request extends BaseTasksRequest<Request>> extends
         parentTaskId = TaskId.readFromStream(in);
         nodesIds = in.readStringArray();
         actions = in.readStringArray();
-        if (in.readBoolean()) {
-            timeout = TimeValue.readTimeValue(in);
-        }
+        timeout = in.readOptionalWriteable(TimeValue::new);
     }
 
     @Override

@@ -156,7 +154,7 @@ public class BaseTasksRequest<Request extends BaseTasksRequest<Request>> extends
         parentTaskId.writeTo(out);
         out.writeStringArrayNullable(nodesIds);
         out.writeStringArrayNullable(actions);
-        out.writeOptionalStreamable(timeout);
+        out.writeOptionalWriteable(timeout);
     }
 
     public boolean match(Task task) {

@@ -22,6 +22,7 @@ import org.elasticsearch.action.Action;
 import org.elasticsearch.action.ActionRequestBuilder;
 import org.elasticsearch.client.ElasticsearchClient;
 import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.tasks.TaskId;
 
 /**
  * Builder for task-based requests

@@ -36,6 +37,15 @@ public class TasksRequestBuilder<
         super(client, action, request);
     }
 
+    /**
+     * Set the task to look up.
+     */
+    @SuppressWarnings("unchecked")
+    public final RequestBuilder setTaskId(TaskId taskId) {
+        request.setTaskId(taskId);
+        return (RequestBuilder) this;
+    }
+
     @SuppressWarnings("unchecked")
     public final RequestBuilder setNodesIds(String... nodesIds) {
         request.setNodesIds(nodesIds);

@@ -19,7 +19,6 @@
 
 package org.elasticsearch.action.termvectors;
 
 import org.elasticsearch.action.ActionListener;
-import org.elasticsearch.action.RoutingMissingException;
 import org.elasticsearch.action.support.ActionFilters;
 import org.elasticsearch.action.support.single.shard.TransportSingleShardAction;

@@ -56,7 +55,7 @@ public class TransportTermVectorsAction extends TransportSingleShardAction<TermV
 
     @Override
     protected ShardIterator shards(ClusterState state, InternalRequest request) {
-        return clusterService.operationRouting().getShards(state, request.concreteIndex(), request.request().type(), request.request().id(),
+        return clusterService.operationRouting().getShards(state, request.concreteIndex(), request.request().id(),
                 request.request().routing(), request.request().preference());
     }
 

@@ -153,7 +153,7 @@ public class TransportUpdateAction extends TransportInstanceSingleOperationActio
             return clusterState.routingTable().index(request.concreteIndex()).shard(request.getShardId().getId()).primaryShardIt();
         }
         ShardIterator shardIterator = clusterService.operationRouting()
-                .indexShards(clusterState, request.concreteIndex(), request.type(), request.id(), request.routing());
+                .indexShards(clusterState, request.concreteIndex(), request.id(), request.routing());
         ShardRouting shard;
         while ((shard = shardIterator.nextOrNull()) != null) {
             if (shard.primary()) {

@@ -33,6 +33,13 @@ public class JavaVersion implements Comparable<JavaVersion> {
     }
 
     private JavaVersion(List<Integer> version) {
+        if (version.size() >= 2
+                && version.get(0).intValue() == 1
+                && version.get(1).intValue() == 8) {
+            // for Java 8 there is ambiguity since both 1.8 and 8 are supported,
+            // so we rewrite the former to the latter
+            version = new ArrayList<>(version.subList(1, version.size()));
+        }
         this.version = Collections.unmodifiableList(version);
     }
 

@@ -75,6 +82,19 @@ public class JavaVersion implements Comparable<JavaVersion> {
         return 0;
     }
 
+    @Override
+    public boolean equals(Object o) {
+        if (o == null || o.getClass() != getClass()) {
+            return false;
+        }
+        return compareTo((JavaVersion) o) == 0;
+    }
+
+    @Override
+    public int hashCode() {
+        return version.hashCode();
+    }
+
     @Override
     public String toString() {
         return version.stream().map(v -> Integer.toString(v)).collect(Collectors.joining("."));
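The new constructor logic normalizes the two accepted spellings of Java 8 (`1.8` and `8`) to one canonical form, so that `equals`, which delegates to `compareTo`, treats them as the same version. A small self-contained sketch of the normalization (the `parse` helper here is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class VersionNormalization {
    static List<Integer> parse(String value) {
        List<Integer> version = Arrays.stream(value.split("\\."))
            .map(Integer::valueOf)
            .collect(Collectors.toList());
        // same rewrite as above: "1.8..." is the legacy spelling of "8..."
        if (version.size() >= 2 && version.get(0) == 1 && version.get(1) == 8) {
            version = new ArrayList<>(version.subList(1, version.size()));
        }
        return version;
    }

    public static void main(String[] args) {
        System.out.println(parse("1.8.0")); // [8, 0]
        System.out.println(parse("8.0"));   // [8, 0]
        System.out.println(parse("1.8.0").equals(parse("8.0"))); // true
    }
}
```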
@@ -39,6 +39,9 @@ import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequestBuilder;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksResponse;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskRequest;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskRequestBuilder;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskResponse;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequestBuilder;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;

@@ -112,6 +115,7 @@ import org.elasticsearch.action.ingest.SimulatePipelineResponse;
 import org.elasticsearch.action.ingest.WritePipelineResponse;
 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.tasks.TaskId;
 
 /**
  * Administrative actions/operations against indices.

@@ -303,6 +307,34 @@ public interface ClusterAdminClient extends ElasticsearchClient {
      */
     ListTasksRequestBuilder prepareListTasks(String... nodesIds);
 
+    /**
+     * Get a task.
+     *
+     * @param request the request
+     * @return the result future
+     * @see org.elasticsearch.client.Requests#getTaskRequest()
+     */
+    ActionFuture<GetTaskResponse> getTask(GetTaskRequest request);
+
+    /**
+     * Get a task.
+     *
+     * @param request the request
+     * @param listener A listener to be notified with the result
+     * @see org.elasticsearch.client.Requests#getTaskRequest()
+     */
+    void getTask(GetTaskRequest request, ActionListener<GetTaskResponse> listener);
+
+    /**
+     * Fetch a task by id.
+     */
+    GetTaskRequestBuilder prepareGetTask(String taskId);
+
+    /**
+     * Fetch a task by id.
+     */
+    GetTaskRequestBuilder prepareGetTask(TaskId taskId);
+
     /**
      * Cancel tasks
      *

@@ -23,6 +23,7 @@ import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest;
 import org.elasticsearch.action.admin.cluster.node.info.NodesInfoRequest;
 import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsRequest;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskRequest;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest;
 import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryRequest;
 import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequest;

@@ -406,6 +407,16 @@ public class Requests {
         return new ListTasksRequest();
     }
 
+    /**
+     * Creates a get task request.
+     *
+     * @return the get task request
+     * @see org.elasticsearch.client.ClusterAdminClient#getTask(GetTaskRequest)
+     */
+    public static GetTaskRequest getTaskRequest() {
+        return new GetTaskRequest();
+    }
+
     /**
      * Creates a nodes tasks request against one or more nodes. Pass <tt>null</tt> or an empty array for all nodes.
      *

@@ -49,6 +49,10 @@ import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksActio
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequest;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksRequestBuilder;
 import org.elasticsearch.action.admin.cluster.node.tasks.cancel.CancelTasksResponse;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskAction;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskRequest;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskRequestBuilder;
+import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskResponse;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksAction;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequest;
 import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksRequestBuilder;

@@ -109,6 +113,18 @@ import org.elasticsearch.action.admin.cluster.stats.ClusterStatsAction;
 import org.elasticsearch.action.admin.cluster.stats.ClusterStatsRequest;
 import org.elasticsearch.action.admin.cluster.stats.ClusterStatsRequestBuilder;
 import org.elasticsearch.action.admin.cluster.stats.ClusterStatsResponse;
+import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptAction;
+import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest;
+import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequestBuilder;
+import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptResponse;
+import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptAction;
+import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest;
+import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequestBuilder;
+import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse;
+import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptAction;
+import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptRequest;
+import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptRequestBuilder;
+import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptResponse;
 import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksAction;
 import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksRequest;
 import org.elasticsearch.action.admin.cluster.tasks.PendingClusterTasksRequestBuilder;

@@ -272,18 +288,6 @@ import org.elasticsearch.action.index.IndexAction;
 import org.elasticsearch.action.index.IndexRequest;
 import org.elasticsearch.action.index.IndexRequestBuilder;
 import org.elasticsearch.action.index.IndexResponse;
-import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptAction;
-import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequest;
-import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequestBuilder;
-import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptResponse;
-import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptAction;
-import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequest;
-import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptRequestBuilder;
-import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptResponse;
-import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptAction;
-import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptRequest;
-import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptRequestBuilder;
-import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptResponse;
 import org.elasticsearch.action.ingest.DeletePipelineAction;
 import org.elasticsearch.action.ingest.DeletePipelineRequest;
 import org.elasticsearch.action.ingest.DeletePipelineRequestBuilder;

@@ -339,6 +343,7 @@ import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.component.AbstractComponent;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.concurrent.ThreadContext;
+import org.elasticsearch.tasks.TaskId;
 import org.elasticsearch.threadpool.ThreadPool;
 
 import java.util.Map;

@@ -851,6 +856,25 @@ public abstract class AbstractClient extends AbstractComponent implements Client
         return new ListTasksRequestBuilder(this, ListTasksAction.INSTANCE).setNodesIds(nodesIds);
     }
 
+    @Override
+    public ActionFuture<GetTaskResponse> getTask(final GetTaskRequest request) {
+        return execute(GetTaskAction.INSTANCE, request);
+    }
+
+    @Override
+    public void getTask(final GetTaskRequest request, final ActionListener<GetTaskResponse> listener) {
+        execute(GetTaskAction.INSTANCE, request, listener);
+    }
+
+    @Override
+    public GetTaskRequestBuilder prepareGetTask(String taskId) {
+        return prepareGetTask(new TaskId(taskId));
+    }
+
+    @Override
+    public GetTaskRequestBuilder prepareGetTask(TaskId taskId) {
+        return new GetTaskRequestBuilder(this, GetTaskAction.INSTANCE).setTaskId(taskId);
+    }
+
     @Override
     public ActionFuture<CancelTasksResponse> cancelTasks(CancelTasksRequest request) {
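Usage-wise, the new client surface mirrors the list-tasks API. A sketch of fetching a task by its `nodeId:taskNumber` id through the additions above (this assumes a connected `Client` and is not runnable without a cluster):

```java
import org.elasticsearch.action.admin.cluster.node.tasks.get.GetTaskResponse;
import org.elasticsearch.client.Client;

public class GetTaskExample {
    // Fetch a task by an id such as one returned by the list tasks API.
    static GetTaskResponse fetch(Client client, String taskId) {
        return client.admin().cluster()
            .prepareGetTask(taskId) // builder added in this change
            .get();
    }
}
```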
@@ -148,18 +148,11 @@ public class ClusterChangedEvent {
      * has changed between the previous cluster state and the new cluster state.
      * Note that this is an object reference equality test, not an equals test.
      */
-    public boolean indexMetaDataChanged(IndexMetaData current) {
-        MetaData previousMetaData = previousState.metaData();
-        if (previousMetaData == null) {
-            return true;
-        }
-        IndexMetaData previousIndexMetaData = previousMetaData.index(current.getIndex());
+    public static boolean indexMetaDataChanged(IndexMetaData metaData1, IndexMetaData metaData2) {
+        assert metaData1 != null && metaData2 != null;
         // no need to check on version, since disco modules will make sure to use the
         // same instance if it's a version match
-        if (previousIndexMetaData == current) {
-            return false;
-        }
-        return true;
+        return metaData1 != metaData2;
     }
 
     /**

@@ -63,7 +63,7 @@ import org.elasticsearch.common.settings.Setting.Property;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.util.ExtensionPoint;
 import org.elasticsearch.gateway.GatewayAllocator;
-import org.elasticsearch.tasks.TaskResultsService;
+import org.elasticsearch.tasks.TaskPersistenceService;
 
 import java.util.Arrays;
 import java.util.Collections;

@@ -156,6 +156,6 @@ public class ClusterModule extends AbstractModule {
         bind(ShardStateAction.class).asEagerSingleton();
         bind(NodeMappingRefreshAction.class).asEagerSingleton();
         bind(MappingUpdatedAction.class).asEagerSingleton();
-        bind(TaskResultsService.class).asEagerSingleton();
+        bind(TaskPersistenceService.class).asEagerSingleton();
     }
 }

@@ -24,6 +24,7 @@ import org.elasticsearch.action.support.IndicesOptions;
 import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.metadata.IndexMetaData;
 import org.elasticsearch.cluster.metadata.MetaDataMappingService;
+import org.elasticsearch.cluster.node.DiscoveryNode;
 import org.elasticsearch.cluster.node.DiscoveryNodes;
 import org.elasticsearch.common.component.AbstractComponent;
 import org.elasticsearch.common.inject.Inject;

@@ -58,13 +59,12 @@ public class NodeMappingRefreshAction extends AbstractComponent {
         transportService.registerRequestHandler(ACTION_NAME, NodeMappingRefreshRequest::new, ThreadPool.Names.SAME, new NodeMappingRefreshTransportHandler());
     }
 
-    public void nodeMappingRefresh(final ClusterState state, final NodeMappingRefreshRequest request) {
-        final DiscoveryNodes nodes = state.nodes();
-        if (nodes.getMasterNode() == null) {
+    public void nodeMappingRefresh(final DiscoveryNode masterNode, final NodeMappingRefreshRequest request) {
+        if (masterNode == null) {
             logger.warn("can't send mapping refresh for [{}], no master known.", request.index());
             return;
         }
-        transportService.sendRequest(nodes.getMasterNode(), ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);
+        transportService.sendRequest(masterNode, ACTION_NAME, request, EmptyTransportResponseHandler.INSTANCE_SAME);
     }
 
     private class NodeMappingRefreshTransportHandler implements TransportRequestHandler<NodeMappingRefreshRequest> {

@@ -375,21 +375,22 @@ public class IndexShardRoutingTable implements Iterable<ShardRouting> {
         return new PlainShardIterator(shardId, ordered);
     }
 
-    public ShardIterator preferNodeActiveInitializingShardsIt(String nodeId) {
-        ArrayList<ShardRouting> ordered = new ArrayList<>(activeShards.size() + allInitializingShards.size());
+    public ShardIterator preferNodeActiveInitializingShardsIt(Set<String> nodeIds) {
+        ArrayList<ShardRouting> preferred = new ArrayList<>(activeShards.size() + allInitializingShards.size());
+        ArrayList<ShardRouting> notPreferred = new ArrayList<>(activeShards.size() + allInitializingShards.size());
         // fill it in a randomized fashion
         for (ShardRouting shardRouting : shuffler.shuffle(activeShards)) {
-            ordered.add(shardRouting);
-            if (nodeId.equals(shardRouting.currentNodeId())) {
-                // switch, it's the matching node id
-                ordered.set(ordered.size() - 1, ordered.get(0));
-                ordered.set(0, shardRouting);
+            if (nodeIds.contains(shardRouting.currentNodeId())) {
+                preferred.add(shardRouting);
+            } else {
+                notPreferred.add(shardRouting);
             }
         }
+        preferred.addAll(notPreferred);
         if (!allInitializingShards.isEmpty()) {
-            ordered.addAll(allInitializingShards);
+            preferred.addAll(allInitializingShards);
         }
-        return new PlainShardIterator(shardId, ordered);
+        return new PlainShardIterator(shardId, preferred);
     }
 
     @Override
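The rewrite replaces the old swap-to-front trick, which could only promote a single node, with a stable partition: preferred shards keep their shuffled relative order and simply move ahead of the rest. The same pattern in isolation (illustrative names):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class StablePartition {
    // Keep every element, but move members of preferredKeys to the front without
    // disturbing the relative order inside either group.
    static List<String> partition(List<String> shuffled, Set<String> preferredKeys) {
        List<String> preferred = new ArrayList<>(shuffled.size());
        List<String> notPreferred = new ArrayList<>(shuffled.size());
        for (String item : shuffled) {
            if (preferredKeys.contains(item)) {
                preferred.add(item);
            } else {
                notPreferred.add(item);
            }
        }
        preferred.addAll(notPreferred);
        return preferred;
    }

    public static void main(String[] args) {
        List<String> shardsByNode = Arrays.asList("node3", "node1", "node4", "node2");
        Set<String> preferredNodes = new HashSet<>(Arrays.asList("node1", "node2"));
        System.out.println(partition(shardsByNode, preferredNodes));
        // [node1, node2, node3, node4]
    }
}
```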
@@ -33,17 +33,15 @@ import org.elasticsearch.index.shard.ShardId;
 import org.elasticsearch.index.shard.ShardNotFoundException;
 
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
-import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
+import java.util.stream.Collectors;
 
 /**
  *
  */
 public class OperationRouting extends AbstractComponent {
 
     private final AwarenessAllocationDecider awarenessAllocationDecider;
 
     @Inject

@@ -52,11 +50,11 @@ public class OperationRouting extends AbstractComponent {
         this.awarenessAllocationDecider = awarenessAllocationDecider;
     }
 
-    public ShardIterator indexShards(ClusterState clusterState, String index, String type, String id, @Nullable String routing) {
+    public ShardIterator indexShards(ClusterState clusterState, String index, String id, @Nullable String routing) {
         return shards(clusterState, index, id, routing).shardsIt();
     }
 
-    public ShardIterator getShards(ClusterState clusterState, String index, String type, String id, @Nullable String routing, @Nullable String preference) {
+    public ShardIterator getShards(ClusterState clusterState, String index, String id, @Nullable String routing, @Nullable String preference) {
         return preferenceActiveShardIterator(shards(clusterState, index, id, routing), clusterState.nodes().getLocalNodeId(), clusterState.nodes(), preference);
     }
 

@@ -158,10 +156,14 @@ public class OperationRouting extends AbstractComponent {
         }
         preferenceType = Preference.parse(preference);
         switch (preferenceType) {
-            case PREFER_NODE:
-                return indexShard.preferNodeActiveInitializingShardsIt(preference.substring(Preference.PREFER_NODE.type().length() + 1));
+            case PREFER_NODES:
+                final Set<String> nodesIds =
+                        Arrays.stream(
+                                preference.substring(Preference.PREFER_NODES.type().length() + 1).split(",")
+                        ).collect(Collectors.toSet());
+                return indexShard.preferNodeActiveInitializingShardsIt(nodesIds);
             case LOCAL:
-                return indexShard.preferNodeActiveInitializingShardsIt(localNodeId);
+                return indexShard.preferNodeActiveInitializingShardsIt(Collections.singleton(localNodeId));
             case PRIMARY:
                 return indexShard.primaryActiveInitializingShardIt();
             case REPLICA:

@@ -30,9 +30,9 @@ public enum Preference {
     SHARDS("_shards"),
 
     /**
-     * Route to preferred node, if possible
+     * Route to preferred nodes, if possible
      */
-    PREFER_NODE("_prefer_node"),
+    PREFER_NODES("_prefer_nodes"),
 
     /**
      * Route to local node, if possible

@@ -98,8 +98,8 @@ public enum Preference {
         switch (preferenceType) {
             case "_shards":
                 return SHARDS;
-            case "_prefer_node":
-                return PREFER_NODE;
+            case "_prefer_nodes":
+                return PREFER_NODES;
             case "_only_node":
                 return ONLY_NODE;
             case "_local":

@@ -123,6 +123,7 @@ public enum Preference {
             throw new IllegalArgumentException("no Preference for [" + preferenceType + "]");
         }
     }
+
 }
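A `_prefer_nodes` preference carries a comma-separated node list after the prefix, e.g. `_prefer_nodes:node-a,node-b` in a search request's `preference` parameter. A self-contained sketch of the parsing step added to `OperationRouting`:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

public class PreferNodesParsing {
    public static void main(String[] args) {
        // e.g. a search request sent with ?preference=_prefer_nodes:node-a,node-b
        String preference = "_prefer_nodes:node-a,node-b";
        String type = "_prefer_nodes";
        Set<String> nodeIds = Arrays.stream(
            preference.substring(type.length() + 1).split(",")
        ).collect(Collectors.toSet());
        System.out.println(nodeIds); // [node-a, node-b] (unordered)
    }
}
```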
@@ -19,7 +19,7 @@ package org.elasticsearch.common.geo;
 import java.util.ArrayList;
 import java.util.Collection;
 
-import org.apache.lucene.spatial.util.GeoEncodingUtils;
+import org.apache.lucene.spatial.geopoint.document.GeoPointField;
 import org.apache.lucene.util.BitUtil;
 
 /**

@@ -39,7 +39,7 @@ public class GeoHashUtils {
 
     /** maximum precision for geohash strings */
     public static final int PRECISION = 12;
-    private static final short MORTON_OFFSET = (GeoEncodingUtils.BITS<<1) - (PRECISION*5);
+    private static final short MORTON_OFFSET = (GeoPointField.BITS<<1) - (PRECISION*5);
 
     // No instance:
     private GeoHashUtils() {

@@ -51,7 +51,7 @@ public class GeoHashUtils {
     public static final long longEncode(final double lon, final double lat, final int level) {
         // shift to appropriate level
         final short msf = (short)(((12 - level) * 5) + MORTON_OFFSET);
-        return ((BitUtil.flipFlop(GeoEncodingUtils.mortonHash(lat, lon)) >>> msf) << 4) | level;
+        return ((BitUtil.flipFlop(GeoPointField.encodeLatLon(lat, lon)) >>> msf) << 4) | level;
     }
 
     /**

@@ -117,7 +117,7 @@ public class GeoHashUtils {
      */
     public static final String stringEncode(final double lon, final double lat, final int level) {
         // convert to geohashlong
-        final long ghLong = fromMorton(GeoEncodingUtils.mortonHash(lat, lon), level);
+        final long ghLong = fromMorton(GeoPointField.encodeLatLon(lat, lon), level);
         return stringEncode(ghLong);
 
     }

@@ -138,7 +138,7 @@ public class GeoHashUtils {
 
         StringBuilder geoHash = new StringBuilder();
         short precision = 0;
-        final short msf = (GeoEncodingUtils.BITS<<1)-5;
+        final short msf = (GeoPointField.BITS<<1)-5;
         long mask = 31L<<msf;
         do {
             geoHash.append(BASE_32[(int)((mask & hashedVal)>>>(msf-(precision*5)))]);
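The hunks above only swap the Lucene encoding entry points (`GeoEncodingUtils.mortonHash` for `GeoPointField.encodeLatLon`); the geohash itself remains interleaved lat/lon bisection emitted in base 32. For reference, an independent, self-contained encoder of that scheme (not the Lucene-backed path above):

```java
public class GeohashReference {
    private static final char[] BASE_32 = "0123456789bcdefghjkmnpqrstuvwxyz".toCharArray();

    static String encode(double lon, double lat, int precision) {
        double minLat = -90, maxLat = 90, minLon = -180, maxLon = 180;
        StringBuilder hash = new StringBuilder();
        boolean evenBit = true; // geohash interleaves: even bits encode longitude
        int bit = 0, ch = 0;
        while (hash.length() < precision) {
            if (evenBit) {
                double mid = (minLon + maxLon) / 2;
                if (lon >= mid) { ch = (ch << 1) | 1; minLon = mid; }
                else            { ch = (ch << 1);     maxLon = mid; }
            } else {
                double mid = (minLat + maxLat) / 2;
                if (lat >= mid) { ch = (ch << 1) | 1; minLat = mid; }
                else            { ch = (ch << 1);     maxLat = mid; }
            }
            evenBit = !evenBit;
            if (++bit == 5) { // five bits per base-32 character
                hash.append(BASE_32[ch]);
                bit = 0;
                ch = 0;
            }
        }
        return hash.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode(-5.6, 42.6, 5)); // ezs42, the classic example
    }
}
```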
@@ -19,12 +19,11 @@
 
 package org.elasticsearch.common.geo;
 
+import org.apache.lucene.spatial.geopoint.document.GeoPointField;
 import org.apache.lucene.util.BitUtil;
 
 import static org.elasticsearch.common.geo.GeoHashUtils.mortonEncode;
 import static org.elasticsearch.common.geo.GeoHashUtils.stringEncode;
-import static org.apache.lucene.spatial.util.GeoEncodingUtils.mortonUnhashLat;
-import static org.apache.lucene.spatial.util.GeoEncodingUtils.mortonUnhashLon;
 
 /**
  *

@@ -84,14 +83,14 @@ public final class GeoPoint {
     }
 
     public GeoPoint resetFromIndexHash(long hash) {
-        lon = mortonUnhashLon(hash);
-        lat = mortonUnhashLat(hash);
+        lon = GeoPointField.decodeLongitude(hash);
+        lat = GeoPointField.decodeLatitude(hash);
         return this;
     }
 
     public GeoPoint resetFromGeoHash(String geohash) {
         final long hash = mortonEncode(geohash);
-        return this.reset(mortonUnhashLat(hash), mortonUnhashLon(hash));
+        return this.reset(GeoPointField.decodeLatitude(hash), GeoPointField.decodeLongitude(hash));
     }
 
     public GeoPoint resetFromGeoHash(long geohashLong) {

@@ -164,8 +163,4 @@ public final class GeoPoint {
     public static GeoPoint fromGeohash(long geohashLong) {
         return new GeoPoint().resetFromGeoHash(geohashLong);
     }
-
-    public static GeoPoint fromIndexLong(long indexLong) {
-        return new GeoPoint().resetFromIndexHash(indexLong);
-    }
 }

@@ -28,7 +28,6 @@ import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.common.xcontent.XContentParser.Token;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 
-import static org.apache.lucene.spatial.util.GeoDistanceUtils.maxRadialDistanceMeters;
 
 import java.io.IOException;
 

@@ -67,6 +66,9 @@ public class GeoUtils {
     /** Earth ellipsoid polar distance in meters */
     public static final double EARTH_POLAR_DISTANCE = Math.PI * EARTH_SEMI_MINOR_AXIS;
 
+    /** rounding error for quantized latitude and longitude values */
+    public static final double TOLERANCE = 1E-6;
+
     /** Returns the minimum between the provided distance 'initialRadius' and the
      * maximum distance/radius from the point 'center' before overlapping
      **/

@@ -468,6 +470,14 @@ public class GeoUtils {
         }
     }
 
+    /** Returns the maximum distance/radius (in meters) from the point 'center' before overlapping */
+    public static double maxRadialDistanceMeters(final double centerLat, final double centerLon) {
+        if (Math.abs(centerLat) == MAX_LAT) {
+            return SloppyMath.haversinMeters(centerLat, centerLon, 0, centerLon);
+        }
+        return SloppyMath.haversinMeters(centerLat, centerLon, centerLat, (MAX_LON + centerLon) % 360);
+    }
+
     private GeoUtils() {
     }
 }

@@ -26,6 +26,8 @@ import java.io.IOException;
  * across the wire" using Elasticsearch's internal protocol. If the implementer also implements equals and hashCode then a copy made by
  * serializing and deserializing must be equal and have the same hashCode. It isn't required that such a copy be entirely unchanged. For
  * example, {@link org.elasticsearch.common.unit.TimeValue} converts the time to nanoseconds for serialization.
+ * {@linkplain org.elasticsearch.common.unit.TimeValue} actually implements {@linkplain Writeable} not {@linkplain Streamable} but it has
+ * the same contract.
  *
  * Prefer implementing {@link Writeable} over implementing this interface where possible. Lots of code depends on this interface so this
  * isn't always possible.

@@ -26,8 +26,6 @@ import java.io.IOException;
  * across the wire" using Elasticsearch's internal protocol. If the implementer also implements equals and hashCode then a copy made by
  * serializing and deserializing must be equal and have the same hashCode. It isn't required that such a copy be entirely unchanged. For
  * example, {@link org.elasticsearch.common.unit.TimeValue} converts the time to nanoseconds for serialization.
- * {@linkplain org.elasticsearch.common.unit.TimeValue} actually implements {@linkplain Streamable} not {@linkplain Writeable} but it has
- * the same contract.
  *
  * Prefer implementing this interface over implementing {@link Streamable} where possible. Lots of code depends on {@linkplain Streamable}
  * so this isn't always possible.

@@ -45,6 +45,7 @@ import org.apache.lucene.util.BytesRef;
 import org.apache.lucene.util.SmallFloat;
 
 import java.io.IOException;
+import java.util.Objects;
 import java.util.Set;
 
 /**

@@ -63,6 +64,19 @@ public final class AllTermQuery extends Query {
         this.term = term;
     }
 
+    @Override
+    public boolean equals(Object obj) {
+        if (sameClassAs(obj) == false) {
+            return false;
+        }
+        return Objects.equals(term, ((AllTermQuery) obj).term);
+    }
+
+    @Override
+    public int hashCode() {
+        return 31 * classHash() + term.hashCode();
+    }
+
     @Override
     public Query rewrite(IndexReader reader) throws IOException {
         Query rewritten = super.rewrite(reader);

@@ -66,4 +66,14 @@ public class MatchNoDocsQuery extends Query {
     public String toString(String field) {
         return "MatchNoDocsQuery[\"" + reason + "\"]";
     }
+
+    @Override
+    public boolean equals(Object obj) {
+        return sameClassAs(obj);
+    }
+
+    @Override
+    public int hashCode() {
+        return classHash();
+    }
 }

@@ -84,14 +84,14 @@ public class MoreLikeThisQuery extends Query {
 
     @Override
     public int hashCode() {
-        return Objects.hash(super.hashCode(), boostTerms, boostTermsFactor, Arrays.hashCode(likeText),
+        return Objects.hash(classHash(), boostTerms, boostTermsFactor, Arrays.hashCode(likeText),
                 maxDocFreq, maxQueryTerms, maxWordLen, minDocFreq, minTermFrequency, minWordLen,
                 Arrays.hashCode(moreLikeFields), minimumShouldMatch, stopWords);
     }
 
     @Override
     public boolean equals(Object obj) {
-        if (super.equals(obj) == false) {
+        if (sameClassAs(obj) == false) {
             return false;
         }
         MoreLikeThisQuery other = (MoreLikeThisQuery) obj;

@@ -238,7 +238,7 @@ public class MultiPhrasePrefixQuery extends Query {
      */
     @Override
     public boolean equals(Object o) {
-        if (super.equals(o) == false) {
+        if (sameClassAs(o) == false) {
             return false;
         }
         MultiPhrasePrefixQuery other = (MultiPhrasePrefixQuery) o;

@@ -252,7 +252,7 @@ public class MultiPhrasePrefixQuery extends Query {
      */
     @Override
     public int hashCode() {
-        return super.hashCode()
+        return classHash()
             ^ slop
             ^ termArraysHashCode()
             ^ positions.hashCode();

@@ -355,7 +355,7 @@ public class FiltersFunctionScoreQuery extends Query {
         if (this == o) {
             return true;
         }
-        if (super.equals(o) == false) {
+        if (sameClassAs(o) == false) {
             return false;
         }
         FiltersFunctionScoreQuery other = (FiltersFunctionScoreQuery) o;

@@ -367,6 +367,6 @@ public class FiltersFunctionScoreQuery extends Query {
 
     @Override
     public int hashCode() {
-        return Objects.hash(super.hashCode(), subQuery, maxBoost, combineFunction, minScore, scoreMode, Arrays.hashCode(filterFunctions));
+        return Objects.hash(classHash(), subQuery, maxBoost, combineFunction, minScore, scoreMode, Arrays.hashCode(filterFunctions));
     }
 }

@@ -210,7 +210,7 @@ public class FunctionScoreQuery extends Query {
         if (this == o) {
             return true;
         }
-        if (super.equals(o) == false) {
+        if (sameClassAs(o) == false) {
             return false;
         }
         FunctionScoreQuery other = (FunctionScoreQuery) o;

@@ -221,6 +221,6 @@ public class FunctionScoreQuery extends Query {
 
     @Override
     public int hashCode() {
-        return Objects.hash(super.hashCode(), subQuery.hashCode(), function, combineFunction, minScore, maxBoost);
+        return Objects.hash(classHash(), subQuery.hashCode(), function, combineFunction, minScore, maxBoost);
     }
 }

@@ -49,6 +49,7 @@ import org.elasticsearch.rest.action.admin.cluster.node.hotthreads.RestNodesHotT
 import org.elasticsearch.rest.action.admin.cluster.node.info.RestNodesInfoAction;
 import org.elasticsearch.rest.action.admin.cluster.node.stats.RestNodesStatsAction;
 import org.elasticsearch.rest.action.admin.cluster.node.tasks.RestCancelTasksAction;
+import org.elasticsearch.rest.action.admin.cluster.node.tasks.RestGetTaskAction;
 import org.elasticsearch.rest.action.admin.cluster.node.tasks.RestListTasksAction;
 import org.elasticsearch.rest.action.admin.cluster.repositories.delete.RestDeleteRepositoryAction;
 import org.elasticsearch.rest.action.admin.cluster.repositories.get.RestGetRepositoriesAction;

@@ -146,6 +147,7 @@ import org.elasticsearch.rest.action.suggest.RestSuggestAction;
 import org.elasticsearch.rest.action.termvectors.RestMultiTermVectorsAction;
 import org.elasticsearch.rest.action.termvectors.RestTermVectorsAction;
 import org.elasticsearch.rest.action.update.RestUpdateAction;
+import org.elasticsearch.tasks.RawTaskStatus;
 import org.elasticsearch.tasks.Task;
 import org.elasticsearch.transport.Transport;
 import org.elasticsearch.transport.TransportService;

@@ -277,6 +279,7 @@ public class NetworkModule extends AbstractModule {
 
             // Tasks API
             RestListTasksAction.class,
+            RestGetTaskAction.class,
             RestCancelTasksAction.class,
 
             // Ingest API

@@ -339,6 +342,7 @@ public class NetworkModule extends AbstractModule {
         registerTransport(LOCAL_TRANSPORT, LocalTransport.class);
         registerTransport(NETTY_TRANSPORT, NettyTransport.class);
         registerTaskStatus(ReplicationTask.Status.NAME, ReplicationTask.Status::new);
+        registerTaskStatus(RawTaskStatus.NAME, RawTaskStatus::new);
         registerBuiltinAllocationCommands();
 
         if (transportClient == false) {

@@ -25,6 +25,7 @@ import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.unit.TimeValue;
 import org.joda.time.DateTimeField;
 import org.joda.time.DateTimeZone;
+import org.joda.time.IllegalInstantException;
 
 import java.io.IOException;
 import java.util.Objects;

@@ -218,7 +219,56 @@ public abstract class TimeZoneRounding extends Rounding {
         public long roundKey(long utcMillis) {
             long timeLocal = timeZone.convertUTCToLocal(utcMillis);
             long rounded = Rounding.Interval.roundValue(Rounding.Interval.roundKey(timeLocal, interval), interval);
-            return timeZone.convertLocalToUTC(rounded, false, utcMillis);
+            long roundedUTC;
+            if (isInDSTGap(rounded) == false) {
+                roundedUTC = timeZone.convertLocalToUTC(rounded, true, utcMillis);
+            } else {
+                /*
+                 * Edge case where the rounded local time is illegal and landed
+                 * in a DST gap. In this case, we choose 1ms tick after the
+                 * transition date. We don't want the transition date itself
+                 * because those dates, when rounded themselves, fall into the
+                 * previous interval. This would violate the invariant that the
+                 * rounding operation should be idempotent.
+                 */
+                roundedUTC = timeZone.previousTransition(utcMillis) + 1;
+            }
+            return roundedUTC;
+        }
+
+        /**
+         * Determine whether the local instant is a valid instant in the given
+         * time zone. The logic for this is taken from
+         * {@link DateTimeZone#convertLocalToUTC(long, boolean)} for the
+         * `strict` mode case, but instead of throwing an
+         * {@link IllegalInstantException}, which is costly, we want to return a
+         * flag indicating that the value is illegal in that time zone.
+         */
+        private boolean isInDSTGap(long instantLocal) {
+            if (timeZone.isFixed()) {
+                return false;
+            }
+            // get the offset at instantLocal (first estimate)
+            int offsetLocal = timeZone.getOffset(instantLocal);
+            // adjust instantLocal using the estimate and recalc the offset
+            int offset = timeZone.getOffset(instantLocal - offsetLocal);
+            // if the offsets differ, we must be near a DST boundary
+            if (offsetLocal != offset) {
+                // determine if we are in the DST gap
+                long nextLocal = timeZone.nextTransition(instantLocal - offsetLocal);
+                if (nextLocal == (instantLocal - offsetLocal)) {
+                    nextLocal = Long.MAX_VALUE;
+                }
+                long nextAdjusted = timeZone.nextTransition(instantLocal - offset);
+                if (nextAdjusted == (instantLocal - offset)) {
+                    nextAdjusted = Long.MAX_VALUE;
+                }
+                if (nextLocal != nextAdjusted) {
+                    // we are in the DST gap
+                    return true;
+                }
+            }
+            return false;
        }
 
         @Override
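A worked example of the DST gap this guards against, using Joda-Time directly (times are for Europe/Berlin; the printed values in the comments are approximate, not assertions):

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class DstGapExample {
    public static void main(String[] args) {
        DateTimeZone zone = DateTimeZone.forID("Europe/Berlin");

        // On 2016-03-27 the clocks jump from 02:00 to 03:00, so local 02:30 never
        // exists. Joda represents "local" instants as millis-as-if-UTC, the same
        // convention convertLocalToUTC uses.
        long localGap = new DateTime(2016, 3, 27, 2, 30, DateTimeZone.UTC).getMillis();

        // Lenient conversion (strict = false) resolves the gap using the offset
        // from before the transition instead of throwing IllegalInstantException.
        long utc = zone.convertLocalToUTC(localGap, false);
        System.out.println(new DateTime(utc, zone)); // lands after the gap, ~03:30+02:00

        // The rounding fix instead picks the first tick after the transition, so
        // rounding the result again is a no-op (idempotence).
        long tick = zone.previousTransition(utc) + 1;
        System.out.println(new DateTime(tick, zone)); // at/just after the 02:00 jump
    }
}
```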
@@ -181,6 +181,7 @@ public final class ClusterSettings extends AbstractScopedSettings {
                 IndexStoreConfig.INDICES_STORE_THROTTLE_MAX_BYTES_PER_SEC_SETTING,
                 IndicesQueryCache.INDICES_CACHE_QUERY_SIZE_SETTING,
                 IndicesQueryCache.INDICES_CACHE_QUERY_COUNT_SETTING,
+                IndicesQueryCache.INDICES_QUERIES_CACHE_ALL_SEGMENTS_SETTING,
                 IndicesTTLService.INDICES_TTL_INTERVAL_SETTING,
                 MappingUpdatedAction.INDICES_MAPPING_DYNAMIC_TIMEOUT_SETTING,
                 MetaData.SETTING_READ_ONLY_SETTING,

@@ -116,6 +116,7 @@ public final class IndexScopedSettings extends AbstractScopedSettings {
         IndexSettings.ALLOW_UNMAPPED,
         IndexSettings.INDEX_CHECK_ON_STARTUP,
         IndexSettings.MAX_REFRESH_LISTENERS_PER_SHARD,
+        IndexSettings.MAX_SLICES_PER_SCROLL,
         ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE_SETTING,
         IndexSettings.INDEX_GC_DELETES_SETTING,
         IndicesRequestCache.INDEX_CACHE_REQUEST_ENABLED_SETTING,

@@ -23,7 +23,7 @@ import org.elasticsearch.ElasticsearchParseException;
 import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.io.stream.Streamable;
+import org.elasticsearch.common.io.stream.Writeable;
 import org.joda.time.Period;
 import org.joda.time.PeriodType;
 import org.joda.time.format.PeriodFormat;

@@ -34,7 +34,7 @@ import java.util.Locale;
 import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
-public class TimeValue implements Streamable {
+public class TimeValue implements Writeable {
 
     /** How many nano-seconds in one milli-second */
     public static final long NSEC_PER_MSEC = 1000000;

@@ -59,13 +59,8 @@ public class TimeValue implements Streamable {
         return new TimeValue(hours, TimeUnit.HOURS);
     }
 
-    private long duration;
-
-    private TimeUnit timeUnit;
-
-    private TimeValue() {
-
-    }
+    private final long duration;
+    private final TimeUnit timeUnit;
 
     public TimeValue(long millis) {
         this(millis, TimeUnit.MILLISECONDS);

@@ -76,6 +71,19 @@ public class TimeValue implements Streamable {
         this.timeUnit = timeUnit;
     }
 
+    /**
+     * Read from a stream.
+     */
+    public TimeValue(StreamInput in) throws IOException {
+        duration = in.readZLong();
+        timeUnit = TimeUnit.NANOSECONDS;
+    }
+
+    @Override
+    public void writeTo(StreamOutput out) throws IOException {
+        out.writeZLong(nanos());
+    }
+
     public long nanos() {
         return timeUnit.toNanos(duration);
     }

@@ -304,26 +312,6 @@ public class TimeValue implements Streamable {
     static final long C5 = C4 * 60L;
     static final long C6 = C5 * 24L;
 
-    public static TimeValue readTimeValue(StreamInput in) throws IOException {
-        TimeValue timeValue = new TimeValue();
-        timeValue.readFrom(in);
-        return timeValue;
-    }
-
-    /**
-     * serialization converts TimeValue internally to NANOSECONDS
-     */
-    @Override
-    public void readFrom(StreamInput in) throws IOException {
-        duration = in.readLong();
-        timeUnit = TimeUnit.NANOSECONDS;
-    }
-
-    @Override
-    public void writeTo(StreamOutput out) throws IOException {
-        out.writeLong(nanos());
-    }
-
     @Override
     public boolean equals(Object o) {
         if (this == o) return true;
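The round trip deliberately loses the original unit: whatever the caller used, the wire carries nanoseconds and the copy deserializes as `NANOSECONDS`, which is exactly the "equal but not field-for-field identical" contract spelled out in the `Streamable`/`Writeable` javadoc hunks above. A self-contained sketch (plain `DataOutputStream#writeLong` stands in for Elasticsearch's `writeZLong`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class NanosRoundTrip {
    final long duration;
    final TimeUnit timeUnit;

    NanosRoundTrip(long duration, TimeUnit timeUnit) {
        this.duration = duration;
        this.timeUnit = timeUnit;
    }

    // Serialization normalizes to nanoseconds, like TimeValue.writeTo above.
    void writeTo(DataOutputStream out) throws IOException {
        out.writeLong(timeUnit.toNanos(duration));
    }

    static NanosRoundTrip readFrom(DataInputStream in) throws IOException {
        return new NanosRoundTrip(in.readLong(), TimeUnit.NANOSECONDS);
    }

    long nanos() {
        return timeUnit.toNanos(duration);
    }

    public static void main(String[] args) throws IOException {
        NanosRoundTrip original = new NanosRoundTrip(5, TimeUnit.SECONDS);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.writeTo(new DataOutputStream(bytes));
        NanosRoundTrip copy = readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        // Not field-for-field identical (the unit changed), but the same value.
        System.out.println(original.nanos() == copy.nanos());              // true
        System.out.println(original.timeUnit + " vs " + copy.timeUnit);    // SECONDS vs NANOSECONDS
    }
}
```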
@@ -21,7 +21,9 @@ package org.elasticsearch.common.xcontent;
 
 import org.elasticsearch.common.ParseField;
 import org.elasticsearch.common.ParseFieldMatcherSupplier;
+import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.xcontent.ObjectParser.ValueType;
+import org.elasticsearch.common.xcontent.json.JsonXContent;
 
 import java.io.IOException;
 import java.util.ArrayList;

@@ -123,6 +125,17 @@ public abstract class AbstractObjectParser<Value, Context extends ParseFieldMatc
         declareField(consumer, (p, c) -> parseArray(p, p::intValue), field, ValueType.INT_ARRAY);
     }
 
+    public void declareRawObject(BiConsumer<Value, BytesReference> consumer, ParseField field) {
+        NoContextParser<BytesReference> bytesParser = p -> {
+            try (XContentBuilder builder = JsonXContent.contentBuilder()) {
+                builder.prettyPrint();
+                builder.copyCurrentStructure(p);
+                return builder.bytes();
+            }
+        };
+        declareField(consumer, bytesParser, field, ValueType.OBJECT);
+    }
+
     private interface IOSupplier<T> {
         T get() throws IOException;
     }

@@ -328,6 +328,10 @@ public class JsonXContentGenerator implements XContentGenerator {
         if (mayWriteRawData(contentType) == false) {
             copyRawValue(content, contentType.xContent());
         } else {
+            if (generator.getOutputContext().getCurrentName() != null) {
+                // If we've just started a field we'll need to add the separator
+                generator.writeRaw(':');
+            }
             flush();
             content.writeTo(os);
             writeEndRaw();

@@ -45,6 +45,7 @@ import org.elasticsearch.transport.TransportService;
 
 import java.io.IOException;
 import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 /**

@@ -196,14 +197,15 @@ public class MasterFaultDetection extends FaultDetection {
 
     private void notifyMasterFailure(final DiscoveryNode masterNode, final Throwable cause, final String reason) {
         if (notifiedMasterFailure.compareAndSet(false, true)) {
-            threadPool.generic().execute(new Runnable() {
-                @Override
-                public void run() {
+            try {
+                threadPool.generic().execute(() -> {
                     for (Listener listener : listeners) {
                         listener.onMasterFailure(masterNode, cause, reason);
                     }
-                }
-            });
+                });
+            } catch (RejectedExecutionException e) {
+                logger.error("master failure notification was rejected, it's highly likely the node is shutting down", e);
+            }
             stop("master failure, " + reason);
         }
     }
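Wrapping the submission in a `try`/`catch` matters because `execute` on an executor throws `RejectedExecutionException` once the pool is shutting down. A minimal runnable demonstration of that failure mode and the chosen handling:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class ShutdownSafeNotify {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown(); // simulate a node that is already shutting down

        try {
            pool.execute(() -> System.out.println("notifying listeners"));
        } catch (RejectedExecutionException e) {
            // Expected during shutdown: log instead of letting the exception
            // propagate out of the fault-detection path.
            System.out.println("notification rejected, node is likely shutting down");
        }
    }
}
```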
@@ -20,6 +20,7 @@
 
 package org.elasticsearch.discovery.zen.ping.unicast;
 
+import com.carrotsearch.hppc.cursors.ObjectCursor;
 
 import org.apache.lucene.util.IOUtils;
 import org.elasticsearch.ElasticsearchException;
 import org.elasticsearch.Version;

@@ -79,7 +80,6 @@ import java.util.function.Function;
 import static java.util.Collections.emptyList;
 import static java.util.Collections.emptyMap;
 import static java.util.Collections.emptySet;
-import static org.elasticsearch.common.unit.TimeValue.readTimeValue;
 import static org.elasticsearch.common.util.concurrent.ConcurrentCollections.newConcurrentMap;
 import static org.elasticsearch.discovery.zen.ping.ZenPing.PingResponse.readPingResponse;
 

@@ -545,7 +545,7 @@ public class UnicastZenPing extends AbstractLifecycleComponent<ZenPing> implemen
         public void readFrom(StreamInput in) throws IOException {
             super.readFrom(in);
             id = in.readInt();
-            timeout = readTimeValue(in);
+            timeout = new TimeValue(in);
             pingResponse = readPingResponse(in);
         }
 

@@ -108,7 +108,11 @@ public class HttpServer extends AbstractLifecycleComponent<HttpServer> implement
         RestChannel responseChannel = channel;
         try {
             int contentLength = request.content().length();
-            inFlightRequestsBreaker(circuitBreakerService).addEstimateBytesAndMaybeBreak(contentLength, "<http_request>");
+            if (restController.canTripCircuitBreaker(request)) {
+                inFlightRequestsBreaker(circuitBreakerService).addEstimateBytesAndMaybeBreak(contentLength, "<http_request>");
+            } else {
+                inFlightRequestsBreaker(circuitBreakerService).addWithoutBreaking(contentLength);
+            }
             // iff we could reserve bytes for the request we need to send the response also over this channel
             responseChannel = new ResourceHandlingHttpChannel(channel, circuitBreakerService, contentLength);
             restController.dispatchRequest(request, responseChannel, threadContext);
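The request path now distinguishes endpoints that may trip the in-flight circuit breaker from those whose bytes are merely accounted for. A toy breaker sketch of the two calls (illustrative and far simpler than Elasticsearch's `CircuitBreaker`):

```java
import java.util.concurrent.atomic.AtomicLong;

public class InFlightBytesBreaker {
    final AtomicLong used = new AtomicLong();
    final long limit;

    InFlightBytesBreaker(long limit) {
        this.limit = limit;
    }

    void addEstimateBytesAndMaybeBreak(long bytes) {
        long now = used.addAndGet(bytes);
        if (now > limit) {
            used.addAndGet(-bytes); // roll back the reservation before breaking
            throw new IllegalStateException("in-flight requests would exceed " + limit + " bytes");
        }
    }

    void addWithoutBreaking(long bytes) {
        used.addAndGet(bytes); // still accounted for, but never trips
    }

    public static void main(String[] args) {
        InFlightBytesBreaker breaker = new InFlightBytesBreaker(1000);
        breaker.addEstimateBytesAndMaybeBreak(800); // a breaker-eligible request
        breaker.addWithoutBreaking(500);            // e.g. an exempt endpoint
        System.out.println(breaker.used.get());     // 1300: tracked but not broken
    }
}
```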
@@ -67,6 +67,7 @@ import org.elasticsearch.index.store.Store;
 import org.elasticsearch.index.translog.Translog;
 import org.elasticsearch.indices.AliasFilterParsingException;
 import org.elasticsearch.indices.InvalidAliasNameException;
+import org.elasticsearch.indices.cluster.IndicesClusterStateService;
 import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache;
 import org.elasticsearch.indices.mapper.MapperRegistry;
 import org.elasticsearch.threadpool.ThreadPool;

@@ -93,7 +94,7 @@ import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder;
 /**
  *
  */
-public final class IndexService extends AbstractIndexComponent implements IndexComponent, Iterable<IndexShard> {
+public class IndexService extends AbstractIndexComponent implements IndicesClusterStateService.AllocatedIndex<IndexShard> {
 
     private final IndexEventListener eventListener;
     private final AnalysisService analysisService;

@@ -184,8 +185,8 @@ public final class IndexService extends AbstractIndexComponent implements IndexC
     /**
      * Return the shard with the provided id, or null if there is no such shard.
     */
-    @Nullable
-    public IndexShard getShardOrNull(int shardId) {
+    @Override
+    public @Nullable IndexShard getShardOrNull(int shardId) {
         return shards.get(shardId);
     }
 

@@ -359,6 +360,7 @@ public final class IndexService extends AbstractIndexComponent implements IndexC
         return primary == false && IndexMetaData.isIndexUsingShadowReplicas(indexSettings);
     }
 
+    @Override
     public synchronized void removeShard(int shardId, String reason) {
         final ShardId sId = new ShardId(index(), shardId);
         final IndexShard indexShard;

@@ -470,6 +472,11 @@ public final class IndexService extends AbstractIndexComponent implements IndexC
         return searchOperationListeners;
     }
 
+    @Override
+    public boolean updateMapping(IndexMetaData indexMetaData) throws IOException {
+        return mapperService().updateMapping(indexMetaData);
+    }
+
     private class StoreCloseListener implements Store.OnClose {
         private final ShardId shardId;
         private final boolean ownsShard;

@@ -617,6 +624,7 @@ public final class IndexService extends AbstractIndexComponent implements IndexC
         return indexSettings.getIndexMetaData();
     }
 
+    @Override
     public synchronized void updateMetaData(final IndexMetaData metadata) {
         final Translog.Durability oldTranslogDurability = indexSettings.getTranslogDurability();
         if (indexSettings.updateIndexMetaData(metadata)) {

@@ -121,6 +121,12 @@ public final class IndexSettings {
     public static final Setting<Integer> MAX_REFRESH_LISTENERS_PER_SHARD = Setting.intSetting("index.max_refresh_listeners", 1000, 0,
         Property.Dynamic, Property.IndexScope);
 
+    /**
+     * The maximum number of slices allowed in a scroll request
+     */
+    public static final Setting<Integer> MAX_SLICES_PER_SCROLL = Setting.intSetting("index.max_slices_per_scroll",
+        1024, 1, Property.Dynamic, Property.IndexScope);
+
     private final Index index;
     private final Version version;
     private final ESLogger logger;

@@ -154,6 +160,11 @@ public final class IndexSettings {
      * The maximum number of refresh listeners allowed on this shard.
     */
     private volatile int maxRefreshListeners;
+    /**
+     * The maximum number of slices allowed in a scroll request.
+     */
+    private volatile int maxSlicesPerScroll;
 
     /**
      * Returns the default search field for this index.

@@ -239,6 +250,7 @@ public final class IndexSettings {
         maxRescoreWindow = scopedSettings.get(MAX_RESCORE_WINDOW_SETTING);
         TTLPurgeDisabled = scopedSettings.get(INDEX_TTL_DISABLE_PURGE_SETTING);
         maxRefreshListeners = scopedSettings.get(MAX_REFRESH_LISTENERS_PER_SHARD);
+        maxSlicesPerScroll = scopedSettings.get(MAX_SLICES_PER_SCROLL);
         this.mergePolicyConfig = new MergePolicyConfig(logger, this);
         assert indexNameMatcher.test(indexMetaData.getIndex().getName());
 

@@ -262,6 +274,7 @@ public final class IndexSettings {
         scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING, this::setTranslogFlushThresholdSize);
         scopedSettings.addSettingsUpdateConsumer(INDEX_REFRESH_INTERVAL_SETTING, this::setRefreshInterval);
         scopedSettings.addSettingsUpdateConsumer(MAX_REFRESH_LISTENERS_PER_SHARD, this::setMaxRefreshListeners);
+        scopedSettings.addSettingsUpdateConsumer(MAX_SLICES_PER_SCROLL, this::setMaxSlicesPerScroll);
     }
 
     private void setTranslogFlushThresholdSize(ByteSizeValue byteSizeValue) {

@@ -391,7 +404,7 @@ public final class IndexSettings {
      *
      * @return <code>true</code> iff any setting has been updated otherwise <code>false</code>.
      */
-    synchronized boolean updateIndexMetaData(IndexMetaData indexMetaData) {
+    public synchronized boolean updateIndexMetaData(IndexMetaData indexMetaData) {
         final Settings newSettings = indexMetaData.getSettings();
         if (version.equals(Version.indexCreated(newSettings)) == false) {
             throw new IllegalArgumentException("version mismatch on settings update expected: " + version + " but was: " + Version.indexCreated(newSettings));

@@ -521,5 +534,16 @@ public final class IndexSettings {
         this.maxRefreshListeners = maxRefreshListeners;
     }
 
+    /**
+     * The maximum number of slices allowed in a scroll request.
+     */
+    public int getMaxSlicesPerScroll() {
+        return maxSlicesPerScroll;
+    }
+
+    private void setMaxSlicesPerScroll(int value) {
+        this.maxSlicesPerScroll = value;
+    }
+
     IndexScopedSettings getScopedSettings() { return scopedSettings; }
 }

@@ -20,6 +20,7 @@ package org.elasticsearch.index.analysis;
 
 import org.apache.lucene.analysis.pattern.PatternReplaceCharFilter;
 import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.regex.Regex;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.env.Environment;
 import org.elasticsearch.index.IndexSettings;

@@ -35,10 +36,11 @@ public class PatternReplaceCharFilterFactory extends AbstractCharFilterFactory {
     public PatternReplaceCharFilterFactory(IndexSettings indexSettings, Environment env, String name, Settings settings) {
         super(indexSettings, name);
 
-        if (!Strings.hasLength(settings.get("pattern"))) {
+        String sPattern = settings.get("pattern");
+        if (!Strings.hasLength(sPattern)) {
             throw new IllegalArgumentException("pattern is missing for [" + name + "] char filter of type 'pattern_replace'");
         }
-        pattern = Pattern.compile(settings.get("pattern"));
+        pattern = Regex.compile(sPattern, settings.get("flags"));
         replacement = settings.get("replacement", ""); // when not set or set to "", use "".
     }
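With the `flags` setting now honored, the compiled pattern corresponds to `java.util.regex` flag constants OR-ed together; Elasticsearch's `Regex.compile` parses a pipe-separated string such as `CASE_INSENSITIVE|COMMENTS` into that int. A plain-JDK sketch of the equivalent result:

```java
import java.util.regex.Pattern;

public class PatternFlagsDemo {
    public static void main(String[] args) {
        // Rough equivalent of a "flags" setting of "CASE_INSENSITIVE|COMMENTS".
        Pattern pattern = Pattern.compile("foo\\s+bar", Pattern.CASE_INSENSITIVE | Pattern.COMMENTS);
        System.out.println(pattern.matcher("FOO BAR").find()); // true
    }
}
```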
Some files were not shown because too many files have changed in this diff.