Merge branch 'master' into pr/s3-path-style-access

# Conflicts:
#	docs/plugins/repository-s3.asciidoc
#	plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/AwsS3Service.java
#	plugins/repository-s3/src/main/java/org/elasticsearch/cloud/aws/InternalAwsS3Service.java
#	plugins/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3Repository.java
#	plugins/repository-s3/src/test/java/org/elasticsearch/cloud/aws/TestAwsS3Service.java
Author: David Pilato
Date:   2016-07-11 23:17:38 +02:00
Commit: bdebaba8f5
3826 changed files with 161685 additions and 86407 deletions

@@ -3,7 +3,11 @@ GitHub is reserved for bug reports and feature requests. The best place
to ask a general question is at the Elastic Discourse forums at
https://discuss.elastic.co. If you are in fact posting a bug report or
a feature request, please include one and only one of the below blocks
in your new issue.
in your new issue. Note that whether you're filing a bug report or a
feature request, ensure that your submission is for an
[OS that we support](https://www.elastic.co/support/matrix#show_os).
Bug reports on an OS that we do not support or feature requests
specific to an OS that we do not support will be closed.
-->
<!--

@@ -7,7 +7,7 @@ attention.
-->
- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?
- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/.github/CONTRIBUTING.md)?
- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md)?
- If submitting code, have you built your changes locally prior to submission with `gradle check`?
- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?

@@ -1,10 +0,0 @@
language: java
jdk:
- openjdk7
env:
- ES_TEST_LOCAL=true
- ES_TEST_LOCAL=false
notifications:
email: false

@@ -71,12 +71,47 @@ Once your changes and tests are ready to submit for review:
Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch.
Please adhere to the general guideline that you should never force push
to a publicly shared branch. Once you have opened your pull request, you
should consider your branch publicly shared. Instead of force pushing
you can just add incremental commits; this is generally easier on your
reviewers. If you need to pick up changes from master, you can merge
master into your branch. A reviewer might ask you to rebase a
long-running pull request in which case force pushing is okay for that
request. Note that squashing at the end of the review process should
also not be done; that can be done when the pull request is [integrated
via GitHub](https://github.com/blog/2141-squash-your-commits).
Contributing to the Elasticsearch codebase
------------------------------------------
**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)
Make sure you have [Gradle](http://gradle.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE: `gradle eclipse` then `File: Import: Existing Projects into Workspace`. Select the option `Search for nested projects`. Additionally you will want to ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini` accordingly to avoid GC overhead errors.
Make sure you have [Gradle](http://gradle.org) installed, as
Elasticsearch uses it as its build system.
Eclipse users can automatically configure their IDE: `gradle eclipse`
then `File: Import: Existing Projects into Workspace`. Select the
option `Search for nested projects`. Additionally you will want to
ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini`
accordingly to avoid GC overhead errors.
IntelliJ users can automatically configure their IDE: `gradle idea`
then `File->New Project From Existing Sources`. Point to the root of
the source directory, select
`Import project from external model->Gradle`, enable
`Use auto-import`.
The Elasticsearch codebase makes heavy use of Java `assert`s and the
test runner requires that assertions be enabled within the JVM. This
can be accomplished by passing the flag `-ea` to the JVM on startup.
For IntelliJ, go to
`Run->Edit Configurations...->Defaults->JUnit->VM options` and input
`-ea`.
For Eclipse, go to `Preferences->Java->Installed JREs` and add `-ea` to
`VM Arguments`.
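
As a hedged illustration of why the flag matters, here is a minimal, hypothetical snippet (the class is invented for this sketch and is not part of the codebase); run it with and without `-ea`:

```
// Hypothetical demo class: the `assert` only fires when the JVM runs with -ea.
public class AssertionDemo {
    static int halve(int n) {
        assert n % 2 == 0 : "expected an even number but got " + n;
        return n / 2;
    }

    public static void main(String[] args) {
        // Throws AssertionError under -ea; silently prints 1 (masking the bug) without it.
        System.out.println(halve(3));
    }
}
```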
Please follow these formatting guidelines:

@@ -50,19 +50,19 @@ h3. Indexing
Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
@@ -101,7 +101,7 @@ Just for kicks, let's get all the documents stored (we should see the user as we
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@@ -113,7 +113,7 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"range" : {
"postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
"post_date" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
}
}
}'
@@ -130,19 +130,19 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:
<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
@@ -152,11 +152,11 @@ The above will index information into the @kimchy@ index, with two types, @info@
Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
curl -XPUT http://localhost:9200/another_user?pretty -d '
{
"index" : {
"numberOfShards" : 1,
"numberOfReplicas" : 1
"number_of_shards" : 1,
"number_of_replicas" : 1
}
}'
</pre>
@@ -168,7 +168,7 @@ index (twitter user), for example:
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@@ -179,7 +179,7 @@ Or on all the indices:
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@@ -196,15 +196,15 @@ In order to play with the distributed nature of Elasticsearch, simply bring more
h3. Where to go from here?
We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website.
We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website. General questions can be asked on the "Elastic Discourse forum":https://discuss.elastic.co or on IRC on Freenode at "#elasticsearch":https://webchat.freenode.net/#elasticsearch. The Elasticsearch GitHub repository is reserved for bug reports and feature requests only.
h3. Building from Source
Elasticsearch uses "Gradle":http://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.8 should do.
Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.13 should do.
In order to create a distribution, simply run the @gradle build@ command in the cloned directory.
In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.
The distribution for each project will be created under the @target/releases@ directory in that project.
The distribution for each project will be created under the @build/distributions@ directory in that project.
See the "TESTING":TESTING.asciidoc file for more information about
running the Elasticsearch test suite.

@@ -18,24 +18,18 @@ gradle assemble
== Other test options
To disable and enable network transport, set the `Des.node.mode`.
To disable and enable network transport, set the `tests.es.node.mode` system property.
Use network transport:
------------------------------------
-Des.node.mode=network
-Dtests.es.node.mode=network
------------------------------------
Use local transport (default since 1.3):
-------------------------------------
-Des.node.mode=local
-------------------------------------
Alternatively, you can set the `ES_TEST_LOCAL` environment variable:
-------------------------------------
export ES_TEST_LOCAL=true && gradle test
-Dtests.es.node.mode=local
-------------------------------------
=== Running Elasticsearch from a checkout
@@ -201,7 +195,7 @@ gradle test -Dtests.timeoutSuite=5000! ...
Change the logging level of ES (not gradle)
--------------------------------
gradle test -Des.logger.level=DEBUG
gradle test -Dtests.es.logger.level=DEBUG
--------------------------------
Print all the logging output from the test runs to the commandline
@@ -302,7 +296,7 @@ gradle :distribution:integ-test-zip:integTest \
-Dtests.method="test {p0=cat.shards/10_basic/Help}"
---------------------------------------------------------------------------
`RestNIT` are the executable test classes that run all the
`RestIT` are the executable test classes that run all the
yaml suites available within the `rest-api-spec` folder.
The REST tests support all the options provided by the randomized runner, plus the following:

Vagrantfile
@@ -42,7 +42,7 @@ Vagrant.configure(2) do |config|
# debian and it works fine.
config.vm.define "debian-8" do |config|
config.vm.box = "elastic/debian-8-x86_64"
deb_common config, 'echo deb http://http.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/backports.list', 'backports'
deb_common config, 'echo deb http://cloudfront.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/backports.list', 'backports'
end
config.vm.define "centos-6" do |config|
config.vm.box = "elastic/centos-6-x86_64"
@@ -60,8 +60,8 @@ Vagrant.configure(2) do |config|
config.vm.box = "elastic/oraclelinux-7-x86_64"
rpm_common config
end
config.vm.define "fedora-22" do |config|
config.vm.box = "elastic/fedora-22-x86_64"
config.vm.define "fedora-24" do |config|
config.vm.box = "elastic/fedora-24-x86_64"
dnf_common config
end
config.vm.define "opensuse-13" do |config|

benchmarks/README.md (new file)
@@ -0,0 +1,62 @@
# Elasticsearch Microbenchmark Suite
This directory contains the microbenchmark suite of Elasticsearch. It relies on [JMH](http://openjdk.java.net/projects/code-tools/jmh/).
## Purpose
We do not want to microbenchmark everything but the kitchen sink and should typically rely on our
[macrobenchmarks](https://elasticsearch-benchmarks.elastic.co/app/kibana#/dashboard/Nightly-Benchmark-Overview) with
[Rally](http://github.com/elastic/rally). Microbenchmarks are intended to spot performance regressions in performance-critical components.
The microbenchmark suite is also handy for ad-hoc microbenchmarks but please remove them again before merging your PR.
## Getting Started
Just run `gradle :benchmarks:jmh` from the project root directory. It will build all microbenchmarks, execute them and print the result.
## Running Microbenchmarks
Benchmarks are always run via Gradle with `gradle :benchmarks:jmh`.
Running via an IDE is not supported as the results are meaningless (we have no control over the JVM running the benchmarks).
If you want to run a specific benchmark class, e.g. `org.elasticsearch.benchmark.MySampleBenchmark`, or have special requirements,
generate the uberjar with `gradle :benchmarks:jmhJar` and run it directly with:
```
java -jar benchmarks/build/distributions/elasticsearch-benchmarks-*.jar
```
JMH supports lots of command line parameters. Add `-h` to the command above to see the available command line options.
## Adding Microbenchmarks
Before adding a new microbenchmark, make yourself familiar with the JMH API. You can check our existing microbenchmarks and also the
[JMH samples](http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/).
In contrast to tests, the actual name of the benchmark class is not relevant to JMH. However, stick to the naming convention and
end the class name of a benchmark with `Benchmark`. To have JMH execute a benchmark, annotate the respective methods with `@Benchmark`.
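
As a minimal sketch (class and package names here are invented, not taken from the suite), a complete benchmark can be as small as:

```
package org.elasticsearch.benchmark.sample; // hypothetical package

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// The class name ends with "Benchmark" per the convention; JMH finds the
// measured method via the @Benchmark annotation, not via its name.
@State(Scope.Benchmark)
public class StringConcatBenchmark {
    private String left = "Hello, ";
    private String right = "World!";

    @Benchmark
    public String concat() {
        // Returning the result lets JMH consume it so it cannot be optimized away.
        return left + right;
    }
}
```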
## Tips and Best Practices
To get realistic results, you should exercise care when running benchmarks. Here are a few tips:
### Do
* Ensure that the system executing your microbenchmarks has as little load as possible. Shut down every process that can cause unnecessary
runtime jitter. Watch the `Error` column in the benchmark results to see the run-to-run variance.
* Ensure that you run enough warmup iterations to get the benchmark into a stable state. If you are unsure, don't change the defaults.
* Avoid CPU migrations by pinning your benchmarks to specific CPU cores. On Linux you can use `taskset`.
* Fix the CPU frequency to keep Turbo Boost from kicking in and skewing your results. On Linux you can use `cpufreq-set` and the
`performance` CPU governor.
* Vary the problem input size with `@Param` (see the sketch at the end of this section).
* Use the integrated profilers in JMH to dig deeper if benchmark results do not match your hypotheses:
* Run the generated uberjar directly and use `-prof gc` to check whether the garbage collector runs during a microbenchmark and skews
your results. If so, try to force a GC between runs (`-gc true`) but watch out for the caveats.
* Use `-prof perf` or `-prof perfasm` (both only available on Linux) to see hotspots.
* Have your benchmarks peer-reviewed.
### Don't
* Blindly believe the numbers that your microbenchmark produces. Instead, verify them by measuring, e.g. with `-prof perfasm`.
* Run more threads than your number of CPU cores (in case you run multi-threaded microbenchmarks).
* Look only at the `Score` column and ignore `Error`. Instead take countermeasures to keep `Error` low / variance explainable.
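
To make the `@Param` and dead-code points above concrete, here is a hedged sketch (all names are invented for illustration) that varies the input size and hands each result to JMH's `Blackhole`:

```
package org.elasticsearch.benchmark.sample; // hypothetical package

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class SumBenchmark {
    // JMH repeats the benchmark once per parameter value.
    @Param({"10", "1000", "100000"})
    public int size;

    private int[] values;

    @Setup
    public void setUp() {
        values = new int[size];
        for (int i = 0; i < size; i++) {
            values[i] = i;
        }
    }

    @Benchmark
    public void sum(Blackhole bh) {
        long sum = 0;
        for (int value : values) {
            sum += value;
        }
        // Consuming the sum defeats dead-code elimination by the JIT.
        bh.consume(sum);
    }
}
```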

benchmarks/build.gradle (new file)
@@ -0,0 +1,96 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
buildscript {
repositories {
maven {
url 'https://plugins.gradle.org/m2/'
}
}
dependencies {
classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
}
}
apply plugin: 'elasticsearch.build'
// build an uberjar with all benchmarks
apply plugin: 'com.github.johnrengelman.shadow'
// have the shadow plugin provide the runShadow task
apply plugin: 'application'
archivesBaseName = 'elasticsearch-benchmarks'
mainClassName = 'org.openjdk.jmh.Main'
// never try to invoke tests on the benchmark project - there aren't any
check.dependsOn.remove(test)
// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
task test(type: Test, overwrite: true)
dependencies {
compile("org.elasticsearch:elasticsearch:${version}") {
// JMH ships with the conflicting version 4.6 (JMH will not update this dependency as it is Java 6 compatible and joptsimple is one
// of the most recent compatible versions). This prevents us from using jopt-simple in benchmarks (which should be ok) but allows us
// to invoke the JMH uberjar as usual.
exclude group: 'net.sf.jopt-simple', module: 'jopt-simple'
}
compile "org.openjdk.jmh:jmh-core:$versions.jmh"
compile "org.openjdk.jmh:jmh-generator-annprocess:$versions.jmh"
// Dependencies of JMH
runtime 'net.sf.jopt-simple:jopt-simple:4.6'
runtime 'org.apache.commons:commons-math3:3.2'
}
compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
forbiddenApis {
// classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
ignoreFailures = true
}
// No licenses for our benchmark deps (we don't ship benchmarks)
dependencyLicenses.enabled = false
thirdPartyAudit.excludes = [
// these classes intentionally use JDK internal API (and this is ok since the project is maintained by Oracle employees)
'org.openjdk.jmh.profile.AbstractHotspotProfiler',
'org.openjdk.jmh.profile.HotspotThreadProfiler',
'org.openjdk.jmh.profile.HotspotClassloadingProfiler',
'org.openjdk.jmh.profile.HotspotCompilationProfiler',
'org.openjdk.jmh.profile.HotspotMemoryProfiler',
'org.openjdk.jmh.profile.HotspotRuntimeProfiler',
'org.openjdk.jmh.util.Utils'
]
shadowJar {
classifier = 'benchmarks'
}
// alias the shadowJar and runShadow tasks to abstract from the concrete plugin that we are using and provide a more consistent interface
task jmhJar(
dependsOn: shadowJar,
description: 'Generates an uberjar with the microbenchmarks and all dependencies',
group: 'Benchmark'
)
task jmh(
dependsOn: runShadow,
description: 'Runs all microbenchmarks',
group: 'Benchmark'
)

@@ -0,0 +1,171 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.benchmark.routing.allocation;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.settings.Settings;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import java.util.Collections;
import java.util.concurrent.TimeUnit;
@Fork(3)
@Warmup(iterations = 10)
@Measurement(iterations = 10)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@SuppressWarnings("unused") //invoked by benchmarking framework
public class AllocationBenchmark {
// Do NOT make any field final (even if it is not annotated with @Param)! See also
// http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_10_ConstantFold.java
// we cannot use individual @Params as some will lead to invalid combinations which do not let the benchmark terminate. JMH offers no
// support to constrain the combinations of benchmark parameters and we do not want to rely on OptionsBuilder as each benchmark would
// need its own main method and we cannot execute more than one class with a main method per JAR.
@Param({
// indices, shards, replicas, nodes
" 10, 1, 0, 1",
" 10, 3, 0, 1",
" 10, 10, 0, 1",
" 100, 1, 0, 1",
" 100, 3, 0, 1",
" 100, 10, 0, 1",
" 10, 1, 0, 10",
" 10, 3, 0, 10",
" 10, 10, 0, 10",
" 100, 1, 0, 10",
" 100, 3, 0, 10",
" 100, 10, 0, 10",
" 10, 1, 1, 10",
" 10, 3, 1, 10",
" 10, 10, 1, 10",
" 100, 1, 1, 10",
" 100, 3, 1, 10",
" 100, 10, 1, 10",
" 10, 1, 2, 10",
" 10, 3, 2, 10",
" 10, 10, 2, 10",
" 100, 1, 2, 10",
" 100, 3, 2, 10",
" 100, 10, 2, 10",
" 10, 1, 0, 50",
" 10, 3, 0, 50",
" 10, 10, 0, 50",
" 100, 1, 0, 50",
" 100, 3, 0, 50",
" 100, 10, 0, 50",
" 10, 1, 1, 50",
" 10, 3, 1, 50",
" 10, 10, 1, 50",
" 100, 1, 1, 50",
" 100, 3, 1, 50",
" 100, 10, 1, 50",
" 10, 1, 2, 50",
" 10, 3, 2, 50",
" 10, 10, 2, 50",
" 100, 1, 2, 50",
" 100, 3, 2, 50",
" 100, 10, 2, 50"
})
public String indicesShardsReplicasNodes = "10,1,0,1";
public int numTags = 2;
private AllocationService strategy;
private ClusterState initialClusterState;
@Setup
public void setUp() throws Exception {
final String[] params = indicesShardsReplicasNodes.split(",");
int numIndices = toInt(params[0]);
int numShards = toInt(params[1]);
int numReplicas = toInt(params[2]);
int numNodes = toInt(params[3]);
strategy = Allocators.createAllocationService(Settings.builder()
.put("cluster.routing.allocation.awareness.attributes", "tag")
.build());
MetaData.Builder mb = MetaData.builder();
for (int i = 1; i <= numIndices; i++) {
mb.put(IndexMetaData.builder("test_" + i)
.settings(Settings.builder().put("index.version.created", Version.CURRENT))
.numberOfShards(numShards)
.numberOfReplicas(numReplicas)
);
}
MetaData metaData = mb.build();
RoutingTable.Builder rb = RoutingTable.builder();
for (int i = 1; i <= numIndices; i++) {
rb.addAsNew(metaData.index("test_" + i));
}
RoutingTable routingTable = rb.build();
DiscoveryNodes.Builder nb = DiscoveryNodes.builder();
for (int i = 1; i <= numNodes; i++) {
nb.put(Allocators.newNode("node" + i, Collections.singletonMap("tag", "tag_" + (i % numTags))));
}
initialClusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY))
.metaData(metaData).routingTable(routingTable).nodes(nb).build();
}
private int toInt(String v) {
return Integer.valueOf(v.trim());
}
@Benchmark
public ClusterState measureAllocation() {
ClusterState clusterState = initialClusterState;
while (clusterState.getRoutingNodes().hasUnassignedShards()) {
RoutingAllocation.Result result = strategy.applyStartedShards(clusterState, clusterState.getRoutingNodes()
.shardsWithState(ShardRoutingState.INITIALIZING));
clusterState = ClusterState.builder(clusterState).routingResult(result).build();
result = strategy.reroute(clusterState, "reroute");
clusterState = ClusterState.builder(clusterState).routingResult(result).build();
}
return clusterState;
}
}

@@ -0,0 +1,108 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.benchmark.routing.allocation;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterModule;
import org.elasticsearch.cluster.EmptyClusterInfoService;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.FailedRerouteAllocation;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.StartedRerouteAllocation;
import org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.LocalTransportAddress;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.gateway.GatewayAllocator;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public final class Allocators {
private static class NoopGatewayAllocator extends GatewayAllocator {
public static final NoopGatewayAllocator INSTANCE = new NoopGatewayAllocator();
protected NoopGatewayAllocator() {
super(Settings.EMPTY, null, null);
}
@Override
public void applyStartedShards(StartedRerouteAllocation allocation) {
// noop
}
@Override
public void applyFailedShards(FailedRerouteAllocation allocation) {
// noop
}
@Override
public boolean allocateUnassigned(RoutingAllocation allocation) {
return false;
}
}
private Allocators() {
throw new AssertionError("Do not instantiate");
}
public static AllocationService createAllocationService(Settings settings) throws NoSuchMethodException, InstantiationException,
IllegalAccessException, InvocationTargetException {
return createAllocationService(settings, new ClusterSettings(Settings.Builder.EMPTY_SETTINGS, ClusterSettings
.BUILT_IN_CLUSTER_SETTINGS));
}
public static AllocationService createAllocationService(Settings settings, ClusterSettings clusterSettings) throws
InvocationTargetException, NoSuchMethodException, InstantiationException, IllegalAccessException {
return new AllocationService(settings,
defaultAllocationDeciders(settings, clusterSettings),
NoopGatewayAllocator.INSTANCE, new BalancedShardsAllocator(settings), EmptyClusterInfoService.INSTANCE);
}
public static AllocationDeciders defaultAllocationDeciders(Settings settings, ClusterSettings clusterSettings) throws
IllegalAccessException, InvocationTargetException, InstantiationException, NoSuchMethodException {
List<AllocationDecider> list = new ArrayList<>();
// Keep a deterministic order of allocation deciders for the benchmark
for (Class<? extends AllocationDecider> deciderClass : ClusterModule.DEFAULT_ALLOCATION_DECIDERS) {
try {
Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class, ClusterSettings
.class);
list.add(constructor.newInstance(settings, clusterSettings));
} catch (NoSuchMethodException e) {
Constructor<? extends AllocationDecider> constructor = deciderClass.getConstructor(Settings.class);
list.add(constructor.newInstance(settings));
}
}
return new AllocationDeciders(settings, list.toArray(new AllocationDecider[0]));
}
public static DiscoveryNode newNode(String nodeId, Map<String, String> attributes) {
return new DiscoveryNode("", nodeId, LocalTransportAddress.buildUnique(), attributes, Sets.newHashSet(DiscoveryNode.Role.MASTER,
DiscoveryNode.Role.DATA), Version.CURRENT);
}
}

@@ -0,0 +1,8 @@
# Do not log at all if it is not really critical - we're in a benchmark
benchmarks.es.logger.level=ERROR
log4j.rootLogger=${benchmarks.es.logger.level}, out
log4j.appender.out=org.apache.log4j.ConsoleAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.layout.conversionPattern=[%d{ISO8601}][%-5p][%-25c] %m%n

@@ -27,6 +27,31 @@ import org.apache.tools.ant.taskdefs.condition.Os
subprojects {
group = 'org.elasticsearch'
version = org.elasticsearch.gradle.VersionProperties.elasticsearch
description = "Elasticsearch subproject ${project.path}"
// we only use maven publish to add tasks for pom generation
plugins.withType(MavenPublishPlugin).whenPluginAdded {
publishing {
publications {
// add license information to generated poms
all {
pom.withXml { XmlProvider xml ->
Node node = xml.asNode()
node.appendNode('inceptionYear', '2009')
Node license = node.appendNode('licenses').appendNode('license')
license.appendNode('name', 'The Apache Software License, Version 2.0')
license.appendNode('url', 'http://www.apache.org/licenses/LICENSE-2.0.txt')
license.appendNode('distribution', 'repo')
Node developer = node.appendNode('developers').appendNode('developer')
developer.appendNode('name', 'Elastic')
developer.appendNode('url', 'http://www.elastic.co')
}
}
}
}
}
plugins.withType(NexusPlugin).whenPluginAdded {
modifyPom {
@@ -56,7 +81,7 @@ subprojects {
nexus {
String buildSnapshot = System.getProperty('build.snapshot', 'true')
if (buildSnapshot == 'false') {
Repository repo = new RepositoryBuilder().findGitDir(new File('.')).build()
Repository repo = new RepositoryBuilder().findGitDir(project.rootDir).build()
String shortHash = repo.resolve('HEAD')?.name?.substring(0,7)
repositoryUrl = project.hasProperty('build.repository') ? project.property('build.repository') : "file://${System.getenv('HOME')}/elasticsearch-releases/${version}-${shortHash}/"
}
@@ -119,6 +144,14 @@ subprojects {
// see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959
javadoc.options.encoding='UTF8'
javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet')
/*
TODO: building javadocs with java 9 b118 is currently broken with weird errors, so
for now this is commented out...try again with the next ea build...
javadoc.executable = new File(project.javaHome, 'bin/javadoc')
if (project.javaVersion == JavaVersion.VERSION_1_9) {
// TODO: remove this hack! gradle should be passing this...
javadoc.options.addStringOption('source', '8')
}*/
}
}
@@ -127,8 +160,12 @@ subprojects {
them as external dependencies so the build plugin that we use can be used
to build elasticsearch plugins outside of the elasticsearch source tree. */
ext.projectSubstitutions = [
"org.elasticsearch.gradle:build-tools:${version}": ':build-tools',
"org.elasticsearch:rest-api-spec:${version}": ':rest-api-spec',
"org.elasticsearch:elasticsearch:${version}": ':core',
"org.elasticsearch.client:rest:${version}": ':client:rest',
"org.elasticsearch.client:sniffer:${version}": ':client:sniffer',
"org.elasticsearch.client:test:${version}": ':client:test',
"org.elasticsearch.test:framework:${version}": ':test:framework',
"org.elasticsearch.distribution.integ-test-zip:elasticsearch:${version}": ':distribution:integ-test-zip',
"org.elasticsearch.distribution.zip:elasticsearch:${version}": ':distribution:zip',
@@ -224,7 +261,6 @@ allprojects {
idea {
project {
languageLevel = org.elasticsearch.gradle.BuildPlugin.minimumJava.toString()
vcs = 'Git'
}
}
@@ -236,13 +272,6 @@ tasks.idea.doLast {
if (System.getProperty('idea.active') != null && ideaMarker.exists() == false) {
throw new GradleException('You must run gradle idea from the root of elasticsearch before importing into IntelliJ')
}
// add buildSrc itself as a groovy project
task buildSrcIdea(type: GradleBuild) {
buildFile = 'buildSrc/build.gradle'
tasks = ['cleanIdea', 'ideaModule']
}
tasks.idea.dependsOn(buildSrcIdea)
// eclipse configuration
allprojects {
@@ -278,20 +307,14 @@ allprojects {
into '.settings'
}
// otherwise .settings is not nuked entirely
tasks.cleanEclipse {
task wipeEclipseSettings(type: Delete) {
delete '.settings'
}
tasks.cleanEclipse.dependsOn(wipeEclipseSettings)
// otherwise the eclipse merging is *super confusing*
tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)
}
// add buildSrc itself as a groovy project
task buildSrcEclipse(type: GradleBuild) {
buildFile = 'buildSrc/build.gradle'
tasks = ['cleanEclipse', 'eclipse']
}
tasks.eclipse.dependsOn(buildSrcEclipse)
// we need to add the same --debug-jvm option as
// the real RunTask has, so we can pass it through
class Run extends DefaultTask {

buildSrc/.gitignore (new file)
@@ -0,0 +1 @@
build-bootstrap/

@@ -1,5 +1,3 @@
import java.nio.file.Files
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
@@ -19,25 +17,31 @@ import java.nio.file.Files
* under the License.
*/
// we must use buildscript + apply so that an external plugin
// can apply this file, since the plugins directive is not
// supported through file includes
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.bmuschko:gradle-nexus-plugin:2.3.1'
}
}
import java.nio.file.Files
apply plugin: 'groovy'
apply plugin: 'com.bmuschko.nexus'
// TODO: move common IDE configuration to a common file to include
apply plugin: 'idea'
apply plugin: 'eclipse'
group = 'org.elasticsearch.gradle'
archivesBaseName = 'build-tools'
// TODO: remove this when upgrading to a version that supports ProgressLogger
// gradle 2.14 made internal apis unavailable to plugins, and gradle considered
// ProgressLogger to be an internal api. Until this is made available again,
// we can't upgrade without losing our nice progress logging
// NOTE that this check duplicates that in BuildPlugin, but we need to check
// early here before trying to compile the broken classes in buildSrc
if (GradleVersion.current() != GradleVersion.version('2.13')) {
throw new GradleException('Gradle 2.13 is required to build elasticsearch')
}
if (project == rootProject) {
// change the build dir used during build init, so that doing a clean
// won't wipe out the buildscript jar
buildDir = 'build-bootstrap'
}
/*****************************************************************************
* Propagating version.properties to the rest of the build *
*****************************************************************************/
Properties props = new Properties()
props.load(project.file('version.properties').newDataInputStream())
@@ -51,32 +55,6 @@ if (snapshot) {
props.put("elasticsearch", version);
}
repositories {
mavenCentral()
maven {
name 'sonatype-snapshots'
url "https://oss.sonatype.org/content/repositories/snapshots/"
}
jcenter()
}
dependencies {
compile gradleApi()
compile localGroovy()
compile "com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}"
compile("junit:junit:${props.getProperty('junit')}") {
transitive = false
}
compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.0'
compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
compile 'org.apache.rat:apache-rat:0.11'
}
File tempPropertiesFile = new File(project.buildDir, "version.properties")
task writeVersionProperties {
inputs.properties(props)
@@ -96,31 +74,84 @@ processResources {
from tempPropertiesFile
}
extraArchive {
javadoc = false
tests = false
/*****************************************************************************
* Dependencies used by the entire build *
*****************************************************************************/
repositories {
jcenter()
}
idea {
module {
inheritOutputDirs = false
outputDir = file('build-idea/classes/main')
testOutputDir = file('build-idea/classes/test')
dependencies {
compile gradleApi()
compile localGroovy()
compile "com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}"
compile("junit:junit:${props.getProperty('junit')}") {
transitive = false
}
compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'
compile 'com.netflix.nebula:nebula-publishing-plugin:4.4.4'
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.2'
compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
compile 'org.apache.rat:apache-rat:0.11'
compile 'ru.vyarus:gradle-animalsniffer-plugin:1.0.1'
}
/*****************************************************************************
* Bootstrap repositories *
*****************************************************************************/
// this will only happen when buildSrc is built on its own during build init
if (project == rootProject) {
repositories {
mavenCentral()
maven {
name 'sonatype-snapshots'
url "https://oss.sonatype.org/content/repositories/snapshots/"
}
}
test.exclude 'org/elasticsearch/test/NamingConventionsCheckBadClasses*'
}
/*****************************************************************************
* Normal project checks *
*****************************************************************************/
// this happens when included as a normal project in the build, which we do
// to enforce precommit checks like forbidden apis, as well as setup publishing
if (project != rootProject) {
apply plugin: 'elasticsearch.build'
apply plugin: 'nebula.maven-base-publish'
apply plugin: 'nebula.maven-scm'
// groovydoc succeeds, but has some weird internal exception...
groovydoc.enabled = false
// build-tools is not ready for primetime with these...
dependencyLicenses.enabled = false
forbiddenApisMain.enabled = false
forbiddenApisTest.enabled = false
jarHell.enabled = false
thirdPartyAudit.enabled = false
// test for elasticsearch.build tries to run with ES...
test.enabled = false
// TODO: re-enable once randomizedtesting gradle code is published and removed from here
licenseHeaders.enabled = false
forbiddenPatterns {
exclude '**/*.wav'
// the file that actually defines nocommit
exclude '**/ForbiddenPatternsTask.groovy'
}
namingConventions {
testClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$UnitTestCase'
integTestClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$IntegTestCase'
}
}
eclipse {
classpath {
defaultOutputDir = file('build-eclipse')
}
}
task copyEclipseSettings(type: Copy) {
from project.file('src/main/resources/eclipse.settings')
into '.settings'
}
// otherwise .settings is not nuked entirely
tasks.cleanEclipse {
delete '.settings'
}
tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)

@@ -28,12 +28,6 @@ import org.gradle.api.logging.LogLevel
import org.gradle.api.logging.Logger
import org.junit.runner.Description
import javax.sound.sampled.AudioSystem
import javax.sound.sampled.Clip
import javax.sound.sampled.Line
import javax.sound.sampled.LineEvent
import javax.sound.sampled.LineListener
import java.util.concurrent.atomic.AtomicBoolean
import java.util.concurrent.atomic.AtomicInteger
import static com.carrotsearch.ant.tasks.junit4.FormattingUtils.formatDescription
@@ -123,36 +117,9 @@ class TestReportLogger extends TestsSummaryEventListener implements AggregatedEv
formatTime(e.getCurrentTime()) + ", stalled for " +
formatDurationInSeconds(e.getNoEventDuration()) + " at: " +
(e.getDescription() == null ? "<unknown>" : formatDescription(e.getDescription())))
try {
playBeat();
} catch (Exception nosound) { /* handling exceptions with style */ }
slowTestsFound = true
}
void playBeat() throws Exception {
Clip clip = (Clip)AudioSystem.getLine(new Line.Info(Clip.class));
final AtomicBoolean stop = new AtomicBoolean();
clip.addLineListener(new LineListener() {
@Override
public void update(LineEvent event) {
if (event.getType() == LineEvent.Type.STOP) {
stop.set(true);
}
}
});
InputStream stream = getClass().getResourceAsStream("/beat.wav");
try {
clip.open(AudioSystem.getAudioInputStream(stream));
clip.start();
while (!stop.get()) {
Thread.sleep(20);
}
clip.close();
} finally {
stream.close();
}
}
@Subscribe
void onQuit(AggregatedQuitEvent e) throws IOException {
if (config.showNumFailuresAtEnd > 0 && !failedTests.isEmpty()) {

@@ -19,6 +19,7 @@
package org.elasticsearch.gradle
import nebula.plugin.extraconfigurations.ProvidedBasePlugin
import nebula.plugin.publishing.maven.MavenBasePublishPlugin
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.GradleException
import org.gradle.api.JavaVersion
@@ -33,6 +34,8 @@ import org.gradle.api.artifacts.ProjectDependency
import org.gradle.api.artifacts.ResolvedArtifact
import org.gradle.api.artifacts.dsl.RepositoryHandler
import org.gradle.api.artifacts.maven.MavenPom
import org.gradle.api.publish.maven.MavenPublication
import org.gradle.api.publish.maven.tasks.GenerateMavenPom
import org.gradle.api.tasks.bundling.Jar
import org.gradle.api.tasks.compile.JavaCompile
import org.gradle.internal.jvm.Jvm
@@ -54,7 +57,7 @@ class BuildPlugin implements Plugin<Project> {
project.pluginManager.apply('java')
project.pluginManager.apply('carrotsearch.randomized-testing')
// these plugins add lots of info to our jars
configureJarManifest(project) // jar config must be added before info broker
configureJars(project) // jar config must be added before info broker
project.pluginManager.apply('nebula.info-broker')
project.pluginManager.apply('nebula.info-basic')
project.pluginManager.apply('nebula.info-java')
@@ -68,6 +71,7 @@ class BuildPlugin implements Plugin<Project> {
configureConfigurations(project)
project.ext.versions = VersionProperties.versions
configureCompile(project)
configurePomGeneration(project)
configureTest(project)
configurePrecommit(project)
@@ -109,7 +113,7 @@ }
}
// enforce gradle version
GradleVersion minGradle = GradleVersion.version('2.8')
GradleVersion minGradle = GradleVersion.version('2.13')
if (GradleVersion.current() < minGradle) {
throw new GradleException("${minGradle} or above is required to build elasticsearch")
}
@@ -139,7 +143,7 @@ }
}
project.rootProject.ext.javaHome = javaHome
project.rootProject.ext.javaVersion = javaVersion
project.rootProject.ext.javaVersion = javaVersionEnum
project.rootProject.ext.buildChecksDone = true
}
project.targetCompatibility = minimumJava
@@ -228,7 +232,7 @@ */
*/
static void configureConfigurations(Project project) {
// we are not shipping these jars, we act like dumb consumers of these things
if (project.path.startsWith(':test:fixtures')) {
if (project.path.startsWith(':test:fixtures') || project.path == ':build-tools') {
return
}
// fail on any conflicting dependency versions
@@ -266,44 +270,7 @@ // add exclusions to the pom directly, for each of the transitive deps of this project's deps
// add exclusions to the pom directly, for each of the transitive deps of this project's deps
project.modifyPom { MavenPom pom ->
pom.withXml { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
if (depsNodes.isEmpty()) {
return
}
// check each dependency for any transitive deps
for (Node depNode : depsNodes.get(0).children()) {
String groupId = depNode.get('groupId').get(0).text()
String artifactId = depNode.get('artifactId').get(0).text()
String version = depNode.get('version').get(0).text()
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
if (configuration == null) {
continue // we did not make this dep non-transitive
}
Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts
if (artifacts.size() <= 1) {
// this dep has no transitive deps (or the only artifact is itself)
continue
}
// we now know we have something to exclude, so add the exclusion elements
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact transitiveArtifact : artifacts) {
ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id
if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {
continue; // don't exclude the dependency itself!
}
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', transitiveDep.group)
exclusion.appendNode('artifactId', transitiveDep.name)
}
}
}
pom.withXml(removeTransitiveDependencies(project))
}
}
@@ -332,6 +299,70 @@ }
}
}
/** Returns a closure which can be used with a MavenPom for removing transitive dependencies. */
private static Closure removeTransitiveDependencies(Project project) {
// TODO: remove this when enforcing gradle 2.13+, it now properly handles exclusions
return { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
if (depsNodes.isEmpty()) {
return
}
// check each dependency for any transitive deps
for (Node depNode : depsNodes.get(0).children()) {
String groupId = depNode.get('groupId').get(0).text()
String artifactId = depNode.get('artifactId').get(0).text()
String version = depNode.get('version').get(0).text()
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
if (configuration == null) {
continue // we did not make this dep non-transitive
}
Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts
if (artifacts.size() <= 1) {
// this dep has no transitive deps (or the only artifact is itself)
continue
}
// we now know we have something to exclude, so add the exclusion elements
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact transitiveArtifact : artifacts) {
ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id
if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {
continue; // don't exclude the dependency itself!
}
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', transitiveDep.group)
exclusion.appendNode('artifactId', transitiveDep.name)
}
}
}
}
/** Configures generation of maven poms. */
public static void configurePomGeneration(Project project) {
project.plugins.withType(MavenBasePublishPlugin.class).whenPluginAdded {
project.publishing {
publications {
all { MavenPublication publication -> // we only deal with maven
// add exclusions to the pom directly, for each of the transitive deps of this project's deps
publication.pom.withXml(removeTransitiveDependencies(project))
}
}
}
project.tasks.withType(GenerateMavenPom.class) { GenerateMavenPom t ->
// place the pom next to the jar it is for
t.destination = new File(project.buildDir, "distributions/${project.archivesBaseName}-${project.version}.pom")
// build poms with assemble
project.assemble.dependsOn(t)
}
}
}
/** Adds compiler settings to the project */
static void configureCompile(Project project) {
project.ext.compactProfile = 'compact3'
@@ -341,32 +372,40 @@ class BuildPlugin implements Plugin<Project> {
options.fork = true
options.forkOptions.executable = new File(project.javaHome, 'bin/javac')
options.forkOptions.memoryMaximumSize = "1g"
if (project.targetCompatibility >= JavaVersion.VERSION_1_8) {
// compile with compact 3 profile by default
// NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
if (project.compactProfile != 'full') {
options.compilerArgs << '-profile' << project.compactProfile
}
}
/*
* -path because gradle will send in paths that don't always exist.
* -missing because we have tons of missing @returns and @param.
* -serial because we don't use java serialization.
*/
// don't even think about passing args with -J-xxx, oracle will ask you to submit a bug report :)
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial' << '-Xdoclint:all' << '-Xdoclint:-missing'
// compile with compact 3 profile by default
// NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
if (project.compactProfile != 'full') {
options.compilerArgs << '-profile' << project.compactProfile
}
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial,-options,-deprecation' << '-Xdoclint:all' << '-Xdoclint:-missing'
options.encoding = 'UTF-8'
//options.incremental = true
// gradle ignores target/source compatibility when it is "unnecessary", but since to compile with
// java 9, gradle is running in java 8, it incorrectly thinks it is unnecessary
assert minimumJava == JavaVersion.VERSION_1_8
options.compilerArgs << '-target' << '1.8' << '-source' << '1.8'
if (project.javaVersion == JavaVersion.VERSION_1_9) {
// hack until gradle supports java 9's new "-release" arg
assert minimumJava == JavaVersion.VERSION_1_8
options.compilerArgs << '-release' << '8'
project.sourceCompatibility = null
project.targetCompatibility = null
}
}
}
}
/** Adds additional manifest info to jars */
static void configureJarManifest(Project project) {
/** Adds additional manifest info to jars, and adds source and javadoc jars */
static void configureJars(Project project) {
project.tasks.withType(Jar) { Jar jarTask ->
// we put all our distributable files under distributions
jarTask.destinationDir = new File(project.buildDir, 'distributions')
// fixup the jar manifest
jarTask.doFirst {
boolean isSnapshot = VersionProperties.elasticsearch.endsWith("-SNAPSHOT");
String version = VersionProperties.elasticsearch;
@@ -422,7 +461,7 @@ // default test sysprop values
// default test sysprop values
systemProperty 'tests.ifNoTests', 'fail'
// TODO: remove setting logging level via system property
systemProperty 'es.logger.level', 'WARN'
systemProperty 'tests.logger.level', 'WARN'
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.getKey().startsWith('tests.') ||
property.getKey().startsWith('es.')) {

@@ -26,14 +26,17 @@ import org.gradle.api.tasks.Exec
* A wrapper around gradle's Exec task to capture output and log on error.
*/
class LoggedExec extends Exec {
protected ByteArrayOutputStream output = new ByteArrayOutputStream()
LoggedExec() {
if (logger.isInfoEnabled() == false) {
standardOutput = new ByteArrayOutputStream()
errorOutput = standardOutput
standardOutput = output
errorOutput = output
ignoreExitValue = true
doLast {
if (execResult.exitValue != 0) {
standardOutput.toString('UTF-8').eachLine { line -> logger.error(line) }
output.toString('UTF-8').eachLine { line -> logger.error(line) }
throw new GradleException("Process '${executable} ${args.join(' ')}' finished with non-zero exit value ${execResult.exitValue}")
}
}

@@ -0,0 +1,65 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.elasticsearch.gradle.test.RestTestPlugin
import org.gradle.api.Project
import org.gradle.api.Task
/**
* Sets up tests for documentation.
*/
public class DocsTestPlugin extends RestTestPlugin {
@Override
public void apply(Project project) {
super.apply(project)
Task listSnippets = project.tasks.create('listSnippets', SnippetsTask)
listSnippets.group 'Docs'
listSnippets.description 'List each snippet'
listSnippets.perSnippet { println(it.toString()) }
Task listConsoleCandidates = project.tasks.create(
'listConsoleCandidates', SnippetsTask)
listConsoleCandidates.group 'Docs'
listConsoleCandidates.description 'List snippets that probably should be marked // CONSOLE'
listConsoleCandidates.perSnippet {
if (
it.console // Already marked, nothing to do
|| it.testResponse // It is a response
) {
return
}
List<String> languages = [
// These languages should almost always be marked console
'js', 'json',
// These are often curl commands that should be converted but
// are probably false positives
'sh', 'shell',
]
if (false == languages.contains(it.language)) {
return
}
println(it.toString())
}
project.tasks.create('buildRestTests', RestTestsFromSnippetsTask)
}
}

@@ -0,0 +1,240 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.elasticsearch.gradle.doc.SnippetsTask.Snippet
import org.gradle.api.InvalidUserDataException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.OutputDirectory
import java.nio.file.Files
import java.nio.file.Path
import java.util.regex.Matcher
/**
* Generates REST tests for each snippet marked // TEST.
*/
public class RestTestsFromSnippetsTask extends SnippetsTask {
@Input
Map<String, String> setups = new HashMap()
/**
* Root directory of the tests being generated. To make rest tests happy
* we generate them in a testRoot() which is contained in this directory.
*/
@OutputDirectory
File testRoot = project.file('build/rest')
public RestTestsFromSnippetsTask() {
project.afterEvaluate {
// Wait to set this so testRoot can be customized
project.sourceSets.test.output.dir(testRoot, builtBy: this)
}
TestBuilder builder = new TestBuilder()
doFirst { outputRoot().delete() }
perSnippet builder.&handleSnippet
doLast builder.&finishLastTest
}
/**
* Root directory containing all the files generated by this task. It is
* contained within testRoot.
*/
File outputRoot() {
return new File(testRoot, '/rest-api-spec/test')
}
private class TestBuilder {
private static final String SYNTAX = {
String method = /(?<method>GET|PUT|POST|HEAD|OPTIONS|DELETE)/
String pathAndQuery = /(?<pathAndQuery>[^\n]+)/
String badBody = /GET|PUT|POST|HEAD|OPTIONS|DELETE|#/
String body = /(?<body>(?:\n(?!$badBody)[^\n]+)+)/
String nonComment = /$method\s+$pathAndQuery$body?/
String comment = /(?<comment>#.+)/
/(?:$comment|$nonComment)\n+/
}()
/**
* The file in which we saw the last snippet that made a test.
*/
Path lastDocsPath
/**
* The file we're building.
*/
PrintWriter current
/**
* Called each time a snippet is encountered. Tracks the snippets and
* calls buildTest to actually build the test.
*/
void handleSnippet(Snippet snippet) {
if (snippet.language == 'json') {
throw new InvalidUserDataException(
"$snippet: Use `js` instead of `json`.")
}
if (snippet.testSetup) {
setup(snippet)
return
}
if (snippet.testResponse) {
response(snippet)
return
}
if (snippet.test || snippet.console) {
test(snippet)
return
}
// Must be an unmarked snippet....
}
private void test(Snippet test) {
setupCurrent(test)
if (false == test.continued) {
current.println('---')
current.println("\"$test.start\":")
}
if (test.skipTest) {
current.println(" - skip:")
current.println(" features: always_skip")
current.println(" reason: $test.skipTest")
}
if (test.setup != null) {
String setup = setups[test.setup]
if (setup == null) {
throw new InvalidUserDataException("Couldn't find setup "
+ "for $test")
}
current.println(setup)
}
body(test, false)
}
private void response(Snippet response) {
current.println(" - match: ")
current.println(" \$body: ")
response.contents.eachLine { current.println(" $it") }
}
void emitDo(String method, String pathAndQuery,
String body, String catchPart, boolean inSetup) {
def (String path, String query) = pathAndQuery.tokenize('?')
current.println(" - do:")
if (catchPart != null) {
current.println(" catch: $catchPart")
}
current.println(" raw:")
current.println(" method: $method")
current.println(" path: \"$path\"")
if (query != null) {
for (String param: query.tokenize('&')) {
def (String name, String value) = param.tokenize('=')
if (value == null) {
value = ''
}
current.println(" $name: \"$value\"")
}
}
if (body != null) {
// Throw out the leading newline we get from parsing the body
body = body.substring(1)
current.println(" body: |")
body.eachLine { current.println(" $it") }
}
/* Catch any shard failures. These only cause a non-200 response if
* no shard succeeds. But we need to fail the tests on all of these
* because they mean invalid syntax or broken queries or something
* else that we don't want to teach people to do. The REST test
 * framework doesn't allow us to have assertions in the setup
* section so we have to skip it there. We also have to skip _cat
* actions because they don't return json so we can't is_false
* them. That is ok because they don't have this
* partial-success-is-success thing.
*/
if (false == inSetup && false == path.startsWith('_cat')) {
current.println(" - is_false: _shards.failures")
}
}
private void setup(Snippet setup) {
if (lastDocsPath == setup.path) {
throw new InvalidUserDataException("$setup: wasn't first")
}
setupCurrent(setup)
current.println('---')
current.println("setup:")
body(setup, true)
// always wait for yellow before anything is executed
current.println(
" - do:\n" +
" raw:\n" +
" method: GET\n" +
" path: \"_cluster/health\"\n" +
" wait_for_status: \"yellow\"")
}
private void body(Snippet snippet, boolean inSetup) {
parse("$snippet", snippet.contents, SYNTAX) { matcher, last ->
if (matcher.group("comment") != null) {
// Comment
return
}
String method = matcher.group("method")
String pathAndQuery = matcher.group("pathAndQuery")
String body = matcher.group("body")
String catchPart = last ? snippet.catchPart : null
if (pathAndQuery.startsWith('/')) {
// Leading '/'s break the generated paths
pathAndQuery = pathAndQuery.substring(1)
}
emitDo(method, pathAndQuery, body, catchPart, inSetup)
}
}
private PrintWriter setupCurrent(Snippet test) {
if (lastDocsPath == test.path) {
return
}
finishLastTest()
lastDocsPath = test.path
// Make the destination file:
// Shift the path into the destination directory tree
Path dest = outputRoot().toPath().resolve(test.path)
// Replace the extension
String fileName = dest.getName(dest.nameCount - 1)
dest = dest.parent.resolve(fileName.replace('.asciidoc', '.yaml'))
// Now setup the writer
Files.createDirectories(dest.parent)
current = dest.newPrintWriter('UTF-8')
}
void finishLastTest() {
if (current != null) {
current.close()
current = null
}
}
}
}
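
As a rough sketch of what this task emits: a hypothetical `// CONSOLE` snippet containing `GET /twitter/_search?q=user:kimchy` starting on line 12 of a docs file would be rendered by `emitDo` into YAML along these lines (indentation approximate; note the stripped leading slash and the trailing shard-failure assertion):

---
"12":
  - do:
      raw:
        method: GET
        path: "twitter/_search"
        q: "user:kimchy"
  - is_false: _shards.failures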

View File

@ -0,0 +1,308 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.gradle.api.DefaultTask
import org.gradle.api.InvalidUserDataException
import org.gradle.api.file.ConfigurableFileTree
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.TaskAction
import java.nio.file.Path
import java.util.regex.Matcher
/**
* A task which will run a closure on each snippet in the documentation.
*/
public class SnippetsTask extends DefaultTask {
private static final String SCHAR = /(?:\\\/|[^\/])/
private static final String SUBSTITUTION = /s\/($SCHAR+)\/($SCHAR*)\//
private static final String CATCH = /catch:\s*((?:\/[^\/]+\/)|[^ \]]+)/
private static final String SKIP = /skip:([^\]]+)/
private static final String SETUP = /setup:([^ \]]+)/
private static final String TEST_SYNTAX =
/(?:$CATCH|$SUBSTITUTION|$SKIP|(continued)|$SETUP) ?/
/**
* Action to take on each snippet. Called with a single parameter, an
* instance of Snippet.
*/
Closure perSnippet
/**
 * The docs to scan. Defaults to every file in the directory except the
* build.gradle file because that is appropriate for Elasticsearch's docs
* directory.
*/
@InputFiles
ConfigurableFileTree docs = project.fileTree(project.projectDir) {
// No snippets in the build file
exclude 'build.gradle'
// That is where the snippets go, not where they come from!
exclude 'build'
}
@TaskAction
public void executeTask() {
/*
* Walks each line of each file, building snippets as it encounters
* the lines that make up the snippet.
*/
for (File file: docs) {
String lastLanguage
int lastLanguageLine
Snippet snippet = null
StringBuilder contents = null
List substitutions = null
Closure emit = {
snippet.contents = contents.toString()
contents = null
if (substitutions != null) {
substitutions.each { String pattern, String subst ->
/*
* $body is really common but it looks like a
* backreference so we just escape it here to make the
* tests cleaner.
*/
subst = subst.replace('$body', '\\$body')
// \n is a new line....
subst = subst.replace('\\n', '\n')
snippet.contents = snippet.contents.replaceAll(
pattern, subst)
}
substitutions = null
}
perSnippet(snippet)
snippet = null
}
file.eachLine('UTF-8') { String line, int lineNumber ->
Matcher matcher
if (line ==~ /-{4,}\s*/) { // Four dashes look like a snippet
if (snippet == null) {
Path path = docs.dir.toPath().relativize(file.toPath())
snippet = new Snippet(path: path, start: lineNumber)
if (lastLanguageLine == lineNumber - 1) {
snippet.language = lastLanguage
}
} else {
snippet.end = lineNumber
}
return
}
matcher = line =~ /\[source,(\w+)]\s*/
if (matcher.matches()) {
lastLanguage = matcher.group(1)
lastLanguageLine = lineNumber
return
}
if (line ==~ /\/\/\s*AUTOSENSE\s*/) {
throw new InvalidUserDataException("AUTOSENSE has been " +
"replaced by CONSOLE. Use that instead at " +
"$file:$lineNumber")
}
if (line ==~ /\/\/\s*CONSOLE\s*/) {
if (snippet == null) {
throw new InvalidUserDataException("CONSOLE not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.console = true
return
}
matcher = line =~ /\/\/\s*TEST(\[(.+)\])?\s*/
if (matcher.matches()) {
if (snippet == null) {
throw new InvalidUserDataException("TEST not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.test = true
if (matcher.group(2) != null) {
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), TEST_SYNTAX) {
if (it.group(1) != null) {
snippet.catchPart = it.group(1)
return
}
if (it.group(2) != null) {
if (substitutions == null) {
substitutions = []
}
substitutions.add([it.group(2), it.group(3)])
return
}
if (it.group(4) != null) {
snippet.skipTest = it.group(4)
return
}
if (it.group(5) != null) {
snippet.continued = true
return
}
if (it.group(6) != null) {
snippet.setup = it.group(6)
return
}
throw new InvalidUserDataException(
"Invalid test marker: $line")
}
}
return
}
matcher = line =~ /\/\/\s*TESTRESPONSE(\[(.+)\])?\s*/
if (matcher.matches()) {
if (snippet == null) {
throw new InvalidUserDataException("TESTRESPONSE not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.testResponse = true
if (matcher.group(2) != null) {
if (substitutions == null) {
substitutions = []
}
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), /$SUBSTITUTION ?/) {
substitutions.add([it.group(1), it.group(2)])
}
}
return
}
if (line ==~ /\/\/\s*TESTSETUP\s*/) {
snippet.testSetup = true
return
}
if (snippet == null) {
// Outside
return
}
if (snippet.end == Snippet.NOT_FINISHED) {
// Inside
if (contents == null) {
contents = new StringBuilder()
}
// We don't need the annotations
line = line.replaceAll(/<\d+>/, '')
// Nor any trailing spaces
line = line.replaceAll(/\s+$/, '')
contents.append(line).append('\n')
return
}
// Just finished
emit()
}
if (snippet != null) emit()
}
}
static class Snippet {
static final int NOT_FINISHED = -1
/**
* Path to the file containing this snippet. Relative to docs.dir of the
* SnippetsTask that created it.
*/
Path path
int start
int end = NOT_FINISHED
String contents
boolean console = false
boolean test = false
boolean testResponse = false
boolean testSetup = false
String skipTest = null
boolean continued = false
String language = null
String catchPart = null
String setup = null
@Override
public String toString() {
String result = "$path[$start:$end]"
if (language != null) {
result += "($language)"
}
if (console) {
result += '// CONSOLE'
}
if (test) {
result += '// TEST'
if (catchPart) {
result += "[catch: $catchPart]"
}
if (skipTest) {
result += "[skip=$skipTest]"
}
if (continued) {
result += '[continued]'
}
if (setup) {
result += "[setup:$setup]"
}
}
if (testResponse) {
result += '// TESTRESPONSE'
}
if (testSetup) {
result += '// TESTSETUP'
}
return result
}
}
/**
* Repeatedly match the pattern to the string, calling the closure with the
* matchers each time there is a match. If there are characters that don't
* match then blow up. If the closure takes two parameters then the second
* one is "is this the last match?".
*/
protected parse(String location, String s, String pattern, Closure c) {
if (s == null) {
return // Silly null, only real stuff gets to match!
}
Matcher m = s =~ pattern
int offset = 0
Closure extraContent = { message ->
StringBuilder cutOut = new StringBuilder()
cutOut.append(s[offset - 6..offset - 1])
cutOut.append('*')
cutOut.append(s[offset..Math.min(offset + 5, s.length() - 1)])
String cutOutNoNl = cutOut.toString().replace('\n', '\\n')
throw new InvalidUserDataException("$location: Extra content "
+ "$message ('$cutOutNoNl') matching [$pattern]: $s")
}
while (m.find()) {
if (m.start() != offset) {
extraContent("between [$offset] and [${m.start()}]")
}
offset = m.end()
if (c.maximumNumberOfParameters == 1) {
c(m)
} else {
c(m, offset == s.length())
}
}
if (offset == 0) {
throw new InvalidUserDataException("$location: Didn't match "
+ "$pattern: $s")
}
if (offset != s.length()) {
extraContent("after [$offset]")
}
}
}
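
For reference, a hypothetical docs snippet exercising the markers parsed above (the skip reason is invented); `listSnippets` would print it as something like `docs/index.asciidoc[10:13](js)// CONSOLE// TEST[skip=not representative]`:

[source,js]
----
GET /_cluster/health
----
// CONSOLE
// TEST[skip:not representative]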

View File

@ -18,14 +18,14 @@
*/
package org.elasticsearch.gradle.plugin
import nebula.plugin.publishing.maven.MavenBasePublishPlugin
import nebula.plugin.publishing.maven.MavenScmPlugin
import org.elasticsearch.gradle.BuildPlugin
import org.elasticsearch.gradle.test.RestIntegTestTask
import org.elasticsearch.gradle.test.RunTask
import org.gradle.api.Project
import org.gradle.api.artifacts.Dependency
import org.gradle.api.tasks.SourceSet
import org.gradle.api.tasks.bundling.Zip
/**
* Encapsulates build configuration for an Elasticsearch plugin.
*/
@ -50,10 +50,11 @@ public class PluginBuildPlugin extends BuildPlugin {
} else {
project.integTest.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)
project.tasks.run.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)
addPomGeneration(project)
}
project.namingConventions {
// Plugins decalare extensions of ESIntegTestCase as "Tests" instead of IT.
// Plugins declare integration tests as "Tests" instead of IT.
skipIntegTestInDisguise = true
}
}
@ -125,4 +126,32 @@ public class PluginBuildPlugin extends BuildPlugin {
project.configurations.getByName('default').extendsFrom = []
project.artifacts.add('default', bundle)
}
/**
* Adds the plugin jar and zip as publications.
*/
protected static void addPomGeneration(Project project) {
project.plugins.apply(MavenBasePublishPlugin.class)
project.plugins.apply(MavenScmPlugin.class)
project.publishing {
publications {
nebula {
artifact project.bundlePlugin
pom.withXml {
// overwrite the name/description in the pom nebula set up
Node root = asNode()
for (Node node : root.children()) {
if (node.name() == 'name') {
node.setValue(project.pluginProperties.extension.name)
} else if (node.name() == 'description') {
node.setValue(project.pluginProperties.extension.description)
}
}
}
}
}
}
}
}
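
The name and description written into the pom come from the plugin properties extension that this code reads via `project.pluginProperties.extension`; a minimal, hypothetical plugin build script feeding it would look like:

esplugin {
    name 'example-plugin'                    // becomes <name> in the pom
    description 'A demo plugin description.' // becomes <description>
    classname 'org.example.ExamplePlugin'
}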

View File

@ -21,11 +21,11 @@ package org.elasticsearch.gradle.precommit
import org.elasticsearch.gradle.LoggedExec
import org.elasticsearch.gradle.VersionProperties
import org.gradle.api.artifacts.Dependency
import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.OutputFile
/**
* Runs NamingConventionsCheck on a classpath/directory combo to verify that
* tests are named according to our conventions so they'll be picked up by
@ -57,8 +57,27 @@ public class NamingConventionsTask extends LoggedExec {
@Input
boolean skipIntegTestInDisguise = false
/**
* Superclass for all tests.
*/
@Input
String testClass = 'org.apache.lucene.util.LuceneTestCase'
/**
* Superclass for all integration tests.
*/
@Input
String integTestClass = 'org.elasticsearch.test.ESIntegTestCase'
public NamingConventionsTask() {
dependsOn(classpath)
// Extra classpath contains the actual test
project.configurations.create('namingConventions')
Dependency buildToolsDep = project.dependencies.add('namingConventions',
"org.elasticsearch.gradle:build-tools:${VersionProperties.elasticsearch}")
buildToolsDep.transitive = false // We don't need gradle in the classpath. It conflicts.
FileCollection extraClasspath = project.configurations.namingConventions
dependsOn(extraClasspath)
description = "Runs NamingConventionsCheck on ${classpath}"
executable = new File(project.javaHome, 'bin/java')
onlyIf { project.sourceSets.test.output.classesDir.exists() }
@ -69,9 +88,12 @@ public class NamingConventionsTask extends LoggedExec {
project.afterEvaluate {
doFirst {
args('-Djna.nosys=true')
args('-cp', classpath.asPath, 'org.elasticsearch.test.NamingConventionsCheck')
args('-cp', (classpath + extraClasspath).asPath, 'org.elasticsearch.test.NamingConventionsCheck')
args('--test-class', testClass)
if (skipIntegTestInDisguise) {
args('--skip-integ-tests-in-disguise')
} else {
args('--integ-test-class', integTestClass)
}
/*
* The test framework has classes that fail the checks to validate that the checks fail properly.
@ -79,7 +101,7 @@ public class NamingConventionsTask extends LoggedExec {
* process of ignoring them lets us validate that they were found so this ignore parameter acts
* as the test for the NamingConventionsCheck.
*/
if (':test:framework'.equals(project.path)) {
if (':build-tools'.equals(project.path)) {
args('--self-test')
}
args('--', project.sourceSets.test.output.classesDir.absolutePath)
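
A consuming build can override the new knobs like so (class names are placeholders, not defaults from this commit):

namingConventions {
    testClass = 'org.example.test.MyTestCase'            // hypothetical base test class
    integTestClass = 'org.example.test.MyIntegTestCase'  // hypothetical base integ test class
    skipIntegTestInDisguise = true
}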

View File

@ -34,7 +34,6 @@ class PrecommitTasks {
configureForbiddenApis(project),
configureCheckstyle(project),
configureNamingConventions(project),
configureLoggerUsage(project),
project.tasks.create('forbiddenPatterns', ForbiddenPatternsTask.class),
project.tasks.create('licenseHeaders', LicenseHeadersTask.class),
project.tasks.create('jarHell', JarHellTask.class),
@ -49,6 +48,20 @@ class PrecommitTasks {
UpdateShasTask updateShas = project.tasks.create('updateShas', UpdateShasTask.class)
updateShas.parentTask = dependencyLicenses
}
if (project.path != ':build-tools') {
/*
* Sadly, build-tools can't have logger-usage-check because that
* would create a circular project dependency between build-tools
* (which provides NamingConventionsCheck) and :test:logger-usage
* which provides the logger usage check. Since the build tools
* don't use the logger usage check because they don't have any
* of Elasticsearch's loggers and :test:logger-usage actually does
* use the NamingConventionsCheck we break the circular dependency
* here.
*/
precommitTasks.add(configureLoggerUsage(project))
}
Map<String, Object> precommitOptions = [
name: 'precommit',
@ -62,9 +75,8 @@ class PrecommitTasks {
private static Task configureForbiddenApis(Project project) {
project.pluginManager.apply(ForbiddenApisPlugin.class)
project.forbiddenApis {
internalRuntimeForbidden = true
failOnUnsupportedJava = false
bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-system-out']
bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-non-portable', 'jdk-system-out']
signaturesURLs = [getClass().getResource('/forbidden/jdk-signatures.txt'),
getClass().getResource('/forbidden/es-all-signatures.txt')]
suppressAnnotations = ['**.SuppressForbidden']

View File

@ -203,8 +203,7 @@ public class ThirdPartyAuditTask extends AntTask {
Set<String> sheistySet = getSheistyClasses(tmpDir.toPath());
try {
ant.thirdPartyAudit(internalRuntimeForbidden: false,
failOnUnsupportedJava: false,
ant.thirdPartyAudit(failOnUnsupportedJava: false,
failOnMissingClasses: false,
signaturesFile: new File(getClass().getResource('/forbidden/third-party-audit.txt').toURI()),
classpath: classpath.asPath) {

View File

@ -291,9 +291,10 @@ class ClusterFormationTasks {
File configDir = new File(node.homeDir, 'config')
copyConfig.into(configDir) // copy must always have a general dest dir, even though we don't use it
for (Map.Entry<String,Object> extraConfigFile : node.config.extraConfigFiles.entrySet()) {
Object extraConfigFileValue = extraConfigFile.getValue()
copyConfig.doFirst {
// make sure the copy won't be a no-op or act on a directory
File srcConfigFile = project.file(extraConfigFile.getValue())
File srcConfigFile = project.file(extraConfigFileValue)
if (srcConfigFile.isDirectory()) {
throw new GradleException("Source for extraConfigFile must be a file: ${srcConfigFile}")
}
@ -303,7 +304,7 @@ class ClusterFormationTasks {
}
File destConfigFile = new File(node.homeDir, 'config/' + extraConfigFile.getKey())
// wrap source file in closure to delay resolution to execution time
copyConfig.from({ extraConfigFile.getValue() }) {
copyConfig.from({ extraConfigFileValue }) {
// this must be in a closure so it is only applied to the single file specified in from above
into(configDir.toPath().relativize(destConfigFile.canonicalFile.parentFile.toPath()).toFile())
rename { destConfigFile.name }
@ -418,8 +419,7 @@ class ClusterFormationTasks {
// argument are wrapped in an ExecArgWrapper that escapes commas
args execArgs.collect { a -> new EscapeCommaWrapper(arg: a) }
} else {
executable 'sh'
args execArgs
commandLine execArgs
}
}
}
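
Capturing `extraConfigFile.getValue()` before the `doFirst`/`from` closures run avoids resolving a reused `Map.Entry` after the loop has moved on. Usage is unchanged; a sketch with illustrative paths:

integTest {
    cluster {
        // destination is relative to the node's config dir; source must be a file
        extraConfigFile 'hunspell/en_US/en_US.dic', project.file('src/test/resources/en_US.dic')
    }
}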

View File

@ -129,18 +129,18 @@ class NodeInfo {
}
env = [ 'JAVA_HOME' : project.javaHome ]
args.addAll("-E", "es.node.portsfile=true")
args.addAll("-E", "node.portsfile=true")
String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs
env.put('ES_JAVA_OPTS', esJavaOpts)
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.getKey().startsWith('es.')) {
if (property.key.startsWith('tests.es.')) {
args.add("-E")
args.add("${property.getKey()}=${property.getValue()}")
args.add("${property.key.substring('tests.es.'.size())}=${property.value}")
}
}
env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options'))
args.addAll("-E", "es.path.conf=${confDir}")
args.addAll("-E", "path.conf=${confDir}")
if (Os.isFamily(Os.FAMILY_WINDOWS)) {
args.add('"') // end the entire command, quoted
}

View File

@ -62,6 +62,7 @@ public class RestIntegTestTask extends RandomizedTestingTask {
project.gradle.projectsEvaluated {
NodeInfo node = ClusterFormationTasks.setup(project, this, clusterConfig)
systemProperty('tests.rest.cluster', "${-> node.httpUri()}")
systemProperty('tests.config.dir', "${-> node.confDir}")
// TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin
// that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass
// both as separate sysprops

View File

@ -19,6 +19,7 @@
package org.elasticsearch.gradle.vagrant
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction
import org.gradle.logging.ProgressLoggerFactory
import org.gradle.process.internal.ExecAction
@ -30,41 +31,22 @@ import javax.inject.Inject
* Runs bats over vagrant. Pretty much like running it using Exec but with a
* nicer output formatter.
*/
class BatsOverVagrantTask extends DefaultTask {
String command
String boxName
ExecAction execAction
public class BatsOverVagrantTask extends VagrantCommandTask {
BatsOverVagrantTask() {
execAction = getExecActionFactory().newExecAction()
}
@Input
String command
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
BatsOverVagrantTask() {
project.afterEvaluate {
args 'ssh', boxName, '--command', command
}
}
@Inject
ExecActionFactory getExecActionFactory() {
throw new UnsupportedOperationException();
}
void boxName(String boxName) {
this.boxName = boxName
}
void command(String command) {
this.command = command
}
@TaskAction
void exec() {
// It'd be nice if --machine-readable were, well, nice
execAction.commandLine(['vagrant', 'ssh', boxName, '--command', command])
execAction.setStandardOutput(new TapLoggerOutputStream(
command: command,
factory: getProgressLoggerFactory(),
logger: logger))
execAction.execute();
}
@Override
protected OutputStream createLoggerOutputStream() {
return new TapLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
logger: logger)
}
}
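
A sketch of declaring the task after this refactor; only `boxName` and `command` need to be set, since the constructor wires up the `vagrant ssh` arguments itself (box name and bats path are placeholders):

task batsTest(type: BatsOverVagrantTask) {
    boxName = 'ubuntu-1404'
    command = 'sudo bats /elasticsearch/qa/vagrant/tests/*.bats'
}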

View File

@ -19,9 +19,11 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
import groovy.transform.PackageScope
import org.gradle.api.GradleScriptException
import org.gradle.api.logging.Logger
import org.gradle.logging.ProgressLogger
import org.gradle.logging.ProgressLoggerFactory
import java.util.regex.Matcher
@ -35,73 +37,77 @@ import java.util.regex.Matcher
* There is a Tap4j project but we can't use it because it wants to parse the
* entire TAP stream at once and won't parse it stream-wise.
*/
class TapLoggerOutputStream extends LoggingOutputStream {
ProgressLogger progressLogger
Logger logger
int testsCompleted = 0
int testsFailed = 0
int testsSkipped = 0
Integer testCount
String countsFormat
public class TapLoggerOutputStream extends LoggingOutputStream {
private final ProgressLogger progressLogger
private boolean isStarted = false
private final Logger logger
private int testsCompleted = 0
private int testsFailed = 0
private int testsSkipped = 0
private Integer testCount
private String countsFormat
TapLoggerOutputStream(Map args) {
logger = args.logger
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("TAP output for $args.command")
progressLogger.started()
progressLogger.progress("Starting $args.command...")
}
void flush() {
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// System.out.print "===> $line\n"
if (testCount == null) {
try {
testCount = line.split('\\.').last().toInteger()
def length = (testCount as String).length()
countsFormat = "%0${length}d"
countsFormat = "[$countsFormat|$countsFormat|$countsFormat/$countsFormat]"
return
} catch (Exception e) {
throw new GradleScriptException(
'Error parsing first line of TAP stream!!', e)
}
}
Matcher m = line =~ /(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
if (!m.matches()) {
/* These might be failure report lines or comments or whatever. It's hard
to tell and it doesn't matter. */
logger.warn(line)
return
}
boolean skipped = m.group('skip') != null
boolean success = !skipped && m.group('status') == 'ok'
String skipReason = m.group('skipReason')
String suiteName = m.group('suite')
String testName = m.group('test')
String status
if (skipped) {
status = "SKIPPED"
testsSkipped++
} else if (success) {
status = " OK"
testsCompleted++
} else {
status = " FAILED"
testsFailed++
TapLoggerOutputStream(Map args) {
logger = args.logger
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("TAP output for `${args.command}`")
}
String counts = sprintf(countsFormat,
[testsCompleted, testsFailed, testsSkipped, testCount])
progressLogger.progress("Tests $counts, $status [$suiteName] $testName")
if (!success) {
logger.warn(line)
@Override
public void flush() {
if (isStarted == false) {
progressLogger.started()
isStarted = true
}
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// System.out.print "===> $line\n"
if (testCount == null) {
try {
testCount = line.split('\\.').last().toInteger()
def length = (testCount as String).length()
countsFormat = "%0${length}d"
countsFormat = "[$countsFormat|$countsFormat|$countsFormat/$countsFormat]"
return
} catch (Exception e) {
throw new GradleScriptException(
'Error parsing first line of TAP stream!!', e)
}
}
Matcher m = line =~ /(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
if (!m.matches()) {
/* These might be failure report lines or comments or whatever. It's hard
to tell and it doesn't matter. */
logger.warn(line)
return
}
boolean skipped = m.group('skip') != null
boolean success = !skipped && m.group('status') == 'ok'
String skipReason = m.group('skipReason')
String suiteName = m.group('suite')
String testName = m.group('test')
String status
if (skipped) {
status = "SKIPPED"
testsSkipped++
} else if (success) {
status = " OK"
testsCompleted++
} else {
status = " FAILED"
testsFailed++
}
String counts = sprintf(countsFormat,
[testsCompleted, testsFailed, testsSkipped, testCount])
progressLogger.progress("Tests $counts, $status [$suiteName] $testName")
if (!success) {
logger.warn(line)
}
}
}
}
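
For orientation, TAP input this parser understands looks like the following (suite and test names invented): the first line supplies `testCount` from its trailing number, and later lines must match the `ok|not ok` pattern above or they are logged as warnings.

1..3
ok 1 [packaging] install via dpkg works
not ok 2 [packaging] removing package removes config
ok 3 # skip (rpm only) [packaging] rpm scriptlets run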

View File

@ -18,11 +18,10 @@
*/
package org.elasticsearch.gradle.vagrant
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction
import org.apache.commons.io.output.TeeOutputStream
import org.elasticsearch.gradle.LoggedExec
import org.gradle.api.tasks.Input
import org.gradle.logging.ProgressLoggerFactory
import org.gradle.process.internal.ExecAction
import org.gradle.process.internal.ExecActionFactory
import javax.inject.Inject
@ -30,43 +29,30 @@ import javax.inject.Inject
* Runs a vagrant command. Pretty much like Exec task but with a nicer output
* formatter and defaults to `vagrant` as first part of commandLine.
*/
class VagrantCommandTask extends DefaultTask {
List<Object> commandLine
String boxName
ExecAction execAction
public class VagrantCommandTask extends LoggedExec {
VagrantCommandTask() {
execAction = getExecActionFactory().newExecAction()
}
@Input
String boxName
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
public VagrantCommandTask() {
executable = 'vagrant'
project.afterEvaluate {
// It'd be nice if --machine-readable were, well, nice
standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
}
}
@Inject
ExecActionFactory getExecActionFactory() {
throw new UnsupportedOperationException();
}
protected OutputStream createLoggerOutputStream() {
return new VagrantLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
/* Vagrant tends to output a lot of stuff, but most of the important
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: ")
}
void boxName(String boxName) {
this.boxName = boxName
}
void commandLine(Object... commandLine) {
this.commandLine = commandLine
}
@TaskAction
void exec() {
// It'd be nice if --machine-readable were, well, nice
execAction.commandLine(['vagrant'] + commandLine)
execAction.setStandardOutput(new VagrantLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
/* Vagrant tends to output a lot of stuff, but most of the important
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: "))
execAction.execute();
}
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
}
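
After this change the task is a `LoggedExec` with `vagrant` preset as the executable, so a usage sketch only supplies the box and subcommand (box name is a placeholder):

task vagrantUp(type: VagrantCommandTask) {
    boxName = 'ubuntu-1404'
    args 'up', 'ubuntu-1404', '--provider', 'virtualbox'
}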

View File

@ -19,7 +19,9 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
import org.gradle.api.logging.Logger
import org.gradle.logging.ProgressLogger
import org.gradle.logging.ProgressLoggerFactory
/**
* Adapts an OutputStream being written to by vagrant into a ProgressLogger. It
@ -42,79 +44,60 @@ import org.gradle.logging.ProgressLogger
* to catch so it can render the output like
* "Heading text > stdout from the provisioner".
*/
class VagrantLoggerOutputStream extends LoggingOutputStream {
static final String HEADING_PREFIX = '==> '
public class VagrantLoggerOutputStream extends LoggingOutputStream {
private static final String HEADING_PREFIX = '==> '
ProgressLogger progressLogger
String squashedPrefix
String lastLine = ''
boolean inProgressReport = false
String heading = ''
private final ProgressLogger progressLogger
private boolean isStarted = false
private String squashedPrefix
private String lastLine = ''
private boolean inProgressReport = false
private String heading = ''
VagrantLoggerOutputStream(Map args) {
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("Vagrant $args.command")
progressLogger.started()
progressLogger.progress("Starting vagrant $args.command...")
squashedPrefix = args.squashedPrefix
}
void flush() {
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// debugPrintLine(line) // Uncomment me to log every incoming line
if (line.startsWith('\r\u001b')) {
/* We don't want to try to be a full terminal emulator but we want to
keep the escape sequences from leaking and catch _some_ of the
meaning. */
line = line.substring(2)
if ('[K' == line) {
inProgressReport = true
}
return
VagrantLoggerOutputStream(Map args) {
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("Vagrant output for `$args.command`")
squashedPrefix = args.squashedPrefix
}
if (line.startsWith(squashedPrefix)) {
line = line.substring(squashedPrefix.length())
inProgressReport = false
lastLine = line
if (line.startsWith(HEADING_PREFIX)) {
line = line.substring(HEADING_PREFIX.length())
heading = line + ' > '
} else {
line = heading + line
}
} else if (inProgressReport) {
inProgressReport = false
line = lastLine + line
} else {
return
}
// debugLogLine(line) // Uncomment me to log every line we add to the logger
progressLogger.progress(line)
}
void debugPrintLine(line) {
System.out.print '----------> '
for (int i = start; i < end; i++) {
switch (buffer[i] as char) {
case ' '..'~':
System.out.print buffer[i] as char
break
default:
System.out.print '%'
System.out.print Integer.toHexString(buffer[i])
}
@Override
public void flush() {
if (isStarted == false) {
progressLogger.started()
isStarted = true
}
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
System.out.print '\n'
}
void debugLogLine(line) {
System.out.print '>>>>>>>>>>> '
System.out.print line
System.out.print '\n'
}
void line(String line) {
if (line.startsWith('\r\u001b')) {
/* We don't want to try to be a full terminal emulator but we want to
keep the escape sequences from leaking and catch _some_ of the
meaning. */
line = line.substring(2)
if ('[K' == line) {
inProgressReport = true
}
return
}
if (line.startsWith(squashedPrefix)) {
line = line.substring(squashedPrefix.length())
inProgressReport = false
lastLine = line
if (line.startsWith(HEADING_PREFIX)) {
line = line.substring(HEADING_PREFIX.length())
heading = line + ' > '
} else {
line = heading + line
}
} else if (inProgressReport) {
inProgressReport = false
line = lastLine + line
} else {
return
}
progressLogger.progress(line)
}
}
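
As an illustration of the squashing (box name and messages invented): raw vagrant output like

==> ubuntu-1404: ==> Installing Elasticsearch...
==> ubuntu-1404: fetching package

has `squashedPrefix` stripped, promotes the inner `==> ` line to the heading, and reports progress as `Installing Elasticsearch... > fetching package`.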

View File

@ -25,14 +25,11 @@ import java.nio.file.FileVisitResult;
import java.nio.file.FileVisitor;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.common.SuppressForbidden;
import org.elasticsearch.common.io.PathUtils;
/**
* Checks that all tests in a directory are named according to our naming conventions. This is important because tests that do not follow
* our conventions aren't run by gradle. This was once a glorious unit test but now that Elasticsearch is a multi-module project it must be
@ -46,30 +43,37 @@ import org.elasticsearch.common.io.PathUtils;
* {@code --self-test} that is only run in the test:framework project.
*/
public class NamingConventionsCheck {
public static void main(String[] args) throws IOException, ClassNotFoundException {
NamingConventionsCheck check = new NamingConventionsCheck();
public static void main(String[] args) throws IOException {
Class<?> testClass = null;
Class<?> integTestClass = null;
Path rootPath = null;
boolean skipIntegTestsInDisguise = false;
boolean selfTest = false;
int i = 0;
while (true) {
switch (args[i]) {
case "--skip-integ-tests-in-disguise":
skipIntegTestsInDisguise = true;
i++;
continue;
case "--self-test":
selfTest = true;
i++;
continue;
case "--":
i++;
break;
default:
fail("Expected -- before a path.");
for (int i = 0; i < args.length; i++) {
String arg = args[i];
switch (arg) {
case "--test-class":
testClass = loadClassWithoutInitializing(args[++i]);
break;
case "--integ-test-class":
integTestClass = loadClassWithoutInitializing(args[++i]);
break;
case "--skip-integ-tests-in-disguise":
skipIntegTestsInDisguise = true;
break;
case "--self-test":
selfTest = true;
break;
case "--":
rootPath = Paths.get(args[++i]);
break;
default:
fail("unsupported argument '" + arg + "'");
}
break;
}
check.check(PathUtils.get(args[i]));
NamingConventionsCheck check = new NamingConventionsCheck(testClass, integTestClass);
check.check(rootPath, skipIntegTestsInDisguise);
if (selfTest) {
assertViolation("WrongName", check.missingSuffix);
@ -82,17 +86,15 @@ public class NamingConventionsCheck {
}
// Now we should have no violations
assertNoViolations("Not all subclasses of " + ESTestCase.class.getSimpleName()
assertNoViolations("Not all subclasses of " + check.testClass.getSimpleName()
+ " match the naming convention. Concrete classes must end with [Tests]", check.missingSuffix);
assertNoViolations("Classes ending with [Tests] are abstract or interfaces", check.notRunnable);
assertNoViolations("Found inner classes that are tests, which are excluded from the test runner", check.innerClasses);
String classesToSubclass = String.join(",", ESTestCase.class.getSimpleName(), ESTestCase.class.getSimpleName(),
ESTokenStreamTestCase.class.getSimpleName(), LuceneTestCase.class.getSimpleName());
assertNoViolations("Pure Unit-Test found must subclass one of [" + classesToSubclass + "]", check.pureUnitTest);
assertNoViolations("Classes ending with [Tests] must subclass [" + classesToSubclass + "]", check.notImplementing);
if (!skipIntegTestsInDisguise) {
assertNoViolations("Subclasses of ESIntegTestCase should end with IT as they are integration tests",
check.integTestsInDisguise);
assertNoViolations("Pure Unit-Test found must subclass [" + check.testClass.getSimpleName() + "]", check.pureUnitTest);
assertNoViolations("Classes ending with [Tests] must subclass [" + check.testClass.getSimpleName() + "]", check.notImplementing);
if (skipIntegTestsInDisguise == false) {
assertNoViolations("Subclasses of " + check.integTestClass.getSimpleName() +
" should end with IT as they are integration tests", check.integTestsInDisguise);
}
}
@ -103,7 +105,15 @@ public class NamingConventionsCheck {
private final Set<Class<?>> notRunnable = new HashSet<>();
private final Set<Class<?>> innerClasses = new HashSet<>();
public void check(Path rootPath) throws IOException {
private final Class<?> testClass;
private final Class<?> integTestClass;
public NamingConventionsCheck(Class<?> testClass, Class<?> integTestClass) {
this.testClass = testClass;
this.integTestClass = integTestClass;
}
public void check(Path rootPath, boolean skipTestsInDisguised) throws IOException {
Files.walkFileTree(rootPath, new FileVisitor<Path>() {
/**
* The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load
@ -136,9 +146,9 @@ public class NamingConventionsCheck {
String filename = file.getFileName().toString();
if (filename.endsWith(".class")) {
String className = filename.substring(0, filename.length() - ".class".length());
Class<?> clazz = loadClass(className);
Class<?> clazz = loadClassWithoutInitializing(packageName + className);
if (clazz.getName().endsWith("Tests")) {
if (ESIntegTestCase.class.isAssignableFrom(clazz)) {
if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) {
integTestsInDisguise.add(clazz);
}
if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
@ -164,15 +174,7 @@ public class NamingConventionsCheck {
}
private boolean isTestCase(Class<?> clazz) {
return LuceneTestCase.class.isAssignableFrom(clazz);
}
private Class<?> loadClass(String className) {
try {
return Thread.currentThread().getContextClassLoader().loadClass(packageName + className);
} catch (ClassNotFoundException e) {
throw new RuntimeException(e);
}
return testClass.isAssignableFrom(clazz);
}
@Override
@ -186,7 +188,6 @@ public class NamingConventionsCheck {
* Fail the process if there are any violations in the set. Named to look like a junit assertion even though it isn't because it is
* similar enough.
*/
@SuppressForbidden(reason = "System.err/System.exit")
private static void assertNoViolations(String message, Set<Class<?>> set) {
if (false == set.isEmpty()) {
System.err.println(message + ":");
@ -201,10 +202,9 @@ public class NamingConventionsCheck {
* Fail the process if we didn't detect a particular violation. Named to look like a junit assertion even though it isn't because it is
* similar enough.
*/
@SuppressForbidden(reason = "System.err/System.exit")
private static void assertViolation(String className, Set<Class<?>> set) throws ClassNotFoundException {
className = "org.elasticsearch.test.test.NamingConventionsCheckBadClasses$" + className;
if (false == set.remove(Class.forName(className))) {
private static void assertViolation(String className, Set<Class<?>> set) {
className = "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className;
if (false == set.remove(loadClassWithoutInitializing(className))) {
System.err.println("Error in NamingConventionsCheck! Expected [" + className + "] to be a violation but wasn't.");
System.exit(1);
}
@ -213,9 +213,20 @@ public class NamingConventionsCheck {
/**
* Fail the process with the provided message.
*/
@SuppressForbidden(reason = "System.err/System.exit")
private static void fail(String reason) {
System.err.println(reason);
System.exit(1);
}
static Class<?> loadClassWithoutInitializing(String name) {
try {
return Class.forName(name,
// Don't initialize the class to save time. Not needed for this test and this doesn't share a VM with any other tests.
false,
// Use our classloader rather than the bootstrap class loader.
NamingConventionsCheck.class.getClassLoader());
} catch (ClassNotFoundException e) {
throw new RuntimeException(e);
}
}
}
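
Put together with the Gradle task changes above, the checker is now invoked with a command line along these lines (classpath and output directory abbreviated; exact paths vary by project):

java -Djna.nosys=true -cp <classpath> org.elasticsearch.test.NamingConventionsCheck \
    --test-class org.apache.lucene.util.LuceneTestCase \
    --integ-test-class org.elasticsearch.test.ESIntegTestCase \
    -- build/classes/test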

View File

@ -1,10 +1,10 @@
################################################################
#
# Licensed to Elasticsearch under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
@ -15,7 +15,6 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
################################################################
description=This is a description for a dummy test site plugin.
version=0.0.7-BOND-SITE
#
implementation-class=org.elasticsearch.gradle.doc.DocsTestPlugin

Binary file not shown.

View File

@ -39,6 +39,27 @@
<module name="EqualsHashCode" />
<!-- Checks that the order of modifiers conforms to the suggestions in the
Java Language specification, sections 8.1.1, 8.3.1 and 8.4.3. It is not that
the standard is perfect, but having a consistent order makes the code more
readable and no other order is compellingly better than the standard.
The correct order is:
public
protected
private
abstract
static
final
transient
volatile
synchronized
native
strictfp
-->
<module name="ModifierOrder" />
<module name="RedundantModifier" />
<!-- We don't use Java's builtin serialization and we suppress all warnings
about it. The flip side of that coin is that we shouldn't _try_ to use
it. We can't outright ban it with ForbiddenApis because it complains about

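A two-line illustration of what `ModifierOrder` enforces (the field is hypothetical):

final static public String NAME = "example";  // flagged: wrong modifier order
public static final String NAME = "example";  // conventional order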
View File

@ -7,8 +7,8 @@
<!-- On Windows, Checkstyle matches files using \ path separator -->
<!-- These files are generated by ANTLR so its silly to hold them to our rules. -->
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]PainlessLexer\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]PainlessParser(|BaseVisitor|Visitor)\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]antlr[/\\]PainlessLexer\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]antlr[/\\]PainlessParser(|BaseVisitor|Visitor)\.java" checks="." />
<!-- Hopefully temporary suppression of LineLength on files that don't pass it. We should remove these when we the
files start to pass. -->
@ -19,9 +19,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]Action.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ActionModule.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ActionRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ReplicationResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]TransportClusterHealthAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]hotthreads[/\\]NodesHotThreadsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]hotthreads[/\\]TransportNodesHotThreadsAction.java" checks="LineLength" />
@ -38,8 +36,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]put[/\\]TransportPutRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]verify[/\\]TransportVerifyRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]verify[/\\]VerifyRepositoryRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]ClusterRerouteRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]ClusterRerouteRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]TransportClusterRerouteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsRequestBuilder.java" checks="LineLength" />
@ -61,7 +57,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]snapshots[/\\]status[/\\]TransportSnapshotsStatusAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]ClusterStateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]TransportClusterStateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsIndices.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsNodeResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]TransportClusterStatsAction.java" checks="LineLength" />
@ -105,7 +100,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]open[/\\]TransportOpenIndexAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]recovery[/\\]TransportRecoveryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]refresh[/\\]TransportRefreshAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]refresh[/\\]TransportShardRefreshAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]segments[/\\]IndexSegments.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]segments[/\\]IndicesSegmentResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]segments[/\\]IndicesSegmentsRequestBuilder.java" checks="LineLength" />
@ -178,21 +172,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestProxyActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]PutPipelineTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateExecutionService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolateRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateShardResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportMultiPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportShardMultiPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]MultiSearchRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchPhaseExecutionException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]ShardSearchFailure.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]TransportClearScrollAction.java" checks="LineLength" />
@ -201,10 +185,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]AutoCreateIndex.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]DelegatingActionListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]DestructiveOperations.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]HandledTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]IndicesOptions.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ThreadedActionListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ToXContentToBytes.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]BroadcastOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]BroadcastRequest.java" checks="LineLength" />
@ -216,13 +198,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeReadOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeReadAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]TransportClusterInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]nodes[/\\]NodesOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]nodes[/\\]TransportNodesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]ReplicationRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]ReplicationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]TransportBroadcastReplicationAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]instance[/\\]InstanceShardOperationRequestBuilder.java" checks="LineLength" />
@ -251,10 +231,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]JarHell.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]Seccomp.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]Security.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cache[/\\]recycler[/\\]PageCacheRecycler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]ElasticsearchClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]FilterClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]node[/\\]NodeClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]support[/\\]AbstractClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]TransportClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]support[/\\]TransportProxyClient.java" checks="LineLength" />
@@ -267,7 +245,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]InternalClusterInfoService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]LocalNodeMasterListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]SnapshotsInProgress.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]MappingUpdatedAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]NodeIndexDeletedAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]NodeMappingRefreshAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]shard[/\\]ShardStateAction.java" checks="LineLength" />
@@ -288,7 +265,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]MetaDataMappingService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]MetaDataUpdateSettingsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]RepositoriesMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]node[/\\]DiscoveryNodes.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]IndexRoutingTable.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]IndexShardRoutingTable.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]OperationRouting.java" checks="LineLength" />
@@ -345,7 +321,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]recycler[/\\]Recyclers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]rounding[/\\]Rounding.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]unit[/\\]ByteSizeValue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]unit[/\\]TimeValue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]BigArrays.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]CancellableThreads.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]CollectionUtils.java" checks="LineLength" />
@@ -364,26 +339,19 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoverySettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]local[/\\]LocalDiscovery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]NodeJoinController.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscovery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]elect[/\\]ElectMasterService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]fd[/\\]FaultDetection.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]fd[/\\]MasterFaultDetection.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]fd[/\\]NodesFaultDetection.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]membership[/\\]MembershipAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ping[/\\]ZenPing.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PendingClusterStatesQueue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]ESFileStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]Environment.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]NodeEnvironment.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]AsyncShardFetch.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]DanglingIndicesState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayMetaState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]LocalAllocateDangledIndices.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]MetaDataStateFormat.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]PrimaryShardAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReplicaShardAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]TransportNodesListGatewayMetaState.java" checks="LineLength" />
@@ -392,21 +360,16 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexingSlowLog.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergePolicyConfig.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergeSchedulerConfig.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]NodeServicesProvider.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]SearchSlowLog.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CommonGramsTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CustomAnalyzerProvider.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]EdgeNGramTokenizerFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]NumericDoubleAnalyzer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]ShingleTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]StemmerOverrideTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]StopTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]compound[/\\]HyphenationCompoundWordTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]cache[/\\]bitset[/\\]BitsetFilterCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]cache[/\\]request[/\\]ShardRequestCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]codec[/\\]PerFieldMappingPostingFormatCodec.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]engine[/\\]ElasticsearchConcurrentMergeScheduler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]engine[/\\]Engine.java" checks="LineLength" />
@@ -420,16 +383,13 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]fieldcomparator[/\\]FloatValuesComparatorSource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]fieldcomparator[/\\]LongValuesComparatorSource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]GlobalOrdinalsBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]GlobalOrdinalsIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]InternalGlobalOrdinalsIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]MultiOrdinals.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]OrdinalsBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]SinglePackedOrdinals.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractAtomicParentChildFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractIndexGeoPointFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractIndexOrdinalsFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]BinaryDVIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]GeoPointArrayIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]PagedBytesIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]ParentChildIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]SortedNumericDVIndexFieldData.java" checks="LineLength" />
@@ -476,14 +436,9 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]ObjectMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]RootObjectMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]merge[/\\]MergeStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]ExtractQueryTermsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorQueriesRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]AbstractQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MatchQueryParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MoreLikeThisQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryBuilders.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryShardContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryValidationException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]InnerHitsQueryParserHelper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]QueryParsers.java" checks="LineLength" />
@@ -500,7 +455,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]ShardStateMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]StoreRecovery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]TranslogRecoveryPerformer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]similarity[/\\]SimilarityService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]snapshots[/\\]blobstore[/\\]BlobStoreIndexShardRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]snapshots[/\\]blobstore[/\\]BlobStoreIndexShardSnapshots.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]store[/\\]IndexStore.java" checks="LineLength" />
@@ -516,12 +470,9 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndexingMemoryController.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesWarmer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]AnalysisModule.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]HunspellService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltCacheFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltTokenFilters.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]breaker[/\\]HierarchyCircuitBreakerService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]cluster[/\\]IndicesClusterStateService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]fielddata[/\\]cache[/\\]IndicesFieldDataCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]fielddata[/\\]cache[/\\]IndicesFieldDataCacheListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]flush[/\\]ShardsSyncedFlushResult.java" checks="LineLength" />
@@ -530,31 +481,16 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoveryFailedException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySourceHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoveryState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]StartRecoveryRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]TransportNodesListShardStoreMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]ttl[/\\]IndicesTTLService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]IngestMetadata.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineExecutionService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]CompoundProcessor.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]IngestDocument.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]Pipeline.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]ConvertProcessor.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]fs[/\\]FsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]DeadlockAnalyzer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]GcNames.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]HotThreads.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmGcMonitorService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]os[/\\]OsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]process[/\\]ProcessService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]Node.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]internal[/\\]InternalSettingsPreparer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorQuery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]DummyPluginInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]RemovePluginCommand.java" checks="LineLength" />
@@ -569,13 +505,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]fs[/\\]FsRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]uri[/\\]URLIndexShardRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]uri[/\\]URLRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BaseRestHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BytesRestResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestController.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]RestClusterHealthAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]info[/\\]RestNodesInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]stats[/\\]RestNodesStatsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]RestClusterRerouteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]RestClusterGetSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]RestClusterUpdateSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]RestClusterStateAction.java" checks="LineLength" />
@@ -596,20 +530,15 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]bulk[/\\]RestBulkAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestCountAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestIndicesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodeAttrsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestPendingClusterTasksAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestShardsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestThreadPoolAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]get[/\\]RestMultiGetAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]index[/\\]RestIndexAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]main[/\\]RestMainAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]percolate[/\\]RestPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]script[/\\]RestDeleteIndexedScriptAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]script[/\\]RestPutIndexedScriptAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestClearScrollAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestMultiSearchAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestSearchAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestSearchScrollAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]suggest[/\\]RestSuggestAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]support[/\\]RestActions.java" checks="LineLength" />
@@ -626,7 +555,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]MultiValueMode.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]SearchService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]AggregatorFactories.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]AggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]InternalAggregation.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]InternalMultiBucketAggregation.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]ValuesSourceAggregationBuilder.java" checks="LineLength" />
@@ -639,10 +567,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]filters[/\\]FiltersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]filters[/\\]InternalFilters.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]geogrid[/\\]GeoHashGridAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]geogrid[/\\]GeoHashGridParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]global[/\\]GlobalAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]global[/\\]InternalGlobal.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]histogram[/\\]DateHistogramParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]histogram[/\\]HistogramAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]missing[/\\]InternalMissing.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]missing[/\\]MissingAggregator.java" checks="LineLength" />
@@ -651,10 +577,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]nested[/\\]ReverseNestedAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]InternalRange.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]RangeAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]date[/\\]DateRangeParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]date[/\\]InternalDateRange.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]geodistance[/\\]GeoDistanceParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]geodistance[/\\]InternalGeoDistance.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]ipv4[/\\]InternalIPv4Range.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]sampler[/\\]DiversifiedBytesHashSamplerAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]sampler[/\\]DiversifiedMapSamplerAggregator.java" checks="LineLength" />
@@ -665,63 +588,39 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]GlobalOrdinalsSignificantTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]InternalSignificantTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantLongTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantLongTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantStringTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantStringTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]UnmappedSignificantTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]GND.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]JLHScore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]NXYSignificanceHeuristic.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]PercentageScore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]ScriptHeuristic.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]SignificanceHeuristic.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]AbstractTermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]DoubleTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]DoubleTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]GlobalOrdinalsStringTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]InternalOrder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]InternalTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]LongTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]LongTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]StringTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]StringTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]UnmappedTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]support[/\\]IncludeExclude.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]ValuesSourceMetricsAggregationBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]CardinalityAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]CardinalityAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]HyperLogLogPlusPlus.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]geobounds[/\\]GeoBoundsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]geobounds[/\\]InternalGeoBounds.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]AbstractPercentilesParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]AbstractTDigestPercentilesAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]TDigestPercentileRanksAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]TDigestPercentilesAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]InternalScriptedMetric.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]ScriptedMetricAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]StatsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]ExtendedStatsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]ExtendedStatsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]InternalExtendedStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]tophits[/\\]TopHitsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]BucketHelpers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketmetrics[/\\]BucketMetricsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketmetrics[/\\]avg[/\\]AvgBucketPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketscript[/\\]BucketScriptPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]cumulativesum[/\\]CumulativeSumPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]derivative[/\\]DerivativePipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]derivative[/\\]InternalDerivative.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]having[/\\]BucketSelectorPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]AggregationContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]AggregationPath.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]GeoPointParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]ValuesSourceParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]format[/\\]ValueFormat.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]format[/\\]ValueParser.java" checks="LineLength" />
@@ -736,10 +635,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]FetchSubPhaseParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]explain[/\\]ExplainFetchSubPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]fielddata[/\\]FieldDataFieldsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsFetchSubPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]script[/\\]ScriptFieldsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]source[/\\]FetchSourceContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]highlight[/\\]FastVectorHighlighter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]highlight[/\\]HighlightPhase.java" checks="LineLength" />
@@ -764,7 +660,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]GeoDistanceSortParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]ScriptSortParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]SortParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestContextParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestUtils.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]completion[/\\]CompletionSuggestParser.java" checks="LineLength" />
@@ -778,7 +673,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]phrase[/\\]WordScorer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]term[/\\]TermSuggestParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]RestoreService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotShardFailure.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotShardsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotsService.java" checks="LineLength" />
@@ -788,7 +682,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ESExceptionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]NamingConventionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]VersionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ListenerActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]RejectionActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]HotThreadsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthResponsesTests.java" checks="LineLength" />
@@ -799,25 +692,21 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]TransportAnalyzeActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]cache[/\\]clear[/\\]ClearIndicesCacheBlocksIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]create[/\\]CreateIndexIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]flush[/\\]SyncedFlushUnitTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]get[/\\]GetIndexIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreRequestIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]MetaDataIndexTemplateServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]upgrade[/\\]UpgradeIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkProcessorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]RetryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]MultiGetShardRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]BulkRequestModifierTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestProxyActionFilterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateDocumentSimpleResultTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateExecutionServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequestParsingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]WriteableIngestDocumentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolatorRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]MultiSearchRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchRequestBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]AutoCreateIndexTests.java" checks="LineLength" />
@@ -843,10 +732,7 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]AbstractClientHeadersTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterHealthIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterInfoServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateDiffIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]DiskUsageTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]MinimumMasterNodesIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]NoMasterNodeIT.java" checks="LineLength" />
@@ -865,7 +751,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]allocation[/\\]SimpleAllocationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]ClusterIndexHealthTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]ClusterStateHealthTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]RoutingTableGenerator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]AutoExpandReplicasTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]DateMathExpressionResolverTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]HumanReadableIndexSettingsTests.java" checks="LineLength" />
@@ -877,13 +762,11 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]AllocationIdTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]DelayedAllocationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]PrimaryAllocationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingBackwardCompatibilityTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingTableTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]ShardRoutingHelper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]ShardRoutingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]UnassignedInfoTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]allocation[/\\]ActiveAllocationIdTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]allocation[/\\]AddIncrementallyTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]allocation[/\\]AllocationCommandsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]allocation[/\\]AllocationPriorityTests.java" checks="LineLength" />
@ -932,7 +815,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]geo[/\\]ShapeBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]hash[/\\]MessageDigestsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]inject[/\\]ModuleTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]io[/\\]stream[/\\]BytesStreamsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]index[/\\]FreqTermsEnumTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]uid[/\\]VersionsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]CidrsTests.java" checks="LineLength" />
@ -948,18 +830,13 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]builder[/\\]XContentBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]cbor[/\\]JsonVsCborTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]smile[/\\]JsonVsSmileTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]filtering[/\\]AbstractFilteringJsonGeneratorTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]filtering[/\\]FilterPathGeneratorFilteringTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]consistencylevel[/\\]WriteConsistencyLevelIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]deps[/\\]joda[/\\]SimpleJodaTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]deps[/\\]lucene[/\\]VectorHighlighterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]BlockingClusterStatePublishResponseHandlerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryWithServiceDisruptionsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenFaultDetectionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenUnicastDiscoveryIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]NodeJoinControllerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscoveryUnitTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ping[/\\]unicast[/\\]UnicastZenPingIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]document[/\\]DocumentActionsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]EnvironmentTests.java" checks="LineLength" />
@ -986,9 +863,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexingSlowLogTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergePolicySettingsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]SearchSlowLogTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CompoundAnalysisTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]NGramTokenizerFactoryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PatternCaptureTokenFilterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PreBuiltAnalyzerProviderFactoryTests.java" checks="LineLength" />
@ -1007,11 +881,8 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]BinaryDVFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]DuelFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]FieldDataCacheTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]FilterFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]IndexFieldDataServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]PagedBytesStringFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ParentChildFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]SortedSetDVStringFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]DocumentFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]DynamicMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]FieldTypeTestCase.java" checks="LineLength" />
@ -1056,14 +927,11 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]simple[/\\]SimpleMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]source[/\\]DefaultSourceMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]string[/\\]SimpleStringMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]string[/\\]StringFieldMapperPositionIncrementGapTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]timestamp[/\\]TimestampMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]ttl[/\\]TTLMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]typelevels[/\\]ParseDocumentTypeLevelsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]update[/\\]UpdateMappingOnClusterIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]update[/\\]UpdateMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]AbstractQueryTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoolQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoostingQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]CommonTermsQueryBuilderTests.java" checks="LineLength" />
@ -1075,7 +943,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MultiMatchQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RandomQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RangeQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]ScoreModeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanMultiTermQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanNotQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]plugin[/\\]CustomQueryParserIT.java" checks="LineLength" />
@ -1106,7 +973,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerSingleNodeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesOptionsIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltAnalyzerIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]AnalyzeActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]HunspellServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]exists[/\\]indices[/\\]IndicesExistsIT.java" checks="LineLength" />
@ -1131,33 +997,14 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]state[/\\]CloseIndexDisableCloseAllIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]state[/\\]OpenCloseIndexIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]state[/\\]RareClusterStateIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]state[/\\]SimpleIndexStateIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]stats[/\\]IndexStatsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]template[/\\]SimpleIndexTemplateIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineExecutionServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineStoreTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]CompoundProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]IngestDocumentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]PipelineFactoryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]ValueSourceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]AbstractStringProcessorTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]AppendProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]DateFormatTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]DateProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]GsubProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]RenameProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]SetProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]SplitProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]mget[/\\]SimpleMgetIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmGcMonitorServiceSettingsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]os[/\\]OsProbeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]nodesinfo[/\\]NodeInfoStreamingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]options[/\\]detailederrors[/\\]DetailedErrorsEnabledIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorQueryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginInfoTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]recovery[/\\]FullRollingRestartIT.java" checks="LineLength" />
@ -1168,7 +1015,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BytesRestResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsRegexDefaultIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsRegexIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]NoOpClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestControllerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]util[/\\]RestUtilsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]routing[/\\]AliasResolveRoutingIT.java" checks="LineLength" />
@ -1265,9 +1111,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]update[/\\]UpdateIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]validate[/\\]SimpleValidateQueryIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]versioning[/\\]SimpleVersioningIT.java" checks="LineLength" />
<suppress files="modules[/\\]ingest-grok[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]grok[/\\]Grok.java" checks="LineLength" />
<suppress files="modules[/\\]ingest-grok[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]grok[/\\]GrokProcessorTests.java" checks="LineLength" />
<suppress files="modules[/\\]ingest-grok[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]grok[/\\]GrokTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-expression[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]expression[/\\]ExpressionPlugin.java" checks="LineLength" />
<suppress files="modules[/\\]lang-expression[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]expression[/\\]ExpressionScriptEngineService.java" checks="LineLength" />
<suppress files="modules[/\\]lang-expression[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]expression[/\\]ExpressionSearchScript.java" checks="LineLength" />
@ -1276,18 +1119,10 @@
<suppress files="modules[/\\]lang-expression[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]expression[/\\]MoreExpressionTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovyPlugin.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovyScriptEngineService.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]BucketScriptTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]BulkTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]DoubleTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]EquivalenceTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]GeoDistanceTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HDRPercentileRanksTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HDRPercentilesTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HistogramTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IPv4RangeTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndexLookupTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndexedScriptTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndicesRequestTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]LongTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]MinDocCountTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]MinTests.java" checks="LineLength" />
@ -1296,19 +1131,25 @@
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]SearchFieldsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]SimpleSortTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]StringTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TDigestPercentileRanksTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TDigestPercentilesTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]package-info.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovyScriptTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovySecurityTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustachePlugin.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]RenderSearchTemplateTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]SuggestSearchTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TemplateQueryParserTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TemplateQueryTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]package-info.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustacheScriptEngineTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustacheTests.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequest.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolateShardResponse.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportMultiPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportShardMultiPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorIT.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorIT.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorRequestTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuCollationTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuFoldingTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuNormalizerTokenFilterFactory.java" checks="LineLength" />
@ -1317,28 +1158,13 @@
<suppress files="plugins[/\\]analysis-kuromoji[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]JapaneseStopTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-kuromoji[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]KuromojiAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-phonetic[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PhoneticTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-smartcn[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]SimpleSmartChineseAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-stempel[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PolishAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryRequest.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryRequestBuilder.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryResponse.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]TransportDeleteByQueryAction.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]IndexDeleteByQueryResponseTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]TransportDeleteByQueryActionTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugin[/\\]deletebyquery[/\\]DeleteByQueryTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]management[/\\]AzureComputeService.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]AbstractAzureTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureMinimumMasterNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureSimpleTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureTwoStartedNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]AwsEc2ServiceImpl.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure-classic[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]AbstractAzureTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure-classic[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureMinimumMasterNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure-classic[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureSimpleTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure-classic[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureTwoStartedNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]AbstractAwsTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ec2[/\\]AmazonEC2Mock.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-gce[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]gce[/\\]GceUnicastHostsProvider.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-gce[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]gce[/\\]GceNetworkTests.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessor.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessorFactoryTests.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessorTests.java" checks="LineLength" />
<suppress files="plugins[/\\]lang-javascript[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugin[/\\]javascript[/\\]JavaScriptPlugin.java" checks="LineLength" />
<suppress files="plugins[/\\]lang-javascript[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]javascript[/\\]JavaScriptScriptEngineService.java" checks="LineLength" />
<suppress files="plugins[/\\]lang-javascript[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]javascript[/\\]JavaScriptScriptEngineTests.java" checks="LineLength" />
@ -1360,13 +1186,8 @@
<suppress files="plugins[/\\]mapper-murmur3[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]murmur3[/\\]Murmur3FieldMapper.java" checks="LineLength" />
<suppress files="plugins[/\\]mapper-murmur3[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]murmur3[/\\]Murmur3FieldMapperTests.java" checks="LineLength" />
<suppress files="plugins[/\\]mapper-murmur3[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]murmur3[/\\]Murmur3FieldMapperUpgradeTests.java" checks="LineLength" />
<suppress files="plugins[/\\]mapper-size[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]size[/\\]SizeFieldMapper.java" checks="LineLength" />
<suppress files="plugins[/\\]mapper-size[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]size[/\\]SizeFieldMapperUpgradeTests.java" checks="LineLength" />
<suppress files="plugins[/\\]mapper-size[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]size[/\\]SizeMappingIT.java" checks="LineLength" />
<suppress files="plugins[/\\]mapper-size[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]size[/\\]SizeMappingTests.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]blobstore[/\\]AzureBlobContainer.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]blobstore[/\\]AzureBlobStore.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageService.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageServiceImpl.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageSettings.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]azure[/\\]AzureRepository.java" checks="LineLength" />
@ -1386,21 +1207,13 @@
<suppress files="plugins[/\\]repository-s3[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]blobstore[/\\]MockDefaultS3OutputStream.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-s3[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]s3[/\\]AbstractS3SnapshotRestoreTest.java" checks="LineLength" />
<suppress files="plugins[/\\]store-smb[/\\]src[/\\]main[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]store[/\\]SmbDirectoryWrapper.java" checks="LineLength" />
<suppress files="qa[/\\]evil-tests[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]ESPolicyUnitTests.java" checks="LineLength" />
<suppress files="qa[/\\]evil-tests[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]EvilSecurityTests.java" checks="LineLength" />
<suppress files="qa[/\\]evil-tests[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CheckFileCommandTests.java" checks="LineLength" />
<suppress files="qa[/\\]evil-tests[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]tribe[/\\]TribeUnitTests.java" checks="LineLength" />
<suppress files="qa[/\\]smoke-test-client[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]smoketest[/\\]ESSmokeClientTestCase.java" checks="LineLength" />
<suppress files="qa[/\\]smoke-test-ingest-with-all-dependencies[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]CombineProcessorsTests.java" checks="LineLength" />
<suppress files="qa[/\\]smoke-test-ingest-with-all-dependencies[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]IngestDocumentMustacheIT.java" checks="LineLength" />
<suppress files="qa[/\\]smoke-test-ingest-with-all-dependencies[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]IngestMustacheSetProcessorIT.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]BootstrapForTesting.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]MockInternalClusterInfoService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]TestShardRouting.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CliToolTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]MockBigArrays.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]MockSearchService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]script[/\\]NativeSignificanceScoreScriptWithParams.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]AbstractQueryTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]BackgroundIndexer.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]CompositeTestCluster.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]CorruptionUtils.java" checks="LineLength" />
@ -1423,36 +1236,11 @@
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]disruption[/\\]SlowClusterStateProcessing.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]AssertingSearcher.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]MockEngineSupport.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]MockInternalEngine.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]hamcrest[/\\]ElasticsearchAssertions.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]junit[/\\]listeners[/\\]LoggingListener.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]ESRestTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]RestTestExecutionContext.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]client[/\\]RestClient.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]client[/\\]http[/\\]HttpRequestBuilder.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]json[/\\]JsonPath.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]GreaterThanEqualToParser.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]GreaterThanParser.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]LessThanOrEqualToParser.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]LessThanParser.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]RestTestSuiteParseContext.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]RestTestSuiteParser.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]DoSection.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]GreaterThanAssertion.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]GreaterThanEqualToAssertion.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]LengthAssertion.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]LessThanAssertion.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]LessThanOrEqualToAssertion.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]section[/\\]MatchAssertion.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]spec[/\\]RestApiParser.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]support[/\\]FileUtils.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]store[/\\]MockFSDirectoryService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]store[/\\]MockFSIndexStore.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]FileUtilsTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]JsonPathTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]RestTestParserTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]test[/\\]InternalTestClusterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CliTool.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]indices[/\\]settings[/\\]RestGetSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]tribe[/\\]TribeService.java" checks="LineLength" />

View File

@ -1,2 +1,2 @@
-#!/bin/sh -e
+#!/bin/bash -e
 <% commands.each {command -> %><%= command %><% } %>

View File

@ -1,2 +1,2 @@
-#!/bin/sh -e
+#!/bin/bash -e
 <% commands.each {command -> %><%= command %><% } %>

View File

@ -32,4 +32,7 @@ org.apache.lucene.index.IndexReader#getCombinedCoreAndDeletesKey()
 @defaultMessage Soon to be removed
 org.apache.lucene.document.FieldType#numericType()
 org.apache.lucene.document.InetAddressPoint#newPrefixQuery(java.lang.String, java.net.InetAddress, int) @LUCENE-7232
+@defaultMessage Don't use MethodHandles in slow ways, don't be lenient in tests.
+java.lang.invoke.MethodHandle#invoke(java.lang.Object[])
+java.lang.invoke.MethodHandle#invokeWithArguments(java.lang.Object[])
+java.lang.invoke.MethodHandle#invokeWithArguments(java.util.List)
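
The additions ban the slow, lenient MethodHandle entry points in favor of exact-typed invocation. A minimal standalone sketch of the difference (illustrative only, not code from this commit):

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleDemo {
    public static void main(String[] args) throws Throwable {
        MethodHandle concat = MethodHandles.lookup().findVirtual(
                String.class, "concat", MethodType.methodType(String.class, String.class));

        // Allowed: invokeExact is checked against the handle's exact type
        // at the call site, so the JVM can optimize it with no boxing.
        String fast = (String) concat.invokeExact("foo", "bar");

        // Banned by the signatures above: invokeWithArguments boxes every
        // argument into an Object[] and applies lenient conversions per call.
        Object slow = concat.invokeWithArguments("foo", "bar");

        System.out.println(fast + " " + slow);
    }
}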

View File

@ -21,5 +21,7 @@ com.carrotsearch.randomizedtesting.annotations.Repeat @ Don't commit hardcoded r
 org.apache.lucene.codecs.Codec#setDefault(org.apache.lucene.codecs.Codec) @ Use the SuppressCodecs("*") annotation instead
 org.apache.lucene.util.LuceneTestCase$Slow @ Don't write slow tests
 org.junit.Ignore @ Use AwaitsFix instead
+org.apache.lucene.util.LuceneTestCase$Nightly @ We don't run nightly tests at this point!
+com.carrotsearch.randomizedtesting.annotations.Nightly @ We don't run nightly tests at this point!
 org.junit.Test @defaultMessage Just name your test method testFooBar
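
The two Nightly entries and the org.junit.Test entry encode test conventions: nightly-only tests are not run, skipped tests must point at a tracking issue via AwaitsFix rather than a bare @Ignore, and test methods are picked up by the test-name prefix rather than an annotation. A sketch of the preferred shape (class name, method bodies, and issue URL are invented for illustration):

import org.apache.lucene.util.LuceneTestCase;

public class FooBarTests extends LuceneTestCase {

    // Skipped with a pointer to a tracking issue instead of @Ignore.
    // The URL below is a placeholder, not a real issue.
    @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/00000")
    public void testAwaitingFix() {
        fail("re-enable once the underlying bug is fixed");
    }

    // No @Test annotation: the "test" name prefix marks this as a test.
    public void testFooBar() {
        assertEquals(4, 2 + 2);
    }
}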

View File

@ -17,9 +17,7 @@
  * under the License.
  */
-package org.elasticsearch.test.test;
-
-import org.elasticsearch.test.ESTestCase;
+package org.elasticsearch.test;
+
+import junit.framework.TestCase;
@ -30,21 +28,35 @@ public class NamingConventionsCheckBadClasses {
     public static final class NotImplementingTests {
     }
 
-    public static final class WrongName extends ESTestCase {
+    public static final class WrongName extends UnitTestCase {
+        /*
+         * Dummy test so the tests pass. We do this *and* skip the tests so anyone who jumps back to a branch without these tests can still
+         * compile without a failure. That is because clean doesn't actually clean these....
+         */
+        public void testDummy() {}
     }
 
-    public static abstract class DummyAbstractTests extends ESTestCase {
+    public abstract static class DummyAbstractTests extends UnitTestCase {
    }
 
    public interface DummyInterfaceTests {
    }
 
-    public static final class InnerTests extends ESTestCase {
+    public static final class InnerTests extends UnitTestCase {
         public void testDummy() {}
     }
 
-    public static final class WrongNameTheSecond extends ESTestCase {
+    public static final class WrongNameTheSecond extends UnitTestCase {
         public void testDummy() {}
     }
+
+    public static final class PlainUnit extends TestCase {
+        public void testDummy() {}
+    }
+
+    public abstract static class UnitTestCase extends TestCase {
+    }
+
+    public abstract static class IntegTestCase extends UnitTestCase {
+    }
 }
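
Net effect, as far as the hunks alone show: the naming-conventions fixture now bootstraps its own UnitTestCase and IntegTestCase roots on top of plain JUnit TestCase, so the check no longer needs org.elasticsearch.test.ESTestCase on its classpath.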

View File

@ -1,5 +1,5 @@
-elasticsearch = 5.0.0-alpha2
-lucene = 6.0.0
+elasticsearch = 5.0.0-alpha5
+lucene = 6.1.0
 
 # optional dependencies
 spatial4j = 0.6
@ -7,15 +7,16 @@ jts = 1.13
 jackson = 2.7.1
 log4j = 1.2.17
 slf4j = 1.6.2
-jna = 4.1.0
+jna = 4.2.2
 # test dependencies
 randomizedrunner = 2.3.2
 junit = 4.11
-# TODO: Upgrade httpclient to a version > 4.5.1 once released. Then remove o.e.test.rest.client.StrictHostnameVerifier* and use
-# DefaultHostnameVerifier instead since we no longer need to workaround https://issues.apache.org/jira/browse/HTTPCLIENT-1698
-httpclient = 4.3.6
-httpcore = 4.3.3
+httpclient = 4.5.2
+httpcore = 4.4.4
 commonslogging = 1.1.3
 commonscodec = 1.10
 hamcrest = 1.3
 securemock = 1.2
+
+# benchmark dependencies
+jmh = 1.12
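
These keys surface in the Gradle builds as the versions map; the new client/rest/build.gradle below, for example, pins its HTTP dependencies through "${versions.httpclient}" and "${versions.httpcore}".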

client/rest/build.gradle (new file, 80 lines)
View File

@ -0,0 +1,80 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.JavaVersion
apply plugin: 'elasticsearch.build'
apply plugin: 'ru.vyarus.animalsniffer'
targetCompatibility = JavaVersion.VERSION_1_7
sourceCompatibility = JavaVersion.VERSION_1_7
group = 'org.elasticsearch.client'
dependencies {
compile "org.apache.httpcomponents:httpclient:${versions.httpclient}"
compile "org.apache.httpcomponents:httpcore:${versions.httpcore}"
compile "commons-codec:commons-codec:${versions.commonscodec}"
compile "commons-logging:commons-logging:${versions.commonslogging}"
testCompile "org.elasticsearch.client:test:${version}"
testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}"
testCompile "junit:junit:${versions.junit}"
testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}"
testCompile "org.elasticsearch:securemock:${versions.securemock}"
testCompile "org.codehaus.mojo:animal-sniffer-annotations:1.15"
signature "org.codehaus.mojo.signature:java17:1.0@signature"
}
forbiddenApisMain {
//client does not depend on core, so only jdk signatures should be checked
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
forbiddenApisTest {
//we are using jdk-internal instead of jdk-non-portable to allow for com.sun.net.httpserver.* usage
bundledSignatures -= 'jdk-non-portable'
bundledSignatures += 'jdk-internal'
//client does not depend on core, so only jdk signatures should be checked
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
//JarHell is part of es core, which we don't want to pull in
jarHell.enabled=false
namingConventions {
testClass = 'org.elasticsearch.client.RestClientTestCase'
//we don't have integration tests
skipIntegTestInDisguise = true
}
thirdPartyAudit.excludes = [
//commons-logging optional dependencies
'org.apache.avalon.framework.logger.Logger',
'org.apache.log.Hierarchy',
'org.apache.log.Logger',
'org.apache.log4j.Category',
'org.apache.log4j.Level',
'org.apache.log4j.Logger',
'org.apache.log4j.Priority',
//commons-logging provided dependencies
'javax.servlet.ServletContextEvent',
'javax.servlet.ServletContextListener'
]
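
This build file introduces the low-level Java REST client as a standalone, Java 7-compatible artifact with no dependency on Elasticsearch core. For orientation, a minimal consumer sketch; the builder and performRequest signatures are assumed from the 5.x low-level client and do not appear in this diff:

import java.util.Collections;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class RestClientUsage {
    public static void main(String[] args) throws Exception {
        // RestClient implements Closeable, so try-with-resources works even
        // on the Java 7 target declared above. Host and port are examples.
        try (RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build()) {
            // Assumed overload: performRequest(method, endpoint, params).
            Response response = client.performRequest(
                    "GET", "/", Collections.<String, String>emptyMap());
            System.out.println(response.getStatusLine());
        }
    }
}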

View File

@ -0,0 +1,17 @@
Apache Commons Codec
Copyright 2002-2014 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
contains test data from http://aspell.net/test/orig/batch0.tab.
Copyright (C) 2002 Kevin Atkinson (kevina@gnu.org)
===============================================================================
The content of package org.apache.commons.codec.language.bm has been translated
from the original php source code available at http://stevemorse.org/phoneticinfo.htm
with permission from the original authors.
Original source copyright:
Copyright (c) 2008 Alexander Beider & Stephen P. Morse.

View File

@ -0,0 +1 @@
f6f66e966c70a83ffbdb6f17a0919eaf7c8aca7f

View File

@ -0,0 +1,6 @@
Apache Commons Logging
Copyright 2003-2014 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

View File

@ -0,0 +1 @@
733db77aa8d9b2d68015189df76ab06304406e50

View File

@ -0,0 +1,6 @@
Apache HttpComponents Client
Copyright 1999-2016 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

View File

@ -0,0 +1 @@
b31526a230871fbe285fbcbe2813f9c0839ae9b0

View File

@ -0,0 +1,558 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
=========================================================================
This project includes Public Suffix List copied from
<https://publicsuffix.org/list/effective_tld_names.dat>
licensed under the terms of the Mozilla Public License, v. 2.0
Full license text: <http://mozilla.org/MPL/2.0/>
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
View File
@@ -0,0 +1,6 @@
Apache HttpComponents Client
Copyright 1999-2016 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
View File
@@ -0,0 +1,71 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import java.util.concurrent.TimeUnit;
/**
* Holds the state of a dead connection to a host. Keeps track of how many failed attempts were performed and
* when the host should be retried (based on the number of previous failed attempts).
* The class is immutable; a new copy of it must be created each time the state has to change.
*/
final class DeadHostState {
private static final long MIN_CONNECTION_TIMEOUT_NANOS = TimeUnit.MINUTES.toNanos(1);
private static final long MAX_CONNECTION_TIMEOUT_NANOS = TimeUnit.MINUTES.toNanos(30);
static final DeadHostState INITIAL_DEAD_STATE = new DeadHostState();
private final int failedAttempts;
private final long deadUntilNanos;
private DeadHostState() {
this.failedAttempts = 1;
this.deadUntilNanos = System.nanoTime() + MIN_CONNECTION_TIMEOUT_NANOS;
}
/**
* We keep track of how many times a certain node fails consecutively. The higher that number is, the longer we will wait
* to retry that same node again. The minimum is 1 minute (for a node that only failed once), the maximum is 30 minutes (for a node
* that failed many consecutive times).
*/
DeadHostState(DeadHostState previousDeadHostState) {
long timeoutNanos = (long)Math.min(MIN_CONNECTION_TIMEOUT_NANOS * 2 * Math.pow(2, previousDeadHostState.failedAttempts * 0.5 - 1),
MAX_CONNECTION_TIMEOUT_NANOS);
this.deadUntilNanos = System.nanoTime() + timeoutNanos;
this.failedAttempts = previousDeadHostState.failedAttempts + 1;
}
/**
* Returns the timestamp (in nanos) until which the host is supposed to stay dead without being retried.
* After that point the host should be retried.
*/
long getDeadUntilNanos() {
return deadUntilNanos;
}
@Override
public String toString() {
return "DeadHostState{" +
"failedAttempts=" + failedAttempts +
", deadUntilNanos=" + deadUntilNanos +
'}';
}
}
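To make the backoff above concrete, here is a minimal standalone sketch that recomputes the same formula outside the class (the class itself is package-private and keeps its fields private); the loop bound and class name are illustrative only:
import java.util.concurrent.TimeUnit;
public class DeadHostBackoffSketch {
    public static void main(String[] args) {
        long minNanos = TimeUnit.MINUTES.toNanos(1);
        long maxNanos = TimeUnit.MINUTES.toNanos(30);
        for (int failedAttempts = 1; failedAttempts <= 10; failedAttempts++) {
            //same formula as the DeadHostState copy constructor: the wait grows by a factor of sqrt(2) per failure
            long timeoutNanos = (long) Math.min(minNanos * 2 * Math.pow(2, failedAttempts * 0.5 - 1), maxNanos);
            System.out.printf("after %2d failure(s): retry in ~%.1f minutes%n",
                    failedAttempts, timeoutNanos / (double) TimeUnit.MINUTES.toNanos(1));
        }
    }
}
The interval starts at roughly 1.4 minutes and saturates at the 30-minute cap after about ten consecutive failures.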
View File
@@ -16,8 +16,9 @@
  * specific language governing permissions and limitations
  * under the License.
  */
-package org.elasticsearch.test.rest.client.http;
+package org.elasticsearch.client;
+import org.apache.http.client.methods.HttpDelete;
 import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
 import java.net.URI;
@@ -25,11 +26,11 @@ import java.net.URI;
 /**
  * Allows to send DELETE requests providing a body (not supported out of the box)
  */
-public class HttpDeleteWithEntity extends HttpEntityEnclosingRequestBase {
+final class HttpDeleteWithEntity extends HttpEntityEnclosingRequestBase {
-    public final static String METHOD_NAME = "DELETE";
+    static final String METHOD_NAME = HttpDelete.METHOD_NAME;
-    public HttpDeleteWithEntity(final URI uri) {
+    HttpDeleteWithEntity(final URI uri) {
         setURI(uri);
     }
View File
@@ -16,20 +16,21 @@
  * specific language governing permissions and limitations
  * under the License.
  */
-package org.elasticsearch.test.rest.client.http;
+package org.elasticsearch.client;
 import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
+import org.apache.http.client.methods.HttpGet;
 import java.net.URI;
 /**
  * Allows to send GET requests providing a body (not supported out of the box)
  */
-public class HttpGetWithEntity extends HttpEntityEnclosingRequestBase {
+final class HttpGetWithEntity extends HttpEntityEnclosingRequestBase {
-    public final static String METHOD_NAME = "GET";
+    static final String METHOD_NAME = HttpGet.METHOD_NAME;
-    public HttpGetWithEntity(final URI uri) {
+    HttpGetWithEntity(final URI uri) {
         setURI(uri);
     }
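Neither stock HttpGet nor HttpDelete extends HttpEntityEnclosingRequestBase, which is why these two small subclasses exist. A hypothetical sketch of attaching a body to a GET; it has to live in the org.elasticsearch.client package because the classes become package-private in this change, and the endpoint and class name are made up:
package org.elasticsearch.client;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import java.net.URI;
public class GetWithBodySketch {
    public static void main(String[] args) {
        HttpGetWithEntity get = new HttpGetWithEntity(URI.create("/index/_search"));
        //attach a search body to the GET, which plain HttpGet cannot carry
        get.setEntity(new StringEntity("{\"query\":{\"match_all\":{}}}", ContentType.APPLICATION_JSON));
        System.out.println(get.getMethod() + " " + get.getURI());
    }
}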
View File
@@ -0,0 +1,156 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.RequestLine;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.entity.BufferedHttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.util.EntityUtils;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
/**
* Helper class that exposes static methods to unify the way requests are logged.
* Includes trace logging to log complete requests and responses in curl format.
* Useful for debugging, manually sending logged requests via curl and checking their responses.
* Trace logging is a feature that all the language clients provide.
*/
final class RequestLogger {
private static final Log tracer = LogFactory.getLog("tracer");
private RequestLogger() {
}
/**
* Logs a request that yielded a response
*/
static void logResponse(Log logger, HttpUriRequest request, HttpHost host, HttpResponse httpResponse) {
if (logger.isDebugEnabled()) {
logger.debug("request [" + request.getMethod() + " " + host + getUri(request.getRequestLine()) +
"] returned [" + httpResponse.getStatusLine() + "]");
}
if (tracer.isTraceEnabled()) {
String requestLine;
try {
requestLine = buildTraceRequest(request, host);
} catch(IOException e) {
requestLine = "";
tracer.trace("error while reading request for trace purposes", e);
}
String responseLine;
try {
responseLine = buildTraceResponse(httpResponse);
} catch(IOException e) {
responseLine = "";
tracer.trace("error while reading response for trace purposes", e);
}
tracer.trace(requestLine + '\n' + responseLine);
}
}
/**
* Logs a request that failed
*/
static void logFailedRequest(Log logger, HttpUriRequest request, HttpHost host, IOException e) {
if (logger.isDebugEnabled()) {
logger.debug("request [" + request.getMethod() + " " + host + getUri(request.getRequestLine()) + "] failed", e);
}
if (tracer.isTraceEnabled()) {
String traceRequest;
try {
traceRequest = buildTraceRequest(request, host);
} catch (IOException e1) {
tracer.trace("error while reading request for trace purposes", e);
traceRequest = "";
}
tracer.trace(traceRequest);
}
}
/**
* Creates curl output for given request
*/
static String buildTraceRequest(HttpUriRequest request, HttpHost host) throws IOException {
String requestLine = "curl -iX " + request.getMethod() + " '" + host + getUri(request.getRequestLine()) + "'";
if (request instanceof HttpEntityEnclosingRequest) {
HttpEntityEnclosingRequest enclosingRequest = (HttpEntityEnclosingRequest) request;
if (enclosingRequest.getEntity() != null) {
requestLine += " -d '";
HttpEntity entity = enclosingRequest.getEntity();
if (entity.isRepeatable() == false) {
entity = new BufferedHttpEntity(enclosingRequest.getEntity());
enclosingRequest.setEntity(entity);
}
requestLine += EntityUtils.toString(entity, StandardCharsets.UTF_8) + "'";
}
}
return requestLine;
}
/**
* Creates curl output for given response
*/
static String buildTraceResponse(HttpResponse httpResponse) throws IOException {
String responseLine = "# " + httpResponse.getStatusLine().toString();
for (Header header : httpResponse.getAllHeaders()) {
responseLine += "\n# " + header.getName() + ": " + header.getValue();
}
responseLine += "\n#";
HttpEntity entity = httpResponse.getEntity();
if (entity != null) {
if (entity.isRepeatable() == false) {
entity = new BufferedHttpEntity(entity);
}
httpResponse.setEntity(entity);
ContentType contentType = ContentType.get(entity);
Charset charset = StandardCharsets.UTF_8;
if (contentType != null) {
charset = contentType.getCharset();
}
try (BufferedReader reader = new BufferedReader(new InputStreamReader(entity.getContent(), charset))) {
String line;
while( (line = reader.readLine()) != null) {
responseLine += "\n# " + line;
}
}
}
return responseLine;
}
private static String getUri(RequestLine requestLine) {
if (requestLine.getUri().charAt(0) != '/') {
return "/" + requestLine.getUri();
}
return requestLine.getUri();
}
}
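A same-package sketch (hypothetical, since buildTraceRequest is package-private) showing the curl-style line the tracer would emit for a simple indexing request:
package org.elasticsearch.client;
import org.apache.http.HttpHost;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import java.nio.charset.StandardCharsets;
public class TraceFormatSketch {
    public static void main(String[] args) throws Exception {
        HttpPost request = new HttpPost("/index/type/1");
        request.setEntity(new StringEntity("{\"field\":\"value\"}", StandardCharsets.UTF_8));
        //prints: curl -iX POST 'http://localhost:9200/index/type/1' -d '{"field":"value"}'
        System.out.println(RequestLogger.buildTraceRequest(request, new HttpHost("localhost", 9200)));
    }
}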
View File
@@ -0,0 +1,115 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.RequestLine;
import org.apache.http.StatusLine;
import org.apache.http.client.methods.CloseableHttpResponse;
import java.io.Closeable;
import java.io.IOException;
import java.util.Objects;
/**
* Holds an elasticsearch response. It wraps the {@link CloseableHttpResponse} response and associates it with
* its corresponding {@link RequestLine} and {@link HttpHost}.
* It must be closed to free any resource held by it, as well as the corresponding connection in the connection pool.
*/
public class Response implements Closeable {
private final RequestLine requestLine;
private final HttpHost host;
private final CloseableHttpResponse response;
Response(RequestLine requestLine, HttpHost host, CloseableHttpResponse response) {
Objects.requireNonNull(requestLine, "requestLine cannot be null");
Objects.requireNonNull(host, "node cannot be null");
Objects.requireNonNull(response, "response cannot be null");
this.requestLine = requestLine;
this.host = host;
this.response = response;
}
/**
* Returns the request line that generated this response
*/
public RequestLine getRequestLine() {
return requestLine;
}
/**
* Returns the node that returned this response
*/
public HttpHost getHost() {
return host;
}
/**
* Returns the status line of the current response
*/
public StatusLine getStatusLine() {
return response.getStatusLine();
}
/**
* Returns all the response headers
*/
public Header[] getHeaders() {
return response.getAllHeaders();
}
/**
* Returns the value of the first header with a specified name of this message.
* If there is more than one matching header in the message the first element is returned.
* If there is no matching header in the message <code>null</code> is returned.
*/
public String getHeader(String name) {
Header header = response.getFirstHeader(name);
if (header == null) {
return null;
}
return header.getValue();
}
/**
* Returns the response body if available, null otherwise
* @see HttpEntity
*/
public HttpEntity getEntity() {
return response.getEntity();
}
@Override
public String toString() {
return "Response{" +
"requestLine=" + requestLine +
", host=" + host +
", response=" + response.getStatusLine() +
'}';
}
@Override
public void close() throws IOException {
this.response.close();
}
}
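A minimal usage sketch, relying on the RestClient that appears later in this change; the host and endpoint are placeholders. The try-with-resources block guarantees the response is closed, which returns its connection to the pool:
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
public class ResponseSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build();
             Response response = client.performRequest("GET", "/")) {
            System.out.println(response.getHost() + " -> " + response.getStatusLine());
            for (Header header : response.getHeaders()) {
                System.out.println(header.getName() + ": " + header.getValue());
            }
            //consume the entity before close(); afterwards the underlying stream is no longer readable
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}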
View File
@@ -0,0 +1,66 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import java.io.IOException;
/**
* Exception thrown when an elasticsearch node responds to a request with a status code that indicates an error.
* Note that the response body is read eagerly and passed in as a string, which means that the Response object
* is expected to be already closed and available only for reading metadata such as the status line, request line, and response headers.
*/
public class ResponseException extends IOException {
private Response response;
private final String responseBody;
ResponseException(Response response, String responseBody) throws IOException {
super(buildMessage(response,responseBody));
this.response = response;
this.responseBody = responseBody;
}
private static String buildMessage(Response response, String responseBody) {
String message = response.getRequestLine().getMethod() + " " + response.getHost() + response.getRequestLine().getUri()
+ ": " + response.getStatusLine().toString();
if (responseBody != null) {
message += "\n" + responseBody;
}
return message;
}
/**
* Returns the {@link Response} that caused this exception to be thrown.
* Expected to be used only to read metadata like status line, request line, response headers. The response body should
* be retrieved using {@link #getResponseBody()}
*/
public Response getResponse() {
return response;
}
/**
* Returns the response body as a string or null if there wasn't any.
* The body is eagerly consumed when a ResponseException gets created, and its corresponding Response
* gets closed straightaway, so this method is the only way to get back the response body that was returned.
*/
public String getResponseBody() {
return responseBody;
}
}
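An error-handling sketch under the same placeholder-host assumption; the Response carried by the exception is already closed, so the body is only reachable through getResponseBody():
import org.apache.http.HttpHost;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;
import java.io.IOException;
public class ResponseExceptionSketch {
    public static void main(String[] args) throws IOException {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            client.performRequest("GET", "/no-such-index");
        } catch (ResponseException e) {
            System.err.println(e.getResponse().getStatusLine());
            //the body was consumed eagerly when the exception was built
            System.err.println(e.getResponseBody());
        }
    }
}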
View File
@@ -0,0 +1,508 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.http.Consts;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.apache.http.HttpRequest;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.client.methods.HttpOptions;
import org.apache.http.client.methods.HttpPatch;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.client.methods.HttpTrace;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.config.Registry;
import org.apache.http.conn.socket.ConnectionSocketFactory;
import org.apache.http.entity.ContentType;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;
import java.io.Closeable;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
/**
* Client that connects to an elasticsearch cluster through http.
* Must be created using {@link Builder}, which allows setting all the different options, or just relying on the defaults.
* The hosts that are part of the cluster need to be provided at creation time, but can also be replaced later
* by calling {@link #setHosts(HttpHost...)}.
* The method {@link #performRequest(String, String, Map, HttpEntity, Header...)} allows sending a request to the cluster. When
* sending a request, a host gets selected out of the provided ones in a round-robin fashion. Failing hosts are marked dead and
* retried after a certain amount of time (minimum 1 minute, maximum 30 minutes), depending on how many times they previously
* failed (the more failures, the later they will be retried). In case of failures, all of the alive nodes (or dead nodes that
* deserve a retry) are retried until one responds or none of them does, in which case an {@link IOException} will be thrown.
*
* Requests can be traced by enabling trace logging for "tracer". The trace logger outputs requests and responses in curl format.
*/
public final class RestClient implements Closeable {
private static final Log logger = LogFactory.getLog(RestClient.class);
public static ContentType JSON_CONTENT_TYPE = ContentType.create("application/json", Consts.UTF_8);
private final CloseableHttpClient client;
//we don't rely on default headers supported by HttpClient as those cannot be replaced, plus it would get hairy
//when we create the HttpClient instance on our own as there would be two different ways to set the default headers.
private final Header[] defaultHeaders;
private final long maxRetryTimeoutMillis;
private final AtomicInteger lastHostIndex = new AtomicInteger(0);
private volatile Set<HttpHost> hosts;
private final ConcurrentMap<HttpHost, DeadHostState> blacklist = new ConcurrentHashMap<>();
private final FailureListener failureListener;
private RestClient(CloseableHttpClient client, long maxRetryTimeoutMillis, Header[] defaultHeaders,
HttpHost[] hosts, FailureListener failureListener) {
this.client = client;
this.maxRetryTimeoutMillis = maxRetryTimeoutMillis;
this.defaultHeaders = defaultHeaders;
this.failureListener = failureListener;
setHosts(hosts);
}
/**
* Replaces the hosts that the client communicates with.
* @see HttpHost
*/
public synchronized void setHosts(HttpHost... hosts) {
if (hosts == null || hosts.length == 0) {
throw new IllegalArgumentException("hosts must not be null nor empty");
}
Set<HttpHost> httpHosts = new HashSet<>();
for (HttpHost host : hosts) {
Objects.requireNonNull(host, "host cannot be null");
httpHosts.add(host);
}
this.hosts = Collections.unmodifiableSet(httpHosts);
this.blacklist.clear();
}
/**
* Sends a request to the elasticsearch cluster that the current client points to.
* Shortcut to {@link #performRequest(String, String, Map, HttpEntity, Header...)} but without parameters and request body.
*
* @param method the http method
* @param endpoint the path of the request (without host and port)
* @param headers the optional request headers
* @return the response returned by elasticsearch
* @throws IOException in case of a problem or the connection was aborted
* @throws ClientProtocolException in case of an http protocol error
* @throws ResponseException in case elasticsearch responded with a status code that indicated an error
*/
public Response performRequest(String method, String endpoint, Header... headers) throws IOException {
return performRequest(method, endpoint, Collections.<String, String>emptyMap(), null, headers);
}
/**
* Sends a request to the elasticsearch cluster that the current client points to.
* Shortcut to {@link #performRequest(String, String, Map, HttpEntity, Header...)} but without request body.
*
* @param method the http method
* @param endpoint the path of the request (without host and port)
* @param params the query_string parameters
* @param headers the optional request headers
* @return the response returned by elasticsearch
* @throws IOException in case of a problem or the connection was aborted
* @throws ClientProtocolException in case of an http protocol error
* @throws ResponseException in case elasticsearch responded with a status code that indicated an error
*/
public Response performRequest(String method, String endpoint, Map<String, String> params, Header... headers) throws IOException {
return performRequest(method, endpoint, params, null, headers);
}
/**
* Sends a request to the elasticsearch cluster that the current client points to.
* Selects a host out of the provided ones in a round-robin fashion. Failing hosts are marked dead and retried after a certain
* amount of time (minimum 1 minute, maximum 30 minutes), depending on how many times they previously failed (the more failures,
* the later they will be retried). In case of failures, all of the alive nodes (or dead nodes that deserve a retry) are retried
* until one responds or none of them does, in which case an {@link IOException} will be thrown.
*
* @param method the http method
* @param endpoint the path of the request (without host and port)
* @param params the query_string parameters
* @param entity the body of the request, null if not applicable
* @param headers the optional request headers
* @return the response returned by elasticsearch
* @throws IOException in case of a problem or the connection was aborted
* @throws ClientProtocolException in case of an http protocol error
* @throws ResponseException in case elasticsearch responded with a status code that indicated an error
*/
public Response performRequest(String method, String endpoint, Map<String, String> params,
HttpEntity entity, Header... headers) throws IOException {
URI uri = buildUri(endpoint, params);
HttpRequestBase request = createHttpRequest(method, uri, entity);
setHeaders(request, headers);
//we apply a soft margin so that e.g. if a request took 59 seconds and timeout is set to 60 we don't do another attempt
long retryTimeoutMillis = Math.round(this.maxRetryTimeoutMillis / (float)100 * 98);
IOException lastSeenException = null;
long startTime = System.nanoTime();
for (HttpHost host : nextHost()) {
if (lastSeenException != null) {
//in case we are retrying, check whether maxRetryTimeout has been reached, in which case an exception will be thrown
long timeElapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime);
long timeout = retryTimeoutMillis - timeElapsedMillis;
if (timeout <= 0) {
IOException retryTimeoutException = new IOException(
"request retries exceeded max retry timeout [" + retryTimeoutMillis + "]");
retryTimeoutException.addSuppressed(lastSeenException);
throw retryTimeoutException;
}
//also reset the request to make it reusable for the next attempt
request.reset();
}
CloseableHttpResponse httpResponse;
try {
httpResponse = client.execute(host, request);
} catch(IOException e) {
RequestLogger.logFailedRequest(logger, request, host, e);
onFailure(host);
lastSeenException = addSuppressedException(lastSeenException, e);
continue;
}
Response response = new Response(request.getRequestLine(), host, httpResponse);
int statusCode = response.getStatusLine().getStatusCode();
if (statusCode < 300 || (request.getMethod().equals(HttpHead.METHOD_NAME) && statusCode == 404) ) {
RequestLogger.logResponse(logger, request, host, httpResponse);
onResponse(host);
return response;
}
RequestLogger.logResponse(logger, request, host, httpResponse);
String responseBody;
try {
if (response.getEntity() == null) {
responseBody = null;
} else {
responseBody = EntityUtils.toString(response.getEntity());
}
} finally {
response.close();
}
lastSeenException = addSuppressedException(lastSeenException, new ResponseException(response, responseBody));
switch(statusCode) {
case 502:
case 503:
case 504:
//mark host dead and retry against next one
onFailure(host);
break;
default:
//mark host alive and don't retry, as the error should be a request problem
onResponse(host);
throw lastSeenException;
}
}
//we get here only when we tried all nodes and they all failed
assert lastSeenException != null;
throw lastSeenException;
}
private void setHeaders(HttpRequest httpRequest, Header[] requestHeaders) {
Objects.requireNonNull(requestHeaders, "request headers must not be null");
for (Header defaultHeader : defaultHeaders) {
httpRequest.setHeader(defaultHeader);
}
for (Header requestHeader : requestHeaders) {
Objects.requireNonNull(requestHeader, "request header must not be null");
httpRequest.setHeader(requestHeader);
}
}
/**
* Returns an iterator of hosts to be used for a request call.
* Ideally, the first host is retrieved from the iterator and used successfully for the request.
* Otherwise, after each failure the next host should be retrieved from the iterator so that the request can be retried until
* the iterator is exhausted. The maximum total number of attempts is equal to the number of hosts available in the iterator.
* The returned iterator will never be empty; rather, an {@link IllegalStateException} is thrown in case there are no hosts.
* In case there are no healthy hosts available, nor dead ones due to be retried, the dead host that is closest to being retried gets returned.
*/
private Iterable<HttpHost> nextHost() {
Set<HttpHost> filteredHosts = new HashSet<>(hosts);
for (Map.Entry<HttpHost, DeadHostState> entry : blacklist.entrySet()) {
if (System.nanoTime() - entry.getValue().getDeadUntilNanos() < 0) {
filteredHosts.remove(entry.getKey());
}
}
if (filteredHosts.isEmpty()) {
//last resort: if there are no good hosts to use, return a single dead one, the one that's closest to being retried
List<Map.Entry<HttpHost, DeadHostState>> sortedHosts = new ArrayList<>(blacklist.entrySet());
Collections.sort(sortedHosts, new Comparator<Map.Entry<HttpHost, DeadHostState>>() {
@Override
public int compare(Map.Entry<HttpHost, DeadHostState> o1, Map.Entry<HttpHost, DeadHostState> o2) {
return Long.compare(o1.getValue().getDeadUntilNanos(), o2.getValue().getDeadUntilNanos());
}
});
HttpHost deadHost = sortedHosts.get(0).getKey();
logger.trace("resurrecting host [" + deadHost + "]");
return Collections.singleton(deadHost);
}
List<HttpHost> rotatedHosts = new ArrayList<>(filteredHosts);
Collections.rotate(rotatedHosts, rotatedHosts.size() - lastHostIndex.getAndIncrement());
return rotatedHosts;
}
/**
* Called after each successful request call.
* Receives as an argument the host that was used for the successful request.
*/
private void onResponse(HttpHost host) {
DeadHostState removedHost = this.blacklist.remove(host);
if (logger.isDebugEnabled() && removedHost != null) {
logger.debug("removed host [" + host + "] from blacklist");
}
}
/**
* Called after each failed attempt.
* Receives as an argument the host that was used for the failed attempt.
*/
private void onFailure(HttpHost host) throws IOException {
while(true) {
DeadHostState previousDeadHostState = blacklist.putIfAbsent(host, DeadHostState.INITIAL_DEAD_STATE);
if (previousDeadHostState == null) {
logger.debug("added host [" + host + "] to blacklist");
break;
}
if (blacklist.replace(host, previousDeadHostState, new DeadHostState(previousDeadHostState))) {
logger.debug("updated host [" + host + "] already in blacklist");
break;
}
}
failureListener.onFailure(host);
}
@Override
public void close() throws IOException {
client.close();
}
private static IOException addSuppressedException(IOException suppressedException, IOException currentException) {
if (suppressedException != null) {
currentException.addSuppressed(suppressedException);
}
return currentException;
}
private static HttpRequestBase createHttpRequest(String method, URI uri, HttpEntity entity) {
switch(method.toUpperCase(Locale.ROOT)) {
case HttpDeleteWithEntity.METHOD_NAME:
return addRequestBody(new HttpDeleteWithEntity(uri), entity);
case HttpGetWithEntity.METHOD_NAME:
return addRequestBody(new HttpGetWithEntity(uri), entity);
case HttpHead.METHOD_NAME:
return addRequestBody(new HttpHead(uri), entity);
case HttpOptions.METHOD_NAME:
return addRequestBody(new HttpOptions(uri), entity);
case HttpPatch.METHOD_NAME:
return addRequestBody(new HttpPatch(uri), entity);
case HttpPost.METHOD_NAME:
HttpPost httpPost = new HttpPost(uri);
addRequestBody(httpPost, entity);
return httpPost;
case HttpPut.METHOD_NAME:
return addRequestBody(new HttpPut(uri), entity);
case HttpTrace.METHOD_NAME:
return addRequestBody(new HttpTrace(uri), entity);
default:
throw new UnsupportedOperationException("http method not supported: " + method);
}
}
private static HttpRequestBase addRequestBody(HttpRequestBase httpRequest, HttpEntity entity) {
if (entity != null) {
if (httpRequest instanceof HttpEntityEnclosingRequestBase) {
((HttpEntityEnclosingRequestBase)httpRequest).setEntity(entity);
} else {
throw new UnsupportedOperationException(httpRequest.getMethod() + " with body is not supported");
}
}
return httpRequest;
}
private static URI buildUri(String path, Map<String, String> params) {
Objects.requireNonNull(params, "params must not be null");
try {
URIBuilder uriBuilder = new URIBuilder(path);
for (Map.Entry<String, String> param : params.entrySet()) {
uriBuilder.addParameter(param.getKey(), param.getValue());
}
return uriBuilder.build();
} catch(URISyntaxException e) {
throw new IllegalArgumentException(e.getMessage(), e);
}
}
/**
* Returns a new {@link Builder} to help with {@link RestClient} creation.
*/
public static Builder builder(HttpHost... hosts) {
return new Builder(hosts);
}
/**
* Rest client builder. Helps creating a new {@link RestClient}.
*/
public static final class Builder {
public static final int DEFAULT_CONNECT_TIMEOUT_MILLIS = 1000;
public static final int DEFAULT_SOCKET_TIMEOUT_MILLIS = 10000;
public static final int DEFAULT_MAX_RETRY_TIMEOUT_MILLIS = DEFAULT_SOCKET_TIMEOUT_MILLIS;
public static final int DEFAULT_CONNECTION_REQUEST_TIMEOUT_MILLIS = 500;
private static final Header[] EMPTY_HEADERS = new Header[0];
private final HttpHost[] hosts;
private CloseableHttpClient httpClient;
private int maxRetryTimeout = DEFAULT_MAX_RETRY_TIMEOUT_MILLIS;
private Header[] defaultHeaders = EMPTY_HEADERS;
private FailureListener failureListener;
/**
* Creates a new builder instance and sets the hosts that the client will send requests to.
*/
private Builder(HttpHost... hosts) {
if (hosts == null || hosts.length == 0) {
throw new IllegalArgumentException("no hosts provided");
}
this.hosts = hosts;
}
/**
* Sets the http client. A new default one will be created if not
* specified, by calling {@link #createDefaultHttpClient(Registry)}.
*
* @see CloseableHttpClient
*/
public Builder setHttpClient(CloseableHttpClient httpClient) {
this.httpClient = httpClient;
return this;
}
/**
* Sets the maximum timeout (in milliseconds) to honour in case of multiple retries of the same request.
* {@link #DEFAULT_MAX_RETRY_TIMEOUT_MILLIS} if not specified.
*
* @throws IllegalArgumentException if maxRetryTimeoutMillis is not greater than 0
*/
public Builder setMaxRetryTimeoutMillis(int maxRetryTimeoutMillis) {
if (maxRetryTimeoutMillis <= 0) {
throw new IllegalArgumentException("maxRetryTimeoutMillis must be greater than 0");
}
this.maxRetryTimeout = maxRetryTimeoutMillis;
return this;
}
/**
* Sets the default request headers, to be used when creating the default http client instance.
* In case the http client is set through {@link #setHttpClient(CloseableHttpClient)}, the default headers need to be
* set to it externally during http client construction.
*/
public Builder setDefaultHeaders(Header[] defaultHeaders) {
Objects.requireNonNull(defaultHeaders, "default headers must not be null");
for (Header defaultHeader : defaultHeaders) {
Objects.requireNonNull(defaultHeader, "default header must not be null");
}
this.defaultHeaders = defaultHeaders;
return this;
}
/**
* Sets the {@link FailureListener} to be notified for each request failure
*/
public Builder setFailureListener(FailureListener failureListener) {
Objects.requireNonNull(failureListener, "failure listener must not be null");
this.failureListener = failureListener;
return this;
}
/**
* Creates a new {@link RestClient} based on the provided configuration.
*/
public RestClient build() {
if (httpClient == null) {
httpClient = createDefaultHttpClient(null);
}
if (failureListener == null) {
failureListener = new FailureListener();
}
return new RestClient(httpClient, maxRetryTimeout, defaultHeaders, hosts, failureListener);
}
/**
* Creates a {@link CloseableHttpClient} with default settings. Used when the http client instance is not provided.
*
* @see CloseableHttpClient
*/
public static CloseableHttpClient createDefaultHttpClient(Registry<ConnectionSocketFactory> socketFactoryRegistry) {
PoolingHttpClientConnectionManager connectionManager;
if (socketFactoryRegistry == null) {
connectionManager = new PoolingHttpClientConnectionManager();
} else {
connectionManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
}
//default settings may be too constraining
connectionManager.setDefaultMaxPerRoute(10);
connectionManager.setMaxTotal(30);
//default timeouts are all infinite
RequestConfig requestConfig = RequestConfig.custom().setConnectTimeout(DEFAULT_CONNECT_TIMEOUT_MILLIS)
.setSocketTimeout(DEFAULT_SOCKET_TIMEOUT_MILLIS)
.setConnectionRequestTimeout(DEFAULT_CONNECTION_REQUEST_TIMEOUT_MILLIS).build();
return HttpClientBuilder.create().setConnectionManager(connectionManager).setDefaultRequestConfig(requestConfig).build();
}
}
/**
* Listener that gets notified whenever a failure happens. Useful when sniffing is enabled, so that we can sniff on failure.
* The default implementation is a no-op.
*/
public static class FailureListener {
/**
* Notifies that the host provided as argument has just failed
*/
public void onFailure(HttpHost host) throws IOException {
}
}
}
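Putting the pieces together, an end-to-end sketch: the host, timeout, header, and query are illustrative assumptions, and the FailureListener override marks where sniff-on-failure logic would hook in:
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.entity.StringEntity;
import org.apache.http.message.BasicHeader;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http"))
                .setMaxRetryTimeoutMillis(30000)
                .setDefaultHeaders(new Header[]{new BasicHeader("Accept", "application/json")})
                .setFailureListener(new RestClient.FailureListener() {
                    @Override
                    public void onFailure(HttpHost host) {
                        //a sniffing implementation would refresh the host list here
                        System.err.println("host blacklisted: " + host);
                    }
                })
                .build();
        try {
            StringEntity body = new StringEntity("{\"query\":{\"match_all\":{}}}", StandardCharsets.UTF_8);
            try (Response response = client.performRequest("GET", "/_search",
                    Collections.singletonMap("pretty", "true"), body)) {
                System.out.println(response.getStatusLine());
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        } finally {
            client.close();
        }
    }
}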
View File
@ -16,26 +16,27 @@
* specific language governing permissions and limitations
* under the License.
*/
-package org.elasticsearch.script.mustache;
-import com.fasterxml.jackson.core.io.JsonStringEncoder;
-import com.github.mustachejava.DefaultMustacheFactory;
-import com.github.mustachejava.MustacheException;
+package org.elasticsearch.client;
+import org.apache.http.StatusLine;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.message.BasicHttpResponse;
 import java.io.IOException;
-import java.io.Writer;
 /**
- * A MustacheFactory that does simple JSON escaping.
+ * Simple {@link CloseableHttpResponse} impl needed to easily create http responses that are closeable given that
+ * org.apache.http.impl.execchain.HttpResponseProxy is not public.
 */
-final class JsonEscapingMustacheFactory extends DefaultMustacheFactory {
+class CloseableBasicHttpResponse extends BasicHttpResponse implements CloseableHttpResponse {
+    public CloseableBasicHttpResponse(StatusLine statusline) {
+        super(statusline);
+    }
     @Override
-    public void encode(String value, Writer writer) {
-        try {
-            writer.write(JsonStringEncoder.getInstance().quoteAsString(value));
-        } catch (IOException e) {
-            throw new MustacheException("Failed to encode value: " + value);
-        }
-    }
+    public void close() throws IOException {
+        //nothing to close
+    }
 }
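For context, a sketch of how test code can use this class to hand back a closeable response, mirroring the status line construction used by the mock answers in the tests below; the status code and reason phrase are illustrative:

import org.apache.http.ProtocolVersion;
import org.apache.http.StatusLine;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.message.BasicStatusLine;

//CloseableBasicHttpResponse is package-private, so this sketch assumes the org.elasticsearch.client package
class CloseableBasicHttpResponseSketch {
    static CloseableHttpResponse okResponse() {
        StatusLine statusLine = new BasicStatusLine(new ProtocolVersion("http", 1, 1), 200, "OK");
        //close() is a no-op, but callers can still use try-with-resources
        return new CloseableBasicHttpResponse(statusLine);
    }
}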

View File

@ -0,0 +1,152 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import org.apache.http.HttpEntity;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpHost;
import org.apache.http.ProtocolVersion;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.client.methods.HttpOptions;
import org.apache.http.client.methods.HttpPatch;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.client.methods.HttpTrace;
import org.apache.http.entity.InputStreamEntity;
import org.apache.http.entity.StringEntity;
import org.apache.http.message.BasicHttpResponse;
import org.apache.http.message.BasicStatusLine;
import org.apache.http.util.EntityUtils;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.nio.charset.StandardCharsets;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertThat;
public class RequestLoggerTests extends RestClientTestCase {
public void testTraceRequest() throws IOException, URISyntaxException {
HttpHost host = new HttpHost("localhost", 9200, getRandom().nextBoolean() ? "http" : "https");
String expectedEndpoint = "/index/type/_api";
URI uri;
if (randomBoolean()) {
uri = new URI(expectedEndpoint);
} else {
uri = new URI("index/type/_api");
}
HttpRequestBase request;
int requestType = RandomInts.randomIntBetween(getRandom(), 0, 7);
switch(requestType) {
case 0:
request = new HttpGetWithEntity(uri);
break;
case 1:
request = new HttpPost(uri);
break;
case 2:
request = new HttpPut(uri);
break;
case 3:
request = new HttpDeleteWithEntity(uri);
break;
case 4:
request = new HttpHead(uri);
break;
case 5:
request = new HttpTrace(uri);
break;
case 6:
request = new HttpOptions(uri);
break;
case 7:
request = new HttpPatch(uri);
break;
default:
throw new UnsupportedOperationException();
}
String expected = "curl -iX " + request.getMethod() + " '" + host + expectedEndpoint + "'";
boolean hasBody = request instanceof HttpEntityEnclosingRequest && getRandom().nextBoolean();
String requestBody = "{ \"field\": \"value\" }";
if (hasBody) {
expected += " -d '" + requestBody + "'";
HttpEntityEnclosingRequest enclosingRequest = (HttpEntityEnclosingRequest) request;
HttpEntity entity;
if (getRandom().nextBoolean()) {
entity = new StringEntity(requestBody, StandardCharsets.UTF_8);
} else {
entity = new InputStreamEntity(new ByteArrayInputStream(requestBody.getBytes(StandardCharsets.UTF_8)));
}
enclosingRequest.setEntity(entity);
}
String traceRequest = RequestLogger.buildTraceRequest(request, host);
assertThat(traceRequest, equalTo(expected));
if (hasBody) {
//check that the body is still readable as most entities are not repeatable
String body = EntityUtils.toString(((HttpEntityEnclosingRequest) request).getEntity(), StandardCharsets.UTF_8);
assertThat(body, equalTo(requestBody));
}
}
public void testTraceResponse() throws IOException {
ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1);
int statusCode = RandomInts.randomIntBetween(getRandom(), 200, 599);
String reasonPhrase = "REASON";
BasicStatusLine statusLine = new BasicStatusLine(protocolVersion, statusCode, reasonPhrase);
String expected = "# " + statusLine.toString();
BasicHttpResponse httpResponse = new BasicHttpResponse(statusLine);
int numHeaders = RandomInts.randomIntBetween(getRandom(), 0, 3);
for (int i = 0; i < numHeaders; i++) {
httpResponse.setHeader("header" + i, "value");
expected += "\n# header" + i + ": value";
}
expected += "\n#";
boolean hasBody = getRandom().nextBoolean();
String responseBody = "{\n \"field\": \"value\"\n}";
if (hasBody) {
expected += "\n# {";
expected += "\n# \"field\": \"value\"";
expected += "\n# }";
HttpEntity entity;
if (getRandom().nextBoolean()) {
entity = new StringEntity(responseBody, StandardCharsets.UTF_8);
} else {
entity = new InputStreamEntity(new ByteArrayInputStream(responseBody.getBytes(StandardCharsets.UTF_8)));
}
httpResponse.setEntity(entity);
}
String traceResponse = RequestLogger.buildTraceResponse(httpResponse);
assertThat(traceResponse, equalTo(expected));
if (hasBody) {
//check that the body is still readable as most entities are not repeatable
String body = EntityUtils.toString(httpResponse.getEntity(), StandardCharsets.UTF_8);
assertThat(body, equalTo(responseBody));
}
}
}
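To make the asserted trace format concrete, a hypothetical call to the RequestLogger helper exercised above; RequestLogger itself is not shown in this commit view, so this sketch assumes it is package-private in org.elasticsearch.client with the signature the test uses:

import org.apache.http.HttpHost;
import org.apache.http.client.methods.HttpPost;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

class RequestLoggerSketch {
    static void printTrace() throws IOException, URISyntaxException {
        HttpHost host = new HttpHost("localhost", 9200);
        HttpPost request = new HttpPost(new URI("/index/type/_api"));
        //expected output: curl -iX POST 'http://localhost:9200/index/type/_api'
        System.out.println(RequestLogger.buildTraceRequest(request, host));
    }
}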

View File

@ -0,0 +1,111 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.message.BasicHeader;
import java.io.IOException;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.fail;
public class RestClientBuilderTests extends RestClientTestCase {
public void testBuild() throws IOException {
try {
RestClient.builder((HttpHost[])null);
fail("should have failed");
} catch(IllegalArgumentException e) {
assertEquals("no hosts provided", e.getMessage());
}
try {
RestClient.builder();
fail("should have failed");
} catch(IllegalArgumentException e) {
assertEquals("no hosts provided", e.getMessage());
}
try {
RestClient.builder(new HttpHost[]{new HttpHost("localhost", 9200), null}).build();
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("host cannot be null", e.getMessage());
}
try {
RestClient.builder(new HttpHost("localhost", 9200))
.setMaxRetryTimeoutMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
fail("should have failed");
} catch(IllegalArgumentException e) {
assertEquals("maxRetryTimeoutMillis must be greater than 0", e.getMessage());
}
try {
RestClient.builder(new HttpHost("localhost", 9200)).setDefaultHeaders(null);
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("default headers must not be null", e.getMessage());
}
try {
RestClient.builder(new HttpHost("localhost", 9200)).setDefaultHeaders(new Header[]{null});
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("default header must not be null", e.getMessage());
}
try {
RestClient.builder(new HttpHost("localhost", 9200)).setFailureListener(null);
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("failure listener must not be null", e.getMessage());
}
int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5);
HttpHost[] hosts = new HttpHost[numNodes];
for (int i = 0; i < numNodes; i++) {
hosts[i] = new HttpHost("localhost", 9200 + i);
}
RestClient.Builder builder = RestClient.builder(hosts);
if (getRandom().nextBoolean()) {
builder.setHttpClient(HttpClientBuilder.create().build());
}
if (getRandom().nextBoolean()) {
int numHeaders = RandomInts.randomIntBetween(getRandom(), 1, 5);
Header[] headers = new Header[numHeaders];
for (int i = 0; i < numHeaders; i++) {
headers[i] = new BasicHeader("header" + i, "value");
}
builder.setDefaultHeaders(headers);
}
if (getRandom().nextBoolean()) {
builder.setMaxRetryTimeoutMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
}
try (RestClient restClient = builder.build()) {
assertNotNull(restClient);
}
}
}

View File

@ -0,0 +1,221 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import com.carrotsearch.randomizedtesting.generators.RandomStrings;
import com.sun.net.httpserver.Headers;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import org.apache.http.Consts;
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.entity.StringEntity;
import org.apache.http.message.BasicHeader;
import org.apache.http.util.EntityUtils;
import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes;
import static org.elasticsearch.client.RestClientTestUtil.getHttpMethods;
import static org.elasticsearch.client.RestClientTestUtil.randomStatusCode;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;
/**
* Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}.
* Works against a real http server, one single host.
*/
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
@IgnoreJRERequirement
public class RestClientIntegTests extends RestClientTestCase {
private static HttpServer httpServer;
private static RestClient restClient;
private static Header[] defaultHeaders;
@BeforeClass
public static void startHttpServer() throws Exception {
httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0);
httpServer.start();
//returns a different status code depending on the path
for (int statusCode : getAllStatusCodes()) {
createStatusCodeContext(httpServer, statusCode);
}
int numHeaders = RandomInts.randomIntBetween(getRandom(), 0, 3);
defaultHeaders = new Header[numHeaders];
for (int i = 0; i < numHeaders; i++) {
String headerName = "Header-default" + (getRandom().nextBoolean() ? i : "");
String headerValue = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 3, 10);
defaultHeaders[i] = new BasicHeader(headerName, headerValue);
}
restClient = RestClient.builder(new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort()))
.setDefaultHeaders(defaultHeaders).build();
}
private static void createStatusCodeContext(HttpServer httpServer, final int statusCode) {
httpServer.createContext("/" + statusCode, new ResponseHandler(statusCode));
}
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
@IgnoreJRERequirement
private static class ResponseHandler implements HttpHandler {
private final int statusCode;
ResponseHandler(int statusCode) {
this.statusCode = statusCode;
}
@Override
public void handle(HttpExchange httpExchange) throws IOException {
StringBuilder body = new StringBuilder();
try (InputStreamReader reader = new InputStreamReader(httpExchange.getRequestBody(), Consts.UTF_8)) {
char[] buffer = new char[256];
int read;
while ((read = reader.read(buffer)) != -1) {
body.append(buffer, 0, read);
}
}
Headers requestHeaders = httpExchange.getRequestHeaders();
Headers responseHeaders = httpExchange.getResponseHeaders();
for (Map.Entry<String, List<String>> header : requestHeaders.entrySet()) {
responseHeaders.put(header.getKey(), header.getValue());
}
httpExchange.getRequestBody().close();
httpExchange.sendResponseHeaders(statusCode, body.length() == 0 ? -1 : body.length());
if (body.length() > 0) {
try (OutputStream out = httpExchange.getResponseBody()) {
out.write(body.toString().getBytes(Consts.UTF_8));
}
}
httpExchange.close();
}
}
@AfterClass
public static void stopHttpServers() throws IOException {
restClient.close();
restClient = null;
httpServer.stop(0);
httpServer = null;
}
/**
* End to end test for headers. We test it explicitly against a real http client as there are different ways
* to set/add headers to the {@link org.apache.http.client.HttpClient}.
* Exercises the test http server's ability to send back whatever headers it received.
*/
public void testHeaders() throws Exception {
for (String method : getHttpMethods()) {
Set<String> standardHeaders = new HashSet<>(
Arrays.asList("Accept-encoding", "Connection", "Host", "User-agent", "Date"));
if (method.equals("HEAD") == false) {
standardHeaders.add("Content-length");
}
int numHeaders = RandomInts.randomIntBetween(getRandom(), 1, 5);
Map<String, String> expectedHeaders = new HashMap<>();
for (Header defaultHeader : defaultHeaders) {
expectedHeaders.put(defaultHeader.getName(), defaultHeader.getValue());
}
Header[] headers = new Header[numHeaders];
for (int i = 0; i < numHeaders; i++) {
String headerName = "Header" + (getRandom().nextBoolean() ? i : "");
String headerValue = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 3, 10);
headers[i] = new BasicHeader(headerName, headerValue);
expectedHeaders.put(headerName, headerValue);
}
int statusCode = randomStatusCode(getRandom());
Response esResponse;
try (Response response = restClient.performRequest(method, "/" + statusCode,
Collections.<String, String>emptyMap(), null, headers)) {
esResponse = response;
} catch(ResponseException e) {
esResponse = e.getResponse();
}
assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode));
for (Header responseHeader : esResponse.getHeaders()) {
if (responseHeader.getName().startsWith("Header")) {
String headerValue = expectedHeaders.remove(responseHeader.getName());
assertNotNull("found response header [" + responseHeader.getName() + "] that wasn't originally sent", headerValue);
} else {
assertTrue("unknown header was returned " + responseHeader.getName(),
standardHeaders.remove(responseHeader.getName()));
}
}
assertEquals("some headers that were sent weren't returned: " + expectedHeaders, 0, expectedHeaders.size());
assertEquals("some expected standard headers weren't returned: " + standardHeaders, 0, standardHeaders.size());
}
}
/**
* End to end test for delete with body. We test it explicitly as it is not supported
* out of the box by {@link org.apache.http.client.HttpClient}.
* Exercises the test http server's ability to send back whatever body it received.
*/
public void testDeleteWithBody() throws Exception {
bodyTest("DELETE");
}
/**
* End to end test for get with body. We test it explicitly as it is not supported
* out of the box by {@link org.apache.http.client.HttpClient}.
* Exercises the test http server's ability to send back whatever body it received.
*/
public void testGetWithBody() throws Exception {
bodyTest("GET");
}
private void bodyTest(String method) throws Exception {
String requestBody = "{ \"field\": \"value\" }";
StringEntity entity = new StringEntity(requestBody);
Response esResponse;
String responseBody;
int statusCode = randomStatusCode(getRandom());
try (Response response = restClient.performRequest(method, "/" + statusCode,
Collections.<String, String>emptyMap(), entity)) {
responseBody = EntityUtils.toString(response.getEntity());
esResponse = response;
} catch(ResponseException e) {
responseBody = e.getResponseBody();
esResponse = e.getResponse();
}
assertEquals(statusCode, esResponse.getStatusLine().getStatusCode());
assertEquals(requestBody, responseBody);
}
}
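Distilled from the tests above, a sketch of the request pattern they rely on: try-with-resources around the Response, catching ResponseException so error responses remain inspectable. The endpoint and body are placeholders:

import org.apache.http.HttpEntity;
import org.apache.http.entity.StringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;
import java.util.Collections;

class PerformRequestSketch {
    static void sendWithBody(RestClient restClient) throws Exception {
        HttpEntity entity = new StringEntity("{ \"field\": \"value\" }");
        try (Response response = restClient.performRequest("POST", "/some/endpoint",
                Collections.<String, String>emptyMap(), entity)) {
            System.out.println(EntityUtils.toString(response.getEntity()));
        } catch (ResponseException e) {
            //non-2xx status codes: the response is still available on the exception
            System.err.println(e.getResponse().getStatusLine());
        }
    }
}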

View File

@ -0,0 +1,274 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import org.apache.http.HttpHost;
import org.apache.http.HttpRequest;
import org.apache.http.ProtocolVersion;
import org.apache.http.StatusLine;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.conn.ConnectTimeoutException;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.message.BasicStatusLine;
import org.junit.Before;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import static org.elasticsearch.client.RestClientTestUtil.randomErrorNoRetryStatusCode;
import static org.elasticsearch.client.RestClientTestUtil.randomErrorRetryStatusCode;
import static org.elasticsearch.client.RestClientTestUtil.randomHttpMethod;
import static org.elasticsearch.client.RestClientTestUtil.randomOkStatusCode;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.instanceOf;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
/**
* Tests for {@link RestClient} behaviour against multiple hosts: fail-over, blacklisting, etc.
* Relies on a mock http client to intercept requests and return desired responses based on request path.
*/
public class RestClientMultipleHostsTests extends RestClientTestCase {
private RestClient restClient;
private HttpHost[] httpHosts;
private TrackingFailureListener failureListener;
@Before
public void createRestClient() throws IOException {
CloseableHttpClient httpClient = mock(CloseableHttpClient.class);
when(httpClient.execute(any(HttpHost.class), any(HttpRequest.class))).thenAnswer(new Answer<CloseableHttpResponse>() {
@Override
public CloseableHttpResponse answer(InvocationOnMock invocationOnMock) throws Throwable {
HttpHost httpHost = (HttpHost) invocationOnMock.getArguments()[0];
HttpUriRequest request = (HttpUriRequest) invocationOnMock.getArguments()[1];
//return the desired status code or exception depending on the path
if (request.getURI().getPath().equals("/soe")) {
throw new SocketTimeoutException(httpHost.toString());
} else if (request.getURI().getPath().equals("/coe")) {
throw new ConnectTimeoutException(httpHost.toString());
} else if (request.getURI().getPath().equals("/ioe")) {
throw new IOException(httpHost.toString());
}
int statusCode = Integer.parseInt(request.getURI().getPath().substring(1));
StatusLine statusLine = new BasicStatusLine(new ProtocolVersion("http", 1, 1), statusCode, "");
return new CloseableBasicHttpResponse(statusLine);
}
});
int numHosts = RandomInts.randomIntBetween(getRandom(), 2, 5);
httpHosts = new HttpHost[numHosts];
for (int i = 0; i < numHosts; i++) {
httpHosts[i] = new HttpHost("localhost", 9200 + i);
}
failureListener = new TrackingFailureListener();
restClient = RestClient.builder(httpHosts).setHttpClient(httpClient).setFailureListener(failureListener).build();
}
public void testRoundRobinOkStatusCodes() throws Exception {
int numIters = RandomInts.randomIntBetween(getRandom(), 1, 5);
for (int i = 0; i < numIters; i++) {
Set<HttpHost> hostsSet = new HashSet<>();
Collections.addAll(hostsSet, httpHosts);
for (int j = 0; j < httpHosts.length; j++) {
int statusCode = randomOkStatusCode(getRandom());
try (Response response = restClient.performRequest(randomHttpMethod(getRandom()), "/" + statusCode)) {
assertThat(response.getStatusLine().getStatusCode(), equalTo(statusCode));
assertTrue("host not found: " + response.getHost(), hostsSet.remove(response.getHost()));
}
}
assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size());
}
failureListener.assertNotCalled();
}
public void testRoundRobinNoRetryErrors() throws Exception {
int numIters = RandomInts.randomIntBetween(getRandom(), 1, 5);
for (int i = 0; i < numIters; i++) {
Set<HttpHost> hostsSet = new HashSet<>();
Collections.addAll(hostsSet, httpHosts);
for (int j = 0; j < httpHosts.length; j++) {
String method = randomHttpMethod(getRandom());
int statusCode = randomErrorNoRetryStatusCode(getRandom());
try (Response response = restClient.performRequest(method, "/" + statusCode)) {
if (method.equals("HEAD") && statusCode == 404) {
//no exception gets thrown although we got a 404
assertThat(response.getStatusLine().getStatusCode(), equalTo(404));
assertThat(response.getStatusLine().getStatusCode(), equalTo(statusCode));
assertTrue("host not found: " + response.getHost(), hostsSet.remove(response.getHost()));
} else {
fail("request should have failed");
}
} catch(ResponseException e) {
if (method.equals("HEAD") && statusCode == 404) {
throw e;
}
Response response = e.getResponse();
assertThat(response.getStatusLine().getStatusCode(), equalTo(statusCode));
assertTrue("host not found: " + response.getHost(), hostsSet.remove(response.getHost()));
assertEquals(0, e.getSuppressed().length);
}
}
assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size());
}
failureListener.assertNotCalled();
}
public void testRoundRobinRetryErrors() throws Exception {
String retryEndpoint = randomErrorRetryEndpoint();
try {
restClient.performRequest(randomHttpMethod(getRandom()), retryEndpoint);
fail("request should have failed");
} catch(ResponseException e) {
Set<HttpHost> hostsSet = new HashSet<>();
Collections.addAll(hostsSet, httpHosts);
//first request causes all the hosts to be blacklisted; the returned exception holds one suppressed exception for each subsequently failed host
failureListener.assertCalled(httpHosts);
do {
Response response = e.getResponse();
assertThat(response.getStatusLine().getStatusCode(), equalTo(Integer.parseInt(retryEndpoint.substring(1))));
assertTrue("host [" + response.getHost() + "] not found, most likely used multiple times",
hostsSet.remove(response.getHost()));
if (e.getSuppressed().length > 0) {
assertEquals(1, e.getSuppressed().length);
Throwable suppressed = e.getSuppressed()[0];
assertThat(suppressed, instanceOf(ResponseException.class));
e = (ResponseException)suppressed;
} else {
e = null;
}
} while(e != null);
assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size());
} catch(IOException e) {
Set<HttpHost> hostsSet = new HashSet<>();
Collections.addAll(hostsSet, httpHosts);
//first request causes all the hosts to be blacklisted; the returned exception holds one suppressed exception for each subsequently failed host
failureListener.assertCalled(httpHosts);
do {
HttpHost httpHost = HttpHost.create(e.getMessage());
assertTrue("host [" + httpHost + "] not found, most likely used multiple times", hostsSet.remove(httpHost));
if (e.getSuppressed().length > 0) {
assertEquals(1, e.getSuppressed().length);
Throwable suppressed = e.getSuppressed()[0];
assertThat(suppressed, instanceOf(IOException.class));
e = (IOException) suppressed;
} else {
e = null;
}
} while(e != null);
assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size());
}
int numIters = RandomInts.randomIntBetween(getRandom(), 2, 5);
for (int i = 1; i <= numIters; i++) {
//check that one different host is resurrected at each new attempt
Set<HttpHost> hostsSet = new HashSet<>();
Collections.addAll(hostsSet, httpHosts);
for (int j = 0; j < httpHosts.length; j++) {
retryEndpoint = randomErrorRetryEndpoint();
try {
restClient.performRequest(randomHttpMethod(getRandom()), retryEndpoint);
fail("request should have failed");
} catch(ResponseException e) {
Response response = e.getResponse();
assertThat(response.getStatusLine().getStatusCode(), equalTo(Integer.parseInt(retryEndpoint.substring(1))));
assertTrue("host [" + response.getHost() + "] not found, most likely used multiple times",
hostsSet.remove(response.getHost()));
//after the first request, all hosts are blacklisted, a single one gets resurrected each time
failureListener.assertCalled(response.getHost());
assertEquals(0, e.getSuppressed().length);
} catch(IOException e) {
HttpHost httpHost = HttpHost.create(e.getMessage());
assertTrue("host [" + httpHost + "] not found, most likely used multiple times", hostsSet.remove(httpHost));
//after the first request, all hosts are blacklisted, a single one gets resurrected each time
failureListener.assertCalled(httpHost);
assertEquals(0, e.getSuppressed().length);
}
}
assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size());
if (getRandom().nextBoolean()) {
//mark one host back alive through a successful request and check that all requests after that are sent to it
HttpHost selectedHost = null;
int iters = RandomInts.randomIntBetween(getRandom(), 2, 10);
for (int y = 0; y < iters; y++) {
int statusCode = randomErrorNoRetryStatusCode(getRandom());
Response response;
try (Response esResponse = restClient.performRequest(randomHttpMethod(getRandom()), "/" + statusCode)) {
response = esResponse;
}
catch(ResponseException e) {
response = e.getResponse();
}
assertThat(response.getStatusLine().getStatusCode(), equalTo(statusCode));
if (selectedHost == null) {
selectedHost = response.getHost();
} else {
assertThat(response.getHost(), equalTo(selectedHost));
}
}
failureListener.assertNotCalled();
//let the selected host catch up on number of failures; it gets selected a consecutive number of times as it's the one
//with the lowest failure count and hence retried earlier, till all the hosts have the same number of failures
for (int y = 0; y < i + 1; y++) {
retryEndpoint = randomErrorRetryEndpoint();
try {
restClient.performRequest(randomHttpMethod(getRandom()), retryEndpoint);
fail("request should have failed");
} catch(ResponseException e) {
Response response = e.getResponse();
assertThat(response.getStatusLine().getStatusCode(), equalTo(Integer.parseInt(retryEndpoint.substring(1))));
assertThat(response.getHost(), equalTo(selectedHost));
failureListener.assertCalled(selectedHost);
} catch(IOException e) {
HttpHost httpHost = HttpHost.create(e.getMessage());
assertThat(httpHost, equalTo(selectedHost));
failureListener.assertCalled(selectedHost);
}
}
}
}
}
private static String randomErrorRetryEndpoint() {
switch(RandomInts.randomIntBetween(getRandom(), 0, 3)) {
case 0:
return "/" + randomErrorRetryStatusCode(getRandom());
case 1:
return "/coe";
case 2:
return "/soe";
case 3:
return "/ioe";
}
throw new UnsupportedOperationException();
}
}
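For completeness, a minimal sketch of the multi-host setup these tests exercise; the client round-robins requests across the configured hosts and blacklists a host once it fails. The ports are placeholders:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

class MultipleHostsSketch {
    static RestClient buildClient() {
        //requests rotate across all hosts; a failing host is blacklisted, then resurrected one retry at a time
        return RestClient.builder(
                new HttpHost("localhost", 9200),
                new HttpHost("localhost", 9201),
                new HttpHost("localhost", 9202)).build();
    }
}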

View File

@ -0,0 +1,450 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import com.carrotsearch.randomizedtesting.generators.RandomStrings;
import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpHost;
import org.apache.http.HttpRequest;
import org.apache.http.ProtocolVersion;
import org.apache.http.StatusLine;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.client.methods.HttpOptions;
import org.apache.http.client.methods.HttpPatch;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.client.methods.HttpTrace;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.conn.ConnectTimeoutException;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.message.BasicStatusLine;
import org.apache.http.util.EntityUtils;
import org.junit.Before;
import org.mockito.ArgumentCaptor;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import static org.elasticsearch.client.RestClientTestUtil.getAllErrorStatusCodes;
import static org.elasticsearch.client.RestClientTestUtil.getHttpMethods;
import static org.elasticsearch.client.RestClientTestUtil.getOkStatusCodes;
import static org.elasticsearch.client.RestClientTestUtil.randomHttpMethod;
import static org.elasticsearch.client.RestClientTestUtil.randomStatusCode;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.instanceOf;
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
/**
* Tests for basic functionality of {@link RestClient} against one single host: the http requests that get sent, headers,
* body, and the responses/exceptions corresponding to different status codes.
* Relies on a mock http client to intercept requests and return desired responses based on request path.
*/
public class RestClientSingleHostTests extends RestClientTestCase {
private RestClient restClient;
private Header[] defaultHeaders;
private HttpHost httpHost;
private CloseableHttpClient httpClient;
private TrackingFailureListener failureListener;
@Before
public void createRestClient() throws IOException {
httpClient = mock(CloseableHttpClient.class);
when(httpClient.execute(any(HttpHost.class), any(HttpRequest.class))).thenAnswer(new Answer<CloseableHttpResponse>() {
@Override
public CloseableHttpResponse answer(InvocationOnMock invocationOnMock) throws Throwable {
HttpUriRequest request = (HttpUriRequest) invocationOnMock.getArguments()[1];
//return the desired status code or exception depending on the path
if (request.getURI().getPath().equals("/soe")) {
throw new SocketTimeoutException();
} else if (request.getURI().getPath().equals("/coe")) {
throw new ConnectTimeoutException();
}
int statusCode = Integer.parseInt(request.getURI().getPath().substring(1));
StatusLine statusLine = new BasicStatusLine(new ProtocolVersion("http", 1, 1), statusCode, "");
CloseableHttpResponse httpResponse = new CloseableBasicHttpResponse(statusLine);
//return the same body that was sent
if (request instanceof HttpEntityEnclosingRequest) {
HttpEntity entity = ((HttpEntityEnclosingRequest) request).getEntity();
if (entity != null) {
assertTrue("the entity is not repeatable, cannot set it to the response directly", entity.isRepeatable());
httpResponse.setEntity(entity);
}
}
//return the same headers that were sent
httpResponse.setHeaders(request.getAllHeaders());
return httpResponse;
}
});
int numHeaders = RandomInts.randomIntBetween(getRandom(), 0, 3);
defaultHeaders = new Header[numHeaders];
for (int i = 0; i < numHeaders; i++) {
String headerName = "Header-default" + (getRandom().nextBoolean() ? i : "");
String headerValue = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 3, 10);
defaultHeaders[i] = new BasicHeader(headerName, headerValue);
}
httpHost = new HttpHost("localhost", 9200);
failureListener = new TrackingFailureListener();
restClient = RestClient.builder(httpHost).setHttpClient(httpClient).setDefaultHeaders(defaultHeaders)
.setFailureListener(failureListener).build();
}
/**
* Verifies the content of the {@link HttpRequest} that's internally created and passed through to the http client
*/
public void testInternalHttpRequest() throws Exception {
ArgumentCaptor<HttpUriRequest> requestArgumentCaptor = ArgumentCaptor.forClass(HttpUriRequest.class);
int times = 0;
for (String httpMethod : getHttpMethods()) {
HttpUriRequest expectedRequest = performRandomRequest(httpMethod);
verify(httpClient, times(++times)).execute(any(HttpHost.class), requestArgumentCaptor.capture());
HttpUriRequest actualRequest = requestArgumentCaptor.getValue();
assertEquals(expectedRequest.getURI(), actualRequest.getURI());
assertEquals(expectedRequest.getClass(), actualRequest.getClass());
assertArrayEquals(expectedRequest.getAllHeaders(), actualRequest.getAllHeaders());
if (expectedRequest instanceof HttpEntityEnclosingRequest) {
HttpEntity expectedEntity = ((HttpEntityEnclosingRequest) expectedRequest).getEntity();
if (expectedEntity != null) {
HttpEntity actualEntity = ((HttpEntityEnclosingRequest) actualRequest).getEntity();
assertEquals(EntityUtils.toString(expectedEntity), EntityUtils.toString(actualEntity));
}
}
}
}
public void testSetHosts() throws IOException {
try {
restClient.setHosts((HttpHost[]) null);
fail("setHosts should have failed");
} catch (IllegalArgumentException e) {
assertEquals("hosts must not be null nor empty", e.getMessage());
}
try {
restClient.setHosts();
fail("setHosts should have failed");
} catch (IllegalArgumentException e) {
assertEquals("hosts must not be null nor empty", e.getMessage());
}
try {
restClient.setHosts((HttpHost) null);
fail("setHosts should have failed");
} catch (NullPointerException e) {
assertEquals("host cannot be null", e.getMessage());
}
try {
restClient.setHosts(new HttpHost("localhost", 9200), null, new HttpHost("localhost", 9201));
fail("setHosts should have failed");
} catch (NullPointerException e) {
assertEquals("host cannot be null", e.getMessage());
}
}
/**
* End to end test for ok status codes
*/
public void testOkStatusCodes() throws Exception {
for (String method : getHttpMethods()) {
for (int okStatusCode : getOkStatusCodes()) {
Response response = performRequest(method, "/" + okStatusCode);
assertThat(response.getStatusLine().getStatusCode(), equalTo(okStatusCode));
}
}
failureListener.assertNotCalled();
}
/**
* End to end test for error status codes: they should cause an exception to be thrown, apart from 404 with HEAD requests
*/
public void testErrorStatusCodes() throws Exception {
for (String method : getHttpMethods()) {
//error status codes should cause an exception to be thrown
for (int errorStatusCode : getAllErrorStatusCodes()) {
try (Response response = performRequest(method, "/" + errorStatusCode)) {
if (method.equals("HEAD") && errorStatusCode == 404) {
//no exception gets thrown although we got a 404
assertThat(response.getStatusLine().getStatusCode(), equalTo(errorStatusCode));
} else {
fail("request should have failed");
}
} catch(ResponseException e) {
if (method.equals("HEAD") && errorStatusCode == 404) {
throw e;
}
assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(errorStatusCode));
}
if (errorStatusCode <= 500) {
failureListener.assertNotCalled();
} else {
failureListener.assertCalled(httpHost);
}
}
}
}
public void testIOExceptions() throws IOException {
for (String method : getHttpMethods()) {
//IOExceptions should be allowed to bubble up
try {
performRequest(method, "/coe");
fail("request should have failed");
} catch(IOException e) {
assertThat(e, instanceOf(ConnectTimeoutException.class));
}
failureListener.assertCalled(httpHost);
try {
performRequest(method, "/soe");
fail("request should have failed");
} catch(IOException e) {
assertThat(e, instanceOf(SocketTimeoutException.class));
}
failureListener.assertCalled(httpHost);
}
}
/**
* End to end test for request and response body. Exercises the mock http client's ability to send back
* whatever body it has received.
*/
public void testBody() throws Exception {
String body = "{ \"field\": \"value\" }";
StringEntity entity = new StringEntity(body);
for (String method : Arrays.asList("DELETE", "GET", "PATCH", "POST", "PUT")) {
for (int okStatusCode : getOkStatusCodes()) {
try (Response response = restClient.performRequest(method, "/" + okStatusCode,
Collections.<String, String>emptyMap(), entity)) {
assertThat(response.getStatusLine().getStatusCode(), equalTo(okStatusCode));
assertThat(EntityUtils.toString(response.getEntity()), equalTo(body));
}
}
for (int errorStatusCode : getAllErrorStatusCodes()) {
try {
restClient.performRequest(method, "/" + errorStatusCode, Collections.<String, String>emptyMap(), entity);
fail("request should have failed");
} catch(ResponseException e) {
Response response = e.getResponse();
assertThat(response.getStatusLine().getStatusCode(), equalTo(errorStatusCode));
assertThat(EntityUtils.toString(response.getEntity()), equalTo(body));
}
}
}
for (String method : Arrays.asList("HEAD", "OPTIONS", "TRACE")) {
try {
restClient.performRequest(method, "/" + randomStatusCode(getRandom()), Collections.<String, String>emptyMap(), entity);
fail("request should have failed");
} catch(UnsupportedOperationException e) {
assertThat(e.getMessage(), equalTo(method + " with body is not supported"));
}
}
}
public void testNullHeaders() throws Exception {
String method = randomHttpMethod(getRandom());
int statusCode = randomStatusCode(getRandom());
try {
performRequest(method, "/" + statusCode, (Header[])null);
fail("request should have failed");
} catch(NullPointerException e) {
assertEquals("request headers must not be null", e.getMessage());
}
try {
performRequest(method, "/" + statusCode, (Header)null);
fail("request should have failed");
} catch(NullPointerException e) {
assertEquals("request header must not be null", e.getMessage());
}
}
public void testNullParams() throws Exception {
String method = randomHttpMethod(getRandom());
int statusCode = randomStatusCode(getRandom());
try {
restClient.performRequest(method, "/" + statusCode, (Map<String, String>)null);
fail("request should have failed");
} catch(NullPointerException e) {
assertEquals("params must not be null", e.getMessage());
}
try {
restClient.performRequest(method, "/" + statusCode, null, (HttpEntity)null);
fail("request should have failed");
} catch(NullPointerException e) {
assertEquals("params must not be null", e.getMessage());
}
}
/**
* End to end test for request and response headers. Exercises the mock http client's ability to send back
* whatever headers it has received.
*/
public void testHeaders() throws Exception {
for (String method : getHttpMethods()) {
Map<String, String> expectedHeaders = new HashMap<>();
for (Header defaultHeader : defaultHeaders) {
expectedHeaders.put(defaultHeader.getName(), defaultHeader.getValue());
}
int numHeaders = RandomInts.randomIntBetween(getRandom(), 1, 5);
Header[] headers = new Header[numHeaders];
for (int i = 0; i < numHeaders; i++) {
String headerName = "Header" + (getRandom().nextBoolean() ? i : "");
String headerValue = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 3, 10);
headers[i] = new BasicHeader(headerName, headerValue);
expectedHeaders.put(headerName, headerValue);
}
int statusCode = randomStatusCode(getRandom());
Response esResponse;
try (Response response = restClient.performRequest(method, "/" + statusCode,
Collections.<String, String>emptyMap(), null, headers)) {
esResponse = response;
} catch(ResponseException e) {
esResponse = e.getResponse();
}
assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode));
for (Header responseHeader : esResponse.getHeaders()) {
String headerValue = expectedHeaders.remove(responseHeader.getName());
assertNotNull("found response header [" + responseHeader.getName() + "] that wasn't originally sent", headerValue);
}
assertEquals("some headers that were sent weren't returned " + expectedHeaders, 0, expectedHeaders.size());
}
}
private HttpUriRequest performRandomRequest(String method) throws IOException, URISyntaxException {
String uriAsString = "/" + randomStatusCode(getRandom());
URIBuilder uriBuilder = new URIBuilder(uriAsString);
Map<String, String> params = Collections.emptyMap();
boolean hasParams = randomBoolean();
if (hasParams) {
int numParams = RandomInts.randomIntBetween(getRandom(), 1, 3);
params = new HashMap<>(numParams);
for (int i = 0; i < numParams; i++) {
String paramKey = "param-" + i;
String paramValue = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 3, 10);
params.put(paramKey, paramValue);
uriBuilder.addParameter(paramKey, paramValue);
}
}
URI uri = uriBuilder.build();
HttpUriRequest request;
switch(method) {
case "DELETE":
request = new HttpDeleteWithEntity(uri);
break;
case "GET":
request = new HttpGetWithEntity(uri);
break;
case "HEAD":
request = new HttpHead(uri);
break;
case "OPTIONS":
request = new HttpOptions(uri);
break;
case "PATCH":
request = new HttpPatch(uri);
break;
case "POST":
request = new HttpPost(uri);
break;
case "PUT":
request = new HttpPut(uri);
break;
case "TRACE":
request = new HttpTrace(uri);
break;
default:
throw new UnsupportedOperationException("method not supported: " + method);
}
HttpEntity entity = null;
boolean hasBody = request instanceof HttpEntityEnclosingRequest && getRandom().nextBoolean();
if (hasBody) {
entity = new StringEntity(RandomStrings.randomAsciiOfLengthBetween(getRandom(), 10, 100));
((HttpEntityEnclosingRequest) request).setEntity(entity);
}
Header[] headers = new Header[0];
for (Header defaultHeader : defaultHeaders) {
//default headers are expected on every request even though they are not explicitly sent with each one
request.setHeader(defaultHeader);
}
if (getRandom().nextBoolean()) {
int numHeaders = RandomInts.randomIntBetween(getRandom(), 1, 5);
headers = new Header[numHeaders];
for (int i = 0; i < numHeaders; i++) {
String headerName = "Header" + (getRandom().nextBoolean() ? i : "");
String headerValue = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 3, 10);
BasicHeader basicHeader = new BasicHeader(headerName, headerValue);
headers[i] = basicHeader;
request.setHeader(basicHeader);
}
}
try {
if (hasParams == false && hasBody == false && randomBoolean()) {
restClient.performRequest(method, uriAsString, headers);
} else if (hasBody == false && randomBoolean()) {
restClient.performRequest(method, uriAsString, params, headers);
} else {
restClient.performRequest(method, uriAsString, params, entity, headers);
}
} catch(ResponseException e) {
//all good
}
return request;
}
private Response performRequest(String method, String endpoint, Header... headers) throws IOException {
switch(randomIntBetween(0, 2)) {
case 0:
return restClient.performRequest(method, endpoint, headers);
case 1:
return restClient.performRequest(method, endpoint, Collections.<String, String>emptyMap(), headers);
case 2:
return restClient.performRequest(method, endpoint, Collections.<String, String>emptyMap(), null, headers);
default:
throw new UnsupportedOperationException();
}
}
}
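One more API pinned down by the tests above: the set of hosts can be replaced at runtime via setHosts. A sketch, with placeholder ports:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

class SetHostsSketch {
    static void rotateHosts(RestClient restClient) {
        //replaces the hosts the client round-robins across; null or empty arguments are rejected
        restClient.setHosts(new HttpHost("localhost", 9210), new HttpHost("localhost", 9211));
    }
}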

View File

@ -0,0 +1,52 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import org.apache.http.HttpHost;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
/**
* {@link org.elasticsearch.client.RestClient.FailureListener} impl that tracks the hosts it gets notified about, so tests can assert on failures
*/
class TrackingFailureListener extends RestClient.FailureListener {
private Set<HttpHost> hosts = new HashSet<>();
@Override
public void onFailure(HttpHost host) throws IOException {
hosts.add(host);
}
void assertCalled(HttpHost... hosts) {
assertEquals(hosts.length, this.hosts.size());
assertThat(this.hosts, containsInAnyOrder(hosts));
this.hosts.clear();
}
void assertNotCalled() {
assertEquals(0, hosts.size());
}
}

View File

@ -0,0 +1,88 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.JavaVersion
apply plugin: 'elasticsearch.build'
apply plugin: 'ru.vyarus.animalsniffer'
targetCompatibility = JavaVersion.VERSION_1_7
sourceCompatibility = JavaVersion.VERSION_1_7
group = 'org.elasticsearch.client'
dependencies {
compile "org.elasticsearch.client:rest:${version}"
compile "org.apache.httpcomponents:httpclient:${versions.httpclient}"
compile "org.apache.httpcomponents:httpcore:${versions.httpcore}"
compile "commons-codec:commons-codec:${versions.commonscodec}"
compile "commons-logging:commons-logging:${versions.commonslogging}"
compile "com.fasterxml.jackson.core:jackson-core:${versions.jackson}"
testCompile "org.elasticsearch.client:test:${version}"
testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}"
testCompile "junit:junit:${versions.junit}"
testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}"
testCompile "org.elasticsearch:securemock:${versions.securemock}"
testCompile "org.codehaus.mojo:animal-sniffer-annotations:1.15"
signature "org.codehaus.mojo.signature:java17:1.0@signature"
}
forbiddenApisMain {
//client does not depend on core, so only jdk signatures should be checked
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
forbiddenApisTest {
//we are using jdk-internal instead of jdk-non-portable to allow for com.sun.net.httpserver.* usage
bundledSignatures -= 'jdk-non-portable'
bundledSignatures += 'jdk-internal'
//client does not depend on core, so only jdk signatures should be checked
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
//JarHell is part of es core, which we don't want to pull in
jarHell.enabled=false
namingConventions {
testClass = 'org.elasticsearch.client.RestClientTestCase'
//we don't have integration tests
skipIntegTestInDisguise = true
}
dependencyLicenses {
dependencies = project.configurations.runtime.fileCollection {
it.group.startsWith('org.elasticsearch') == false
}
}
thirdPartyAudit.excludes = [
//commons-logging optional dependencies
'org.apache.avalon.framework.logger.Logger',
'org.apache.log.Hierarchy',
'org.apache.log.Logger',
'org.apache.log4j.Category',
'org.apache.log4j.Level',
'org.apache.log4j.Logger',
'org.apache.log4j.Priority',
//commons-logging provided dependencies
'javax.servlet.ServletContextEvent',
'javax.servlet.ServletContextListener'
]

View File

@ -0,0 +1 @@
4b95f4897fa13f2cd904aee711aeafc0c5295cd8

View File

@ -0,0 +1,17 @@
Apache Commons Codec
Copyright 2002-2014 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
src/test/org/apache/commons/codec/language/DoubleMetaphoneTest.java
contains test data from http://aspell.net/test/orig/batch0.tab.
Copyright (C) 2002 Kevin Atkinson (kevina@gnu.org)
===============================================================================
The content of package org.apache.commons.codec.language.bm has been translated
from the original php source code available at http://stevemorse.org/phoneticinfo.htm
with permission from the original authors.
Original source copyright:
Copyright (c) 2008 Alexander Beider & Stephen P. Morse.

View File

@ -0,0 +1 @@
f6f66e966c70a83ffbdb6f17a0919eaf7c8aca7f

View File

@ -0,0 +1,6 @@
Apache Commons Logging
Copyright 2003-2014 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).

View File

@ -0,0 +1 @@
733db77aa8d9b2d68015189df76ab06304406e50

View File

@ -0,0 +1,558 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
=========================================================================
This project includes Public Suffix List copied from
<https://publicsuffix.org/list/effective_tld_names.dat>
licensed under the terms of the Mozilla Public License, v. 2.0
Full license text: <http://mozilla.org/MPL/2.0/>
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
View File
@ -0,0 +1,6 @@
Apache HttpComponents Client
Copyright 1999-2016 The Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
View File
@ -0,0 +1 @@
b31526a230871fbe285fbcbe2813f9c0839ae9b0
View File
@ -0,0 +1 @@
4127b62db028f981e81caa248953c0899d720f98
View File
@ -0,0 +1,194 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.sniff;

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.http.HttpEntity;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;

/**
 * Class responsible for sniffing the http hosts from elasticsearch through the nodes info api and returning them.
 * Compatible with elasticsearch 5.x and 2.x.
 */
public class HostsSniffer {

    private static final Log logger = LogFactory.getLog(HostsSniffer.class);

    private final RestClient restClient;
    private final Map<String, String> sniffRequestParams;
    private final Scheme scheme;
    private final JsonFactory jsonFactory = new JsonFactory();

    protected HostsSniffer(RestClient restClient, long sniffRequestTimeoutMillis, Scheme scheme) {
        this.restClient = restClient;
        this.sniffRequestParams = Collections.<String, String>singletonMap("timeout", sniffRequestTimeoutMillis + "ms");
        this.scheme = scheme;
    }

    /**
     * Calls the elasticsearch nodes info api, parses the response and returns all the http hosts found
     */
    public List<HttpHost> sniffHosts() throws IOException {
        try (Response response = restClient.performRequest("get", "/_nodes/http", sniffRequestParams)) {
            return readHosts(response.getEntity());
        }
    }

    private List<HttpHost> readHosts(HttpEntity entity) throws IOException {
        try (InputStream inputStream = entity.getContent()) {
            JsonParser parser = jsonFactory.createParser(inputStream);
            if (parser.nextToken() != JsonToken.START_OBJECT) {
                throw new IOException("expected data to start with an object");
            }
            List<HttpHost> hosts = new ArrayList<>();
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                if (parser.getCurrentToken() == JsonToken.START_OBJECT) {
                    if ("nodes".equals(parser.getCurrentName())) {
                        //iterate over the node entries; each outer token is the node id field, the inner one its object
                        while (parser.nextToken() != JsonToken.END_OBJECT) {
                            JsonToken token = parser.nextToken();
                            assert token == JsonToken.START_OBJECT;
                            String nodeId = parser.getCurrentName();
                            HttpHost sniffedHost = readHost(nodeId, parser, this.scheme);
                            if (sniffedHost != null) {
                                logger.trace("adding node [" + nodeId + "]");
                                hosts.add(sniffedHost);
                            }
                        }
                    } else {
                        parser.skipChildren();
                    }
                }
            }
            return hosts;
        }
    }

    private static HttpHost readHost(String nodeId, JsonParser parser, Scheme scheme) throws IOException {
        HttpHost httpHost = null;
        String fieldName = null;
        while (parser.nextToken() != JsonToken.END_OBJECT) {
            if (parser.getCurrentToken() == JsonToken.FIELD_NAME) {
                fieldName = parser.getCurrentName();
            } else if (parser.getCurrentToken() == JsonToken.START_OBJECT) {
                if ("http".equals(fieldName)) {
                    while (parser.nextToken() != JsonToken.END_OBJECT) {
                        if (parser.getCurrentToken() == JsonToken.VALUE_STRING && "publish_address".equals(parser.getCurrentName())) {
                            URI boundAddressAsURI = URI.create(scheme + "://" + parser.getValueAsString());
                            httpHost = new HttpHost(boundAddressAsURI.getHost(), boundAddressAsURI.getPort(),
                                    boundAddressAsURI.getScheme());
                        } else if (parser.getCurrentToken() == JsonToken.START_OBJECT) {
                            parser.skipChildren();
                        }
                    }
                } else {
                    parser.skipChildren();
                }
            }
        }
        //http section is not present if http is not enabled on the node, ignore such nodes
        if (httpHost == null) {
            logger.debug("skipping node [" + nodeId + "] with http disabled");
            return null;
        }
        return httpHost;
    }

    /**
     * Returns a new {@link Builder} to help with {@link HostsSniffer} creation.
     */
    public static Builder builder(RestClient restClient) {
        return new Builder(restClient);
    }

    public enum Scheme {
        HTTP("http"), HTTPS("https");

        private final String name;

        Scheme(String name) {
            this.name = name;
        }

        @Override
        public String toString() {
            return name;
        }
    }

    /**
     * HostsSniffer builder. Helps create a new {@link HostsSniffer}.
     */
    public static class Builder {
        public static final long DEFAULT_SNIFF_REQUEST_TIMEOUT = TimeUnit.SECONDS.toMillis(1);

        private final RestClient restClient;
        private long sniffRequestTimeoutMillis = DEFAULT_SNIFF_REQUEST_TIMEOUT;
        private Scheme scheme = Scheme.HTTP;

        private Builder(RestClient restClient) {
            Objects.requireNonNull(restClient, "restClient cannot be null");
            this.restClient = restClient;
        }

        /**
         * Sets the sniff request timeout (in milliseconds) to be passed in as a query string parameter to elasticsearch.
         * This allows the request to complete without failing, as only the nodes that have responded within this timeout are returned.
         */
        public Builder setSniffRequestTimeoutMillis(int sniffRequestTimeoutMillis) {
            if (sniffRequestTimeoutMillis <= 0) {
                throw new IllegalArgumentException("sniffRequestTimeoutMillis must be greater than 0");
            }
            this.sniffRequestTimeoutMillis = sniffRequestTimeoutMillis;
            return this;
        }

        /**
         * Sets the scheme to associate sniffed nodes with (as it is not returned by elasticsearch)
         */
        public Builder setScheme(Scheme scheme) {
            Objects.requireNonNull(scheme, "scheme cannot be null");
            this.scheme = scheme;
            return this;
        }

        /**
         * Creates a new {@link HostsSniffer} instance given the provided configuration
         */
        public HostsSniffer build() {
            return new HostsSniffer(restClient, sniffRequestTimeoutMillis, scheme);
        }
    }
}
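A minimal usage sketch for the class above. The host and port are placeholders, the snippet reuses the imports from the file, and the static RestClient.builder entry point is an assumption (it is not part of this file); only the HostsSniffer calls are taken from the code shown here:

HttpHost seedHost = new HttpHost("localhost", 9200, "http"); // placeholder address
RestClient restClient = RestClient.builder(seedHost).build(); // assumed low-level client entry point
HostsSniffer hostsSniffer = HostsSniffer.builder(restClient)
        .setSniffRequestTimeoutMillis(2000) // wait at most 2s for nodes to answer /_nodes/http
        .build();
List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts(); // one sniffing round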
View File
@ -0,0 +1,65 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.client.sniff;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

import java.io.IOException;
import java.util.Objects;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * {@link org.elasticsearch.client.RestClient.FailureListener} implementation that allows sniffing to be performed
 * on failure. Gets notified whenever a failure happens and uses a {@link Sniffer} instance
 * to manually reload hosts and set them back on the {@link RestClient}. The {@link Sniffer} instance
 * needs to be lazily set through {@link #setSniffer(Sniffer)}.
 */
public class SniffOnFailureListener extends RestClient.FailureListener {

    private volatile Sniffer sniffer;
    private final AtomicBoolean set;

    public SniffOnFailureListener() {
        this.set = new AtomicBoolean(false);
    }

    /**
     * Sets the {@link Sniffer} instance used to perform sniffing
     * @throws IllegalStateException if the sniffer was already set, as it can only be set once
     */
    public void setSniffer(Sniffer sniffer) {
        Objects.requireNonNull(sniffer, "sniffer must not be null");
        if (set.compareAndSet(false, true)) {
            this.sniffer = sniffer;
        } else {
            throw new IllegalStateException("sniffer can only be set once");
        }
    }

    @Override
    public void onFailure(HttpHost host) throws IOException {
        if (sniffer == null) {
            throw new IllegalStateException("sniffer was not set, unable to sniff on failure");
        }
        //re-sniff immediately but take out the node that failed
        sniffer.sniffOnFailure(host);
    }
}
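A hedged wiring sketch for the listener above together with the Sniffer class that follows. The address is a placeholder, RestClient.Builder#setFailureListener comes from the javadoc above, and both RestClient.builder and Sniffer.Builder#build() are assumptions (the Sniffer builder is only partially shown below):

SniffOnFailureListener failureListener = new SniffOnFailureListener();
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "http")) // placeholder address
        .setFailureListener(failureListener) // register before the client is built
        .build();
HostsSniffer hostsSniffer = HostsSniffer.builder(restClient).build();
Sniffer sniffer = Sniffer.builder(restClient, hostsSniffer).build(); // build() assumed, see note above
failureListener.setSniffer(sniffer); // lazily resolve the listener/sniffer circular dependency

The ordering matters: the listener must exist before the client is built, but the Sniffer can only be created once the client exists, so setSniffer is what breaks the cycle.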
View File
@ -0,0 +1,206 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.sniff;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
/**
* Class responsible for sniffing nodes from an elasticsearch cluster and setting them to a provided instance of {@link RestClient}.
* Must be created via {@link Builder}, which allows to set all of the different options or rely on defaults.
* A background task fetches the nodes through the {@link HostsSniffer} and sets them to the {@link RestClient} instance.
* It is possible to perform sniffing on failure by creating a {@link SniffOnFailureListener} and providing it as an argument to
* {@link org.elasticsearch.client.RestClient.Builder#setFailureListener(RestClient.FailureListener)}. The Sniffer implementation
* needs to be lazily set to the previously created SniffOnFailureListener through {@link SniffOnFailureListener#setSniffer(Sniffer)}.
*/
public final class Sniffer implements Closeable {
private static final Log logger = LogFactory.getLog(Sniffer.class);
private final Task task;
private Sniffer(RestClient restClient, HostsSniffer hostsSniffer, long sniffInterval, long sniffAfterFailureDelay) {
this.task = new Task(hostsSniffer, restClient, sniffInterval, sniffAfterFailureDelay);
}
/**
 * Triggers a new sniffing round and explicitly takes out the failed host provided as an argument
*/
public void sniffOnFailure(HttpHost failedHost) {
this.task.sniffOnFailure(failedHost);
}
@Override
public void close() throws IOException {
task.shutdown();
}
private static class Task implements Runnable {
private final HostsSniffer hostsSniffer;
private final RestClient restClient;
private final long sniffIntervalMillis;
private final long sniffAfterFailureDelayMillis;
private final ScheduledExecutorService scheduledExecutorService;
private final AtomicBoolean running = new AtomicBoolean(false);
private ScheduledFuture<?> scheduledFuture;
private Task(HostsSniffer hostsSniffer, RestClient restClient, long sniffIntervalMillis, long sniffAfterFailureDelayMillis) {
this.hostsSniffer = hostsSniffer;
this.restClient = restClient;
this.sniffIntervalMillis = sniffIntervalMillis;
this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis;
this.scheduledExecutorService = Executors.newScheduledThreadPool(1);
scheduleNextRun(0);
}
synchronized void scheduleNextRun(long delayMillis) {
if (scheduledExecutorService.isShutdown() == false) {
try {
if (scheduledFuture != null) {
//regardless of when the next sniff is scheduled, cancel it and schedule a new one with updated delay
this.scheduledFuture.cancel(false);
}
logger.debug("scheduling next sniff in " + delayMillis + " ms");
this.scheduledFuture = this.scheduledExecutorService.schedule(this, delayMillis, TimeUnit.MILLISECONDS);
} catch(Exception e) {
logger.error("error while scheduling next sniffer task", e);
}
}
}
@Override
public void run() {
sniff(null, sniffIntervalMillis);
}
void sniffOnFailure(HttpHost failedHost) {
sniff(failedHost, sniffAfterFailureDelayMillis);
}
void sniff(HttpHost excludeHost, long nextSniffDelayMillis) {
if (running.compareAndSet(false, true)) {
try {
List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts();
logger.debug("sniffed hosts: " + sniffedHosts);
if (excludeHost != null) {
sniffedHosts.remove(excludeHost);
}
if (sniffedHosts.isEmpty()) {
logger.warn("no hosts to set, hosts will be updated at the next sniffing round");
} else {
this.restClient.setHosts(sniffedHosts.toArray(new HttpHost[sniffedHosts.size()]));
}
} catch (Exception e) {
logger.error("error while sniffing nodes", e);
} finally {
scheduleNextRun(nextSniffDelayMillis);
running.set(false);
}
}
}
synchronized void shutdown() {
scheduledExecutorService.shutdown();
try {
if (scheduledExecutorService.awaitTermination(1000, TimeUnit.MILLISECONDS)) {
return;
}
scheduledExecutorService.shutdownNow();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
/**
* Returns a new {@link Builder} to help with {@link Sniffer} creation.
*/
public static Builder builder(RestClient restClient, HostsSniffer hostsSniffer) {
return new Builder(restClient, hostsSniffer);
}
/**
* Sniffer builder. Helps creating a new {@link Sniffer}.
*/
public static final class Builder {
public static final long DEFAULT_SNIFF_INTERVAL = TimeUnit.MINUTES.toMillis(5);
public static final long DEFAULT_SNIFF_AFTER_FAILURE_DELAY = TimeUnit.MINUTES.toMillis(1);
private final RestClient restClient;
private final HostsSniffer hostsSniffer;
private long sniffIntervalMillis = DEFAULT_SNIFF_INTERVAL;
private long sniffAfterFailureDelayMillis = DEFAULT_SNIFF_AFTER_FAILURE_DELAY;
/**
 * Creates a new builder instance by providing the {@link RestClient} that will be used to communicate with elasticsearch,
 * and the {@link HostsSniffer} that will be used to fetch the nodes to be set to the client.
 */
private Builder(RestClient restClient, HostsSniffer hostsSniffer) {
Objects.requireNonNull(restClient, "restClient cannot be null");
this.restClient = restClient;
Objects.requireNonNull(hostsSniffer, "hostsSniffer cannot be null");
this.hostsSniffer = hostsSniffer;
}
/**
* Sets the interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when
* sniffOnFailure is disabled or when there are no failures between consecutive sniff executions.
* @throws IllegalArgumentException if sniffIntervalMillis is not greater than 0
*/
public Builder setSniffIntervalMillis(int sniffIntervalMillis) {
if (sniffIntervalMillis <= 0) {
throw new IllegalArgumentException("sniffIntervalMillis must be greater than 0");
}
this.sniffIntervalMillis = sniffIntervalMillis;
return this;
}
/**
* Sets the delay of a sniff execution scheduled after a failure (in milliseconds)
*/
public Builder setSniffAfterFailureDelayMillis(int sniffAfterFailureDelayMillis) {
if (sniffAfterFailureDelayMillis <= 0) {
throw new IllegalArgumentException("sniffAfterFailureDelayMillis must be greater than 0");
}
this.sniffAfterFailureDelayMillis = sniffAfterFailureDelayMillis;
return this;
}
/**
* Creates the {@link Sniffer} based on the provided configuration.
*/
public Sniffer build() {
return new Sniffer(restClient, hostsSniffer, sniffIntervalMillis, sniffAfterFailureDelayMillis);
}
}
}
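Continuing the sketch above, a short example of building a Sniffer with non-default delays through this builder; the interval values are illustrative, not recommendations:

// Sniff every minute; after a failure, sniff again 30 seconds later.
Sniffer sniffer = Sniffer.builder(restClient, hostsSniffer)
        .setSniffIntervalMillis(60000)
        .setSniffAfterFailureDelayMillis(30000)
        .build();
// Closing the Sniffer stops the background task and shuts down its executor.
sniffer.close();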


@@ -0,0 +1,73 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.sniff;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import com.carrotsearch.randomizedtesting.generators.RandomPicks;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientTestCase;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.fail;
public class HostsSnifferBuilderTests extends RestClientTestCase {
public void testBuild() throws Exception {
try {
HostsSniffer.builder(null);
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("restClient cannot be null", e.getMessage());
}
int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5);
HttpHost[] hosts = new HttpHost[numNodes];
for (int i = 0; i < numNodes; i++) {
hosts[i] = new HttpHost("localhost", 9200 + i);
}
try (RestClient client = RestClient.builder(hosts).build()) {
try {
HostsSniffer.builder(client).setScheme(null);
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("scheme cannot be null", e.getMessage());
}
try {
HostsSniffer.builder(client).setSniffRequestTimeoutMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
fail("should have failed");
} catch(IllegalArgumentException e) {
assertEquals("sniffRequestTimeoutMillis must be greater than 0", e.getMessage());
}
HostsSniffer.Builder builder = HostsSniffer.builder(client);
if (getRandom().nextBoolean()) {
builder.setScheme(RandomPicks.randomFrom(getRandom(), HostsSniffer.Scheme.values()));
}
if (getRandom().nextBoolean()) {
builder.setSniffRequestTimeoutMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
}
assertNotNull(builder.build());
}
}
}
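For reference, a hedged sketch of the builder usage these tests exercise; the scheme and timeout values are arbitrary examples:

HostsSniffer hostsSniffer = HostsSniffer.builder(restClient)
        .setScheme(HostsSniffer.Scheme.HTTP)   // scheme applied to the sniffed hosts
        .setSniffRequestTimeoutMillis(2000)    // timeout forwarded as a query parameter
        .build();
// Performs GET /_nodes/http?timeout=2000ms and parses the publish addresses.
List<HttpHost> sniffedHosts = hostsSniffer.sniffHosts();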


@@ -0,0 +1,276 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.sniff;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import com.carrotsearch.randomizedtesting.generators.RandomPicks;
import com.carrotsearch.randomizedtesting.generators.RandomStrings;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import org.apache.http.Consts;
import org.apache.http.HttpHost;
import org.apache.http.client.methods.HttpGet;
import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientTestCase;
import org.junit.After;
import org.junit.Before;
import java.io.IOException;
import java.io.OutputStream;
import java.io.StringWriter;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.fail;
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
@IgnoreJRERequirement
public class HostsSnifferTests extends RestClientTestCase {
private int sniffRequestTimeout;
private HostsSniffer.Scheme scheme;
private SniffResponse sniffResponse;
private HttpServer httpServer;
@Before
public void startHttpServer() throws IOException {
this.sniffRequestTimeout = RandomInts.randomIntBetween(getRandom(), 1000, 10000);
this.scheme = RandomPicks.randomFrom(getRandom(), HostsSniffer.Scheme.values());
if (rarely()) {
this.sniffResponse = SniffResponse.buildFailure();
} else {
this.sniffResponse = buildSniffResponse(scheme);
}
this.httpServer = createHttpServer(sniffResponse, sniffRequestTimeout);
this.httpServer.start();
}
@After
public void stopHttpServer() throws IOException {
httpServer.stop(0);
}
public void testSniffNodes() throws IOException, URISyntaxException {
HttpHost httpHost = new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort());
try (RestClient restClient = RestClient.builder(httpHost).build()) {
HostsSniffer.Builder builder = HostsSniffer.builder(restClient).setSniffRequestTimeoutMillis(sniffRequestTimeout);
if (scheme != HostsSniffer.Scheme.HTTP || randomBoolean()) {
builder.setScheme(scheme);
}
HostsSniffer sniffer = builder.build();
try {
List<HttpHost> sniffedHosts = sniffer.sniffHosts();
if (sniffResponse.isFailure) {
fail("sniffNodes should have failed");
}
assertThat(sniffedHosts.size(), equalTo(sniffResponse.hosts.size()));
Iterator<HttpHost> responseHostsIterator = sniffResponse.hosts.iterator();
for (HttpHost sniffedHost : sniffedHosts) {
assertEquals(sniffedHost, responseHostsIterator.next());
}
} catch(ResponseException e) {
Response response = e.getResponse();
if (sniffResponse.isFailure) {
assertThat(e.getMessage(), containsString("GET " + httpHost + "/_nodes/http?timeout=" + sniffRequestTimeout + "ms"));
assertThat(e.getMessage(), containsString(Integer.toString(sniffResponse.nodesInfoResponseCode)));
assertThat(response.getHost(), equalTo(httpHost));
assertThat(response.getStatusLine().getStatusCode(), equalTo(sniffResponse.nodesInfoResponseCode));
assertThat(response.getRequestLine().toString(),
equalTo("GET /_nodes/http?timeout=" + sniffRequestTimeout + "ms HTTP/1.1"));
} else {
fail("sniffNodes should have succeeded: " + response.getStatusLine());
}
}
}
}
private static HttpServer createHttpServer(final SniffResponse sniffResponse, final int sniffTimeoutMillis) throws IOException {
HttpServer httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0);
httpServer.createContext("/_nodes/http", new ResponseHandler(sniffTimeoutMillis, sniffResponse));
return httpServer;
}
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
@IgnoreJRERequirement
private static class ResponseHandler implements HttpHandler {
private final int sniffTimeoutMillis;
private final SniffResponse sniffResponse;
ResponseHandler(int sniffTimeoutMillis, SniffResponse sniffResponse) {
this.sniffTimeoutMillis = sniffTimeoutMillis;
this.sniffResponse = sniffResponse;
}
@Override
public void handle(HttpExchange httpExchange) throws IOException {
if (httpExchange.getRequestMethod().equals(HttpGet.METHOD_NAME)) {
if (httpExchange.getRequestURI().getRawQuery().equals("timeout=" + sniffTimeoutMillis + "ms")) {
String nodesInfoBody = sniffResponse.nodesInfoBody;
httpExchange.sendResponseHeaders(sniffResponse.nodesInfoResponseCode, nodesInfoBody.length());
try (OutputStream out = httpExchange.getResponseBody()) {
out.write(nodesInfoBody.getBytes(Consts.UTF_8));
return;
}
}
}
httpExchange.sendResponseHeaders(404, 0);
httpExchange.close();
}
}
private static SniffResponse buildSniffResponse(HostsSniffer.Scheme scheme) throws IOException {
int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5);
List<HttpHost> hosts = new ArrayList<>(numNodes);
JsonFactory jsonFactory = new JsonFactory();
StringWriter writer = new StringWriter();
JsonGenerator generator = jsonFactory.createGenerator(writer);
generator.writeStartObject();
if (getRandom().nextBoolean()) {
generator.writeStringField("cluster_name", "elasticsearch");
}
if (getRandom().nextBoolean()) {
generator.writeObjectFieldStart("bogus_object");
generator.writeEndObject();
}
generator.writeObjectFieldStart("nodes");
for (int i = 0; i < numNodes; i++) {
String nodeId = RandomStrings.randomAsciiOfLengthBetween(getRandom(), 5, 10);
generator.writeObjectFieldStart(nodeId);
if (getRandom().nextBoolean()) {
generator.writeObjectFieldStart("bogus_object");
generator.writeEndObject();
}
if (getRandom().nextBoolean()) {
generator.writeArrayFieldStart("bogus_array");
generator.writeStartObject();
generator.writeEndObject();
generator.writeEndArray();
}
boolean isHttpEnabled = rarely() == false;
if (isHttpEnabled) {
String host = "host" + i;
int port = RandomInts.randomIntBetween(getRandom(), 9200, 9299);
HttpHost httpHost = new HttpHost(host, port, scheme.toString());
hosts.add(httpHost);
generator.writeObjectFieldStart("http");
if (getRandom().nextBoolean()) {
generator.writeArrayFieldStart("bound_address");
generator.writeString("[fe80::1]:" + port);
generator.writeString("[::1]:" + port);
generator.writeString("127.0.0.1:" + port);
generator.writeEndArray();
}
if (getRandom().nextBoolean()) {
generator.writeObjectFieldStart("bogus_object");
generator.writeEndObject();
}
generator.writeStringField("publish_address", httpHost.toHostString());
if (getRandom().nextBoolean()) {
generator.writeNumberField("max_content_length_in_bytes", 104857600);
}
generator.writeEndObject();
}
if (getRandom().nextBoolean()) {
String[] roles = {"master", "data", "ingest"};
int numRoles = RandomInts.randomIntBetween(getRandom(), 0, 3);
Set<String> nodeRoles = new HashSet<>(numRoles);
for (int j = 0; j < numRoles; j++) {
String role;
do {
role = RandomPicks.randomFrom(getRandom(), roles);
} while(nodeRoles.add(role) == false);
}
generator.writeArrayFieldStart("roles");
for (String nodeRole : nodeRoles) {
generator.writeString(nodeRole);
}
generator.writeEndArray();
}
int numAttributes = RandomInts.randomIntBetween(getRandom(), 0, 3);
Map<String, String> attributes = new HashMap<>(numAttributes);
for (int j = 0; j < numAttributes; j++) {
attributes.put("attr" + j, "value" + j);
}
if (numAttributes > 0) {
generator.writeObjectFieldStart("attributes");
}
for (Map.Entry<String, String> entry : attributes.entrySet()) {
generator.writeStringField(entry.getKey(), entry.getValue());
}
if (numAttributes > 0) {
generator.writeEndObject();
}
generator.writeEndObject();
}
generator.writeEndObject();
generator.writeEndObject();
generator.close();
return SniffResponse.buildResponse(writer.toString(), hosts);
}
private static class SniffResponse {
private final String nodesInfoBody;
private final int nodesInfoResponseCode;
private final List<HttpHost> hosts;
private final boolean isFailure;
SniffResponse(String nodesInfoBody, List<HttpHost> hosts, boolean isFailure) {
this.nodesInfoBody = nodesInfoBody;
this.hosts = hosts;
this.isFailure = isFailure;
if (isFailure) {
this.nodesInfoResponseCode = randomErrorResponseCode();
} else {
this.nodesInfoResponseCode = 200;
}
}
static SniffResponse buildFailure() {
return new SniffResponse("", Collections.<HttpHost>emptyList(), true);
}
static SniffResponse buildResponse(String nodesInfoBody, List<HttpHost> hosts) {
return new SniffResponse(nodesInfoBody, hosts, false);
}
}
private static int randomErrorResponseCode() {
return RandomInts.randomIntBetween(getRandom(), 400, 599);
}
}


@@ -17,20 +17,23 @@
  * under the License.
  */
 
-package org.elasticsearch.plugins;
+package org.elasticsearch.client.sniff;
 
-import org.elasticsearch.common.inject.AbstractModule;
+import org.apache.http.HttpHost;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
 
-public class PluginsModule extends AbstractModule {
-
-    private final PluginsService pluginsService;
-
-    public PluginsModule(PluginsService pluginsService) {
-        this.pluginsService = pluginsService;
+class MockHostsSniffer extends HostsSniffer {
+    MockHostsSniffer() {
+        super(null, -1, null);
     }
 
     @Override
-    protected void configure() {
-        bind(PluginsService.class).toInstance(pluginsService);
+    public List<HttpHost> sniffHosts() throws IOException {
+        List<HttpHost> hosts = new ArrayList<>();
+        hosts.add(new HttpHost("localhost", 9200));
+        return hosts;
     }
 }


@@ -0,0 +1,60 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.sniff;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientTestCase;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
public class SniffOnFailureListenerTests extends RestClientTestCase {
public void testSetSniffer() throws Exception {
SniffOnFailureListener listener = new SniffOnFailureListener();
try {
listener.onFailure(null);
fail("should have failed");
} catch(IllegalStateException e) {
assertEquals("sniffer was not set, unable to sniff on failure", e.getMessage());
}
try {
listener.setSniffer(null);
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("sniffer must not be null", e.getMessage());
}
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
try (Sniffer sniffer = Sniffer.builder(restClient, new MockHostsSniffer()).build()) {
listener.setSniffer(sniffer);
try {
listener.setSniffer(sniffer);
fail("should have failed");
} catch(IllegalStateException e) {
assertEquals("sniffer can only be set once", e.getMessage());
}
listener.onFailure(new HttpHost("localhost", 9200));
}
}
}


@@ -0,0 +1,89 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client.sniff;
import com.carrotsearch.randomizedtesting.generators.RandomInts;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientTestCase;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.fail;
public class SnifferBuilderTests extends RestClientTestCase {
public void testBuild() throws Exception {
int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5);
HttpHost[] hosts = new HttpHost[numNodes];
for (int i = 0; i < numNodes; i++) {
hosts[i] = new HttpHost("localhost", 9200 + i);
}
HostsSniffer hostsSniffer = new MockHostsSniffer();
try (RestClient client = RestClient.builder(hosts).build()) {
try {
Sniffer.builder(null, hostsSniffer).build();
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("restClient cannot be null", e.getMessage());
}
try {
Sniffer.builder(client, null).build();
fail("should have failed");
} catch(NullPointerException e) {
assertEquals("hostsSniffer cannot be null", e.getMessage());
}
try {
Sniffer.builder(client, hostsSniffer)
.setSniffIntervalMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
fail("should have failed");
} catch(IllegalArgumentException e) {
assertEquals("sniffIntervalMillis must be greater than 0", e.getMessage());
}
try {
Sniffer.builder(client, hostsSniffer)
.setSniffAfterFailureDelayMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
fail("should have failed");
} catch(IllegalArgumentException e) {
assertEquals("sniffAfterFailureDelayMillis must be greater than 0", e.getMessage());
}
try (Sniffer sniffer = Sniffer.builder(client, hostsSniffer).build()) {
assertNotNull(sniffer);
}
Sniffer.Builder builder = Sniffer.builder(client, hostsSniffer);
if (getRandom().nextBoolean()) {
builder.setSniffIntervalMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
}
if (getRandom().nextBoolean()) {
builder.setSniffAfterFailureDelayMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
}
try (Sniffer sniffer = builder.build()) {
assertNotNull(sniffer);
}
}
}
}

client/test/build.gradle

@@ -0,0 +1,63 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.JavaVersion
apply plugin: 'elasticsearch.build'
apply plugin: 'ru.vyarus.animalsniffer'
targetCompatibility = JavaVersion.VERSION_1_7
sourceCompatibility = JavaVersion.VERSION_1_7
install.enabled = false
uploadArchives.enabled = false
dependencies {
compile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}"
compile "junit:junit:${versions.junit}"
compile "org.hamcrest:hamcrest-all:${versions.hamcrest}"
compile "org.codehaus.mojo:animal-sniffer-annotations:1.15"
signature "org.codehaus.mojo.signature:java17:1.0@signature"
}
forbiddenApisMain {
//client does not depend on core, so only jdk signatures should be checked
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
forbiddenApisTest {
//we are using jdk-internal instead of jdk-non-portable to allow for com.sun.net.httpserver.* usage
bundledSignatures -= 'jdk-non-portable'
bundledSignatures += 'jdk-internal'
//client does not depend on core, so only jdk signatures should be checked
signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')]
}
//JarHell is part of es core, which we don't want to pull in
jarHell.enabled=false
// TODO: should we have licenses for our test deps?
dependencyLicenses.enabled = false
namingConventions.enabled = false
//we aren't releasing this jar
thirdPartyAudit.enabled = false
test.enabled = false


@@ -0,0 +1,46 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.JUnit3MethodProvider;
import com.carrotsearch.randomizedtesting.MixWithSuiteName;
import com.carrotsearch.randomizedtesting.RandomizedTest;
import com.carrotsearch.randomizedtesting.annotations.SeedDecorators;
import com.carrotsearch.randomizedtesting.annotations.TestMethodProviders;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakAction;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakGroup;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakLingering;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;
import com.carrotsearch.randomizedtesting.annotations.ThreadLeakZombies;
import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
@TestMethodProviders({
JUnit3MethodProvider.class
})
@SeedDecorators({MixWithSuiteName.class}) // See LUCENE-3995 for rationale.
@ThreadLeakScope(ThreadLeakScope.Scope.SUITE)
@ThreadLeakGroup(ThreadLeakGroup.Group.MAIN)
@ThreadLeakAction({ThreadLeakAction.Action.WARN, ThreadLeakAction.Action.INTERRUPT})
@ThreadLeakZombies(ThreadLeakZombies.Consequence.IGNORE_REMAINING_TESTS)
@ThreadLeakLingering(linger = 5000) // 5 sec lingering
@TimeoutSuite(millis = 2 * 60 * 60 * 1000)
public abstract class RestClientTestCase extends RandomizedTest {
}


@@ -0,0 +1,84 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.client;
import com.carrotsearch.randomizedtesting.generators.RandomPicks;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
final class RestClientTestUtil {
private static final String[] HTTP_METHODS = new String[]{"DELETE", "HEAD", "GET", "OPTIONS", "PATCH", "POST", "PUT", "TRACE"};
private static final List<Integer> ALL_STATUS_CODES;
private static final List<Integer> OK_STATUS_CODES = Arrays.asList(200, 201);
private static final List<Integer> ALL_ERROR_STATUS_CODES;
private static final List<Integer> ERROR_NO_RETRY_STATUS_CODES = Arrays.asList(400, 401, 403, 404, 405, 500);
private static final List<Integer> ERROR_RETRY_STATUS_CODES = Arrays.asList(502, 503, 504);
static {
ALL_ERROR_STATUS_CODES = new ArrayList<>(ERROR_RETRY_STATUS_CODES);
ALL_ERROR_STATUS_CODES.addAll(ERROR_NO_RETRY_STATUS_CODES);
ALL_STATUS_CODES = new ArrayList<>(ALL_ERROR_STATUS_CODES);
ALL_STATUS_CODES.addAll(OK_STATUS_CODES);
}
private RestClientTestUtil() {
}
static String[] getHttpMethods() {
return HTTP_METHODS;
}
static String randomHttpMethod(Random random) {
return RandomPicks.randomFrom(random, HTTP_METHODS);
}
static int randomStatusCode(Random random) {
return RandomPicks.randomFrom(random, ALL_ERROR_STATUS_CODES);
}
static int randomOkStatusCode(Random random) {
return RandomPicks.randomFrom(random, OK_STATUS_CODES);
}
static int randomErrorNoRetryStatusCode(Random random) {
return RandomPicks.randomFrom(random, ERROR_NO_RETRY_STATUS_CODES);
}
static int randomErrorRetryStatusCode(Random random) {
return RandomPicks.randomFrom(random, ERROR_RETRY_STATUS_CODES);
}
static List<Integer> getOkStatusCodes() {
return OK_STATUS_CODES;
}
static List<Integer> getAllErrorStatusCodes() {
return ALL_ERROR_STATUS_CODES;
}
static List<Integer> getAllStatusCodes() {
return ALL_STATUS_CODES;
}
}
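A brief sketch of how these helpers would typically be called from a test; the Random instance is illustrative:

Random random = new Random();
String method = RestClientTestUtil.randomHttpMethod(random);            // e.g. "GET"
int retryCode = RestClientTestUtil.randomErrorRetryStatusCode(random);  // one of 502, 503, 504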


@@ -1,235 +0,0 @@
h1. Elasticsearch
h2. A Distributed RESTful Search Engine
h3. "https://www.elastic.co/products/elasticsearch":https://www.elastic.co/products/elasticsearch
Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:
* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations are performed on any of the replica shards.
* Multi Tenant with Multi Types.
** Support for more than one index.
** Support for more than one type per index.
** Index level configuration (number of shards, index storage, ...).
* A varied set of APIs
** HTTP RESTful API
** Native Java API.
** All APIs perform automatic node operation rerouting.
* Document oriented
** No need for upfront schema definition.
** Schema can be defined per type for customization of the indexing process.
* Reliable, Asynchronous Write Behind for long term persistency.
* (Near) Real Time Search.
* Built on top of Lucene
** Each shard is a fully functional Lucene index
** All the power of Lucene easily exposed through simple configuration / plugins.
* Per operation consistency
** Single document level operations are atomic, consistent, isolated and durable.
* Open Source under the Apache License, version 2 ("ALv2")
h2. Getting Started
First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.
h3. Requirements
You need to have a recent version of Java installed. See the "Setup":http://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version page for more information.
h3. Installation
* "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
* Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ...
h3. Indexing
Let's try and index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
Now, let's see if the information was added by GETting it:
<pre>
curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
</pre>
h3. Searching
Mmm search..., shouldn't it be elastic?
Let's find all the tweets that @kimchy@ posted:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
</pre>
We can also use the JSON query language Elasticsearch provides instead of a query string:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
{
"query" : {
"match" : { "user": "kimchy" }
}
}'
</pre>
Just for kicks, let's get all the documents stored (we should see the user as well):
<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
We can also do a range search (the @postDate@ was automatically identified as a date):
<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"range" : {
"postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
}
}
}'
</pre>
There are many more options to perform search, after all, it's a search product no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.
h3. Multi Tenant - Indices and Types
Maan, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.
Elasticsearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:
<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
The above will index information into the @kimchy@ index, with two types, @info@ and @tweet@. Each user will get his own special index.
Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
{
"index" : {
"numberOfShards" : 1,
"numberOfReplicas" : 1
}
}'
</pre>
Search (and similar operations) are multi-index aware. This means that we can easily search on more than one
index (twitter user), for example:
<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
Or on all the indices:
<pre>
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
{One liner teaser}: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from friends of my friends).
h3. Distributed, Highly Available
Let's face it, things will fail....
Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (improve search performance), or 20/1 (improve indexing performance, with search executed in a map reduce fashion across shards).
In order to play with the distributed nature of Elasticsearch, simply bring more nodes up and shut down nodes. The system will continue to serve requests (make sure you use the correct http port) with the latest data indexed.
h3. Where to go from here?
We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website.
h3. Building from Source
Elasticsearch uses "Maven":http://maven.apache.org for its build system.
In order to create a distribution, simply run the @mvn clean package
-DskipTests@ command in the cloned directory.
The distribution will be created under @target/releases@.
See the "TESTING":TESTING.asciidoc file for more information about
running the Elasticsearch test suite.
h3. Upgrading to Elasticsearch 1.x?
In order to ensure a smooth upgrade process from earlier versions of Elasticsearch (< 1.0.0), it is recommended to perform a full cluster restart. Please see the "setup reference":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.
h1. License
<pre>
This software is licensed under the Apache License, version 2 ("ALv2"), quoted below.
Copyright 2009-2016 Elasticsearch <https://www.elastic.co>
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
</pre>


@@ -24,6 +24,16 @@ import org.elasticsearch.gradle.BuildPlugin
 apply plugin: 'elasticsearch.build'
 apply plugin: 'com.bmuschko.nexus'
 apply plugin: 'nebula.optional-base'
+apply plugin: 'nebula.maven-base-publish'
+apply plugin: 'nebula.maven-scm'
+
+publishing {
+  publications {
+    nebula {
+      artifactId 'elasticsearch'
+    }
+  }
+}
 
 archivesBaseName = 'elasticsearch'
@@ -46,14 +56,14 @@ dependencies {
   compile "org.apache.lucene:lucene-spatial3d:${versions.lucene}"
   compile "org.apache.lucene:lucene-suggest:${versions.lucene}"
 
-  compile 'org.elasticsearch:securesm:1.0'
+  compile 'org.elasticsearch:securesm:1.1'
 
   // utilities
-  compile 'net.sf.jopt-simple:jopt-simple:4.9'
+  compile 'net.sf.jopt-simple:jopt-simple:5.0.2'
   compile 'com.carrotsearch:hppc:0.7.1'
 
   // time handling, remove with java 8 time
-  compile 'joda-time:joda-time:2.8.2'
+  compile 'joda-time:joda-time:2.9.4'
   // joda 2.0 moved to using volatile fields for datetime
   // When updating to a new version, make sure to update our copy of BaseDateTime
   compile 'org.joda:joda-convert:1.2'
@@ -65,7 +75,7 @@ dependencies {
   compile "com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:${versions.jackson}"
 
   // network stack
-  compile 'io.netty:netty:3.10.5.Final'
+  compile 'io.netty:netty:3.10.6.Final'
   // percentiles aggregation
   compile 'com.tdunning:t-digest:3.0'
   // percentile ranks aggregation
@@ -79,7 +89,7 @@ dependencies {
   compile "log4j:log4j:${versions.log4j}", optional
   compile "log4j:apache-log4j-extras:${versions.log4j}", optional
 
-  compile "net.java.dev.jna:jna:${versions.jna}", optional
+  compile "net.java.dev.jna:jna:${versions.jna}"
 
   if (isEclipse == false || project.path == ":core-tests") {
     testCompile("org.elasticsearch.test:framework:${version}") {
@@ -111,6 +121,36 @@ forbiddenPatterns {
   exclude '**/org/elasticsearch/cluster/routing/shard_routes.txt'
 }
 
+task generateModulesList {
+  List<String> modules = project(':modules').subprojects.collect { it.name }
+  File modulesFile = new File(buildDir, 'generated-resources/modules.txt')
+  processResources.from(modulesFile)
+  inputs.property('modules', modules)
+  outputs.file(modulesFile)
+  doLast {
+    modulesFile.parentFile.mkdirs()
+    modulesFile.setText(modules.join('\n'), 'UTF-8')
+  }
+}
+
+task generatePluginsList {
+  List<String> plugins = project(':plugins').subprojects
+    .findAll { it.name.contains('example') == false }
+    .collect { it.name }
+  File pluginsFile = new File(buildDir, 'generated-resources/plugins.txt')
+  processResources.from(pluginsFile)
+  inputs.property('plugins', plugins)
+  outputs.file(pluginsFile)
+  doLast {
+    pluginsFile.parentFile.mkdirs()
+    pluginsFile.setText(plugins.join('\n'), 'UTF-8')
+  }
+}
+
+processResources {
+  dependsOn generateModulesList, generatePluginsList
+}
+
 thirdPartyAudit.excludes = [
   // uses internal java api: sun.security.x509 (X509CertInfo, X509CertImpl, X500Name)
   'org.jboss.netty.handler.ssl.util.OpenJdkSelfSignedCertGenerator',


@@ -17,19 +17,21 @@
  * under the License.
  */
 
-package org.elasticsearch.search.query;
+package org.apache.log4j;
 
-import org.elasticsearch.common.xcontent.XContentParser;
-import org.elasticsearch.search.SearchParseElement;
-import org.elasticsearch.search.internal.SearchContext;
+import org.apache.log4j.helpers.ThreadLocalMap;
 
 /**
+ * Log4j 1.2 MDC breaks because it parses java.version incorrectly (does not handle new java9 versioning).
+ *
+ * This hack fixes up the pkg private members as if it had detected the java version correctly.
  */
-public class QueryParseElement implements SearchParseElement {
-
-    @Override
-    public void parse(XContentParser parser, SearchContext context) throws Exception {
-        context.parsedQuery(context.getQueryShardContext().parse(parser));
+public class Java9Hack {
+
+    public static void fixLog4j() {
+        if (MDC.mdc.tlm == null) {
+            MDC.mdc.java1 = false;
+            MDC.mdc.tlm = new ThreadLocalMap();
+        }
     }
 }
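The call site is not part of this excerpt; presumably the fix-up runs once, early in startup, before anything touches log4j's MDC on Java 9:

// Hypothetical bootstrap call, shown for illustration only.
Java9Hack.fixLog4j();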

Some files were not shown because too many files have changed in this diff.