Merge branch 'master' into pr/15724-gce-network-host-master

David Pilato 2016-06-03 17:22:24 +02:00
commit b9989f88d0
2049 changed files with 74086 additions and 46658 deletions


@ -1,10 +0,0 @@
language: java
jdk:
- openjdk7
env:
- ES_TEST_LOCAL=true
- ES_TEST_LOCAL=false
notifications:
email: false


@ -71,12 +71,47 @@ Once your changes and tests are ready to submit for review:
Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch.
Please adhere to the general guideline that you should never force push
to a publicly shared branch. Once you have opened your pull request, you
should consider your branch publicly shared. Instead of force pushing,
you can just add incremental commits; this is generally easier on your
reviewers. If you need to pick up changes from master, you can merge
master into your branch. A reviewer might ask you to rebase a
long-running pull request, in which case force pushing is okay for that
request. Note that squashing at the end of the review process should
also not be done; that can be done when the pull request is [integrated
via GitHub](https://github.com/blog/2141-squash-your-commits).
Contributing to the Elasticsearch codebase
------------------------------------------
**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)
Make sure you have [Gradle](http://gradle.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE: `gradle eclipse` then `File: Import: Existing Projects into Workspace`. Select the option `Search for nested projects`. Additionally you will want to ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini` accordingly to avoid GC overhead errors.
Make sure you have [Gradle](http://gradle.org) installed, as
Elasticsearch uses it as its build system.
Eclipse users can automatically configure their IDE: `gradle eclipse`
then `File: Import: Existing Projects into Workspace`. Select the
option `Search for nested projects`. Additionally you will want to
ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini`
accordingly to avoid GC overhead errors.
IntelliJ users can automatically configure their IDE: `gradle idea`
then `File->New Project From Existing Sources`. Point to the root of
the source directory, select
`Import project from external model->Gradle`, enable
`Use auto-import`.
The Elasticsearch codebase makes heavy use of Java `assert`s and the
test runner requires that assertions be enabled within the JVM. This
can be accomplished by passing the flag `-ea` to the JVM on startup.
For IntelliJ, go to
`Run->Edit Configurations...->Defaults->JUnit->VM options` and input
`-ea`.
For Eclipse, go to `Preferences->Java->Installed JREs` and add `-ea` to
`VM Arguments`.
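For a plain Gradle `test` task, the same flag can also be wired into the build script directly. A minimal sketch, illustrative only, since the Elasticsearch build enables assertions through its randomized testing plugin rather than like this:

```groovy
// Minimal sketch: pass -ea to the forked test JVM so Java asserts fire.
test {
    jvmArgs '-ea'
}
```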
Please follow these formatting guidelines:


@ -50,19 +50,19 @@ h3. Indexing
Let's try to index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
@ -101,7 +101,7 @@ Just for kicks, let's get all the documents stored (we should see the user as we
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@ -113,7 +113,7 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"range" : {
"postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
"post_date" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
}
}
}'
@ -130,19 +130,19 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:
<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
@ -152,11 +152,11 @@ The above will index information into the @kimchy@ index, with two types, @info@
Complete control is available at the index level. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
curl -XPUT http://localhost:9200/another_user?pretty -d '
{
"index" : {
"numberOfShards" : 1,
"numberOfReplicas" : 1
"number_of_shards" : 1,
"number_of_replicas" : 1
}
}'
</pre>
@ -168,7 +168,7 @@ index (twitter user), for example:
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@ -179,7 +179,7 @@ Or on all the indices:
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>


@ -18,24 +18,18 @@ gradle assemble
== Other test options
To enable or disable network transport, set `-Des.node.mode`.
To enable or disable network transport, set the `tests.es.node.mode` system property.
Use network transport:
------------------------------------
-Des.node.mode=network
-Dtests.es.node.mode=network
------------------------------------
Use local transport (default since 1.3):
-------------------------------------
-Des.node.mode=local
-------------------------------------
Alternatively, you can set the `ES_TEST_LOCAL` environment variable:
-------------------------------------
export ES_TEST_LOCAL=true && gradle test
-Dtests.es.node.mode=local
-------------------------------------
=== Running Elasticsearch from a checkout
@ -201,7 +195,7 @@ gradle test -Dtests.timeoutSuite=5000! ...
Change the logging level of ES (not gradle)
--------------------------------
gradle test -Des.logger.level=DEBUG
gradle test -Dtests.es.logger.level=DEBUG
--------------------------------
Print all the logging output from the test runs to the command line


@ -27,6 +27,31 @@ import org.apache.tools.ant.taskdefs.condition.Os
subprojects {
group = 'org.elasticsearch'
version = org.elasticsearch.gradle.VersionProperties.elasticsearch
description = "Elasticsearch subproject ${project.path}"
// we only use maven publish to add tasks for pom generation
plugins.withType(MavenPublishPlugin).whenPluginAdded {
publishing {
publications {
// add license information to generated poms
all {
pom.withXml { XmlProvider xml ->
Node node = xml.asNode()
node.appendNode('inceptionYear', '2009')
Node license = node.appendNode('licenses').appendNode('license')
license.appendNode('name', 'The Apache Software License, Version 2.0')
license.appendNode('url', 'http://www.apache.org/licenses/LICENSE-2.0.txt')
license.appendNode('distribution', 'repo')
Node developer = node.appendNode('developers').appendNode('developer')
developer.appendNode('name', 'Elastic')
developer.appendNode('url', 'http://www.elastic.co')
}
}
}
}
}
plugins.withType(NexusPlugin).whenPluginAdded {
modifyPom {
@ -56,7 +81,7 @@ subprojects {
nexus {
String buildSnapshot = System.getProperty('build.snapshot', 'true')
if (buildSnapshot == 'false') {
Repository repo = new RepositoryBuilder().findGitDir(new File('.')).build()
Repository repo = new RepositoryBuilder().findGitDir(project.rootDir).build()
String shortHash = repo.resolve('HEAD')?.name?.substring(0,7)
repositoryUrl = project.hasProperty('build.repository') ? project.property('build.repository') : "file://${System.getenv('HOME')}/elasticsearch-releases/${version}-${shortHash}/"
}
@ -119,6 +144,14 @@ subprojects {
// see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959
javadoc.options.encoding='UTF8'
javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet')
/*
TODO: building javadocs with java 9 b118 is currently broken with weird errors, so
for now this is commented out...try again with the next ea build...
javadoc.executable = new File(project.javaHome, 'bin/javadoc')
if (project.javaVersion == JavaVersion.VERSION_1_9) {
// TODO: remove this hack! gradle should be passing this...
javadoc.options.addStringOption('source', '8')
}*/
}
}
@ -224,7 +257,6 @@ allprojects {
idea {
project {
languageLevel = org.elasticsearch.gradle.BuildPlugin.minimumJava.toString()
vcs = 'Git'
}
}
@ -236,13 +268,6 @@ tasks.idea.doLast {
if (System.getProperty('idea.active') != null && ideaMarker.exists() == false) {
throw new GradleException('You must run gradle idea from the root of elasticsearch before importing into IntelliJ')
}
// add buildSrc itself as a groovy project
task buildSrcIdea(type: GradleBuild) {
buildFile = 'buildSrc/build.gradle'
tasks = ['cleanIdea', 'ideaModule']
}
tasks.idea.dependsOn(buildSrcIdea)
// eclipse configuration
allprojects {
@ -278,20 +303,14 @@ allprojects {
into '.settings'
}
// otherwise .settings is not nuked entirely
tasks.cleanEclipse {
task wipeEclipseSettings(type: Delete) {
delete '.settings'
}
tasks.cleanEclipse.dependsOn(wipeEclipseSettings)
// otherwise the eclipse merging is *super confusing*
tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)
}
// add buildSrc itself as a groovy project
task buildSrcEclipse(type: GradleBuild) {
buildFile = 'buildSrc/build.gradle'
tasks = ['cleanEclipse', 'eclipse']
}
tasks.eclipse.dependsOn(buildSrcEclipse)
// we need to add the same --debug-jvm option as
// the real RunTask has, so we can pass it through
class Run extends DefaultTask {

buildSrc/.gitignore (new file)

@ -0,0 +1 @@
build-bootstrap/


@ -1,5 +1,3 @@
import java.nio.file.Files
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
@ -19,25 +17,21 @@ import java.nio.file.Files
* under the License.
*/
// we must use buildscript + apply so that an external plugin
// can apply this file, since the plugins directive is not
// supported through file includes
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.bmuschko:gradle-nexus-plugin:2.3.1'
}
}
import java.nio.file.Files
apply plugin: 'groovy'
apply plugin: 'com.bmuschko.nexus'
// TODO: move common IDE configuration to a common file to include
apply plugin: 'idea'
apply plugin: 'eclipse'
group = 'org.elasticsearch.gradle'
archivesBaseName = 'build-tools'
if (project == rootProject) {
// change the build dir used during build init, so that doing a clean
// won't wipe out the buildscript jar
buildDir = 'build-bootstrap'
}
/*****************************************************************************
* Propagating version.properties to the rest of the build *
*****************************************************************************/
Properties props = new Properties()
props.load(project.file('version.properties').newDataInputStream())
@ -51,32 +45,6 @@ if (snapshot) {
props.put("elasticsearch", version);
}
repositories {
mavenCentral()
maven {
name 'sonatype-snapshots'
url "https://oss.sonatype.org/content/repositories/snapshots/"
}
jcenter()
}
dependencies {
compile gradleApi()
compile localGroovy()
compile "com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}"
compile("junit:junit:${props.getProperty('junit')}") {
transitive = false
}
compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.0'
compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
compile 'org.apache.rat:apache-rat:0.11'
}
File tempPropertiesFile = new File(project.buildDir, "version.properties")
task writeVersionProperties {
inputs.properties(props)
@ -96,31 +64,77 @@ processResources {
from tempPropertiesFile
}
extraArchive {
javadoc = false
tests = false
/*****************************************************************************
* Dependencies used by the entire build *
*****************************************************************************/
repositories {
jcenter()
}
idea {
module {
inheritOutputDirs = false
outputDir = file('build-idea/classes/main')
testOutputDir = file('build-idea/classes/test')
dependencies {
compile gradleApi()
compile localGroovy()
compile "com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}"
compile("junit:junit:${props.getProperty('junit')}") {
transitive = false
}
compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'
compile 'com.netflix.nebula:nebula-publishing-plugin:4.4.4'
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.1'
compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
compile 'org.apache.rat:apache-rat:0.11'
}
/*****************************************************************************
* Bootstrap repositories *
*****************************************************************************/
// this will only happen when buildSrc is built on its own during build init
if (project == rootProject) {
repositories {
mavenCentral()
maven {
name 'sonatype-snapshots'
url "https://oss.sonatype.org/content/repositories/snapshots/"
}
}
}
eclipse {
classpath {
defaultOutputDir = file('build-eclipse')
/*****************************************************************************
* Normal project checks *
*****************************************************************************/
// this happens when included as a normal project in the build, which we do
// to enforce precommit checks like forbidden apis, as well as setup publishing
if (project != rootProject) {
apply plugin: 'elasticsearch.build'
apply plugin: 'nebula.maven-base-publish'
apply plugin: 'nebula.maven-scm'
// groovydoc succeeds, but has some weird internal exception...
groovydoc.enabled = false
// build-tools is not ready for primetime with these...
dependencyLicenses.enabled = false
forbiddenApisMain.enabled = false
jarHell.enabled = false
loggerUsageCheck.enabled = false
thirdPartyAudit.enabled = false
// test for elasticsearch.build tries to run with ES...
test.enabled = false
// TODO: re-enable once randomizedtesting gradle code is published and removed from here
licenseHeaders.enabled = false
forbiddenPatterns {
exclude '**/*.wav'
// the file that actually defines nocommit
exclude '**/ForbiddenPatternsTask.groovy'
}
}
task copyEclipseSettings(type: Copy) {
from project.file('src/main/resources/eclipse.settings')
into '.settings'
}
// otherwise .settings is not nuked entirely
tasks.cleanEclipse {
delete '.settings'
}
tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)


@ -19,6 +19,7 @@
package org.elasticsearch.gradle
import nebula.plugin.extraconfigurations.ProvidedBasePlugin
import nebula.plugin.publishing.maven.MavenBasePublishPlugin
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.GradleException
import org.gradle.api.JavaVersion
@ -33,6 +34,8 @@ import org.gradle.api.artifacts.ProjectDependency
import org.gradle.api.artifacts.ResolvedArtifact
import org.gradle.api.artifacts.dsl.RepositoryHandler
import org.gradle.api.artifacts.maven.MavenPom
import org.gradle.api.publish.maven.MavenPublication
import org.gradle.api.publish.maven.tasks.GenerateMavenPom
import org.gradle.api.tasks.bundling.Jar
import org.gradle.api.tasks.compile.JavaCompile
import org.gradle.internal.jvm.Jvm
@ -54,7 +57,7 @@ class BuildPlugin implements Plugin<Project> {
project.pluginManager.apply('java')
project.pluginManager.apply('carrotsearch.randomized-testing')
// these plugins add lots of info to our jars
configureJarManifest(project) // jar config must be added before info broker
configureJars(project) // jar config must be added before info broker
project.pluginManager.apply('nebula.info-broker')
project.pluginManager.apply('nebula.info-basic')
project.pluginManager.apply('nebula.info-java')
@ -68,6 +71,7 @@ class BuildPlugin implements Plugin<Project> {
configureConfigurations(project)
project.ext.versions = VersionProperties.versions
configureCompile(project)
configurePomGeneration(project)
configureTest(project)
configurePrecommit(project)
@ -109,7 +113,7 @@ class BuildPlugin implements Plugin<Project> {
}
// enforce gradle version
GradleVersion minGradle = GradleVersion.version('2.8')
GradleVersion minGradle = GradleVersion.version('2.13')
if (GradleVersion.current() < minGradle) {
throw new GradleException("${minGradle} or above is required to build elasticsearch")
}
@ -139,7 +143,7 @@ class BuildPlugin implements Plugin<Project> {
}
project.rootProject.ext.javaHome = javaHome
project.rootProject.ext.javaVersion = javaVersion
project.rootProject.ext.javaVersion = javaVersionEnum
project.rootProject.ext.buildChecksDone = true
}
project.targetCompatibility = minimumJava
@ -228,7 +232,7 @@ class BuildPlugin implements Plugin<Project> {
*/
static void configureConfigurations(Project project) {
// we are not shipping these jars, we act like dumb consumers of these things
if (project.path.startsWith(':test:fixtures')) {
if (project.path.startsWith(':test:fixtures') || project.path == ':build-tools') {
return
}
// fail on any conflicting dependency versions
@ -266,44 +270,7 @@ class BuildPlugin implements Plugin<Project> {
// add exclusions to the pom directly, for each of the transitive deps of this project's deps
project.modifyPom { MavenPom pom ->
pom.withXml { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
if (depsNodes.isEmpty()) {
return
}
// check each dependency for any transitive deps
for (Node depNode : depsNodes.get(0).children()) {
String groupId = depNode.get('groupId').get(0).text()
String artifactId = depNode.get('artifactId').get(0).text()
String version = depNode.get('version').get(0).text()
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
if (configuration == null) {
continue // we did not make this dep non-transitive
}
Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts
if (artifacts.size() <= 1) {
// this dep has no transitive deps (or the only artifact is itself)
continue
}
// we now know we have something to exclude, so add the exclusion elements
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact transitiveArtifact : artifacts) {
ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id
if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {
continue; // don't exclude the dependency itself!
}
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', transitiveDep.group)
exclusion.appendNode('artifactId', transitiveDep.name)
}
}
}
pom.withXml(removeTransitiveDependencies(project))
}
}
@ -332,6 +299,70 @@ class BuildPlugin implements Plugin<Project> {
}
}
/** Returns a closure which can be used with a MavenPom for removing transitive dependencies. */
private static Closure removeTransitiveDependencies(Project project) {
// TODO: remove this when enforcing gradle 2.13+, it now properly handles exclusions
return { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
if (depsNodes.isEmpty()) {
return
}
// check each dependency for any transitive deps
for (Node depNode : depsNodes.get(0).children()) {
String groupId = depNode.get('groupId').get(0).text()
String artifactId = depNode.get('artifactId').get(0).text()
String version = depNode.get('version').get(0).text()
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
if (configuration == null) {
continue // we did not make this dep non-transitive
}
Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts
if (artifacts.size() <= 1) {
// this dep has no transitive deps (or the only artifact is itself)
continue
}
// we now know we have something to exclude, so add the exclusion elements
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact transitiveArtifact : artifacts) {
ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id
if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {
continue; // don't exclude the dependency itself!
}
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', transitiveDep.group)
exclusion.appendNode('artifactId', transitiveDep.name)
}
}
}
}
/** Configures generation of maven poms. */
public static void configurePomGeneration(Project project) {
project.plugins.withType(MavenBasePublishPlugin.class).whenPluginAdded {
project.publishing {
publications {
all { MavenPublication publication -> // we only deal with maven
// add exclusions to the pom directly, for each of the transitive deps of this project's deps
publication.pom.withXml(removeTransitiveDependencies(project))
}
}
}
project.tasks.withType(GenerateMavenPom.class) { GenerateMavenPom t ->
// place the pom next to the jar it is for
t.destination = new File(project.buildDir, "distributions/${project.archivesBaseName}-${project.version}.pom")
// build poms with assemble
project.assemble.dependsOn(t)
}
}
}
/** Adds compiler settings to the project */
static void configureCompile(Project project) {
project.ext.compactProfile = 'compact3'
@ -341,32 +372,40 @@ class BuildPlugin implements Plugin<Project> {
options.fork = true
options.forkOptions.executable = new File(project.javaHome, 'bin/javac')
options.forkOptions.memoryMaximumSize = "1g"
if (project.targetCompatibility >= JavaVersion.VERSION_1_8) {
// compile with compact 3 profile by default
// NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
if (project.compactProfile != 'full') {
options.compilerArgs << '-profile' << project.compactProfile
}
}
/*
* -path because gradle will send in paths that don't always exist.
* -missing because we have tons of missing @returns and @param.
* -serial because we don't use java serialization.
*/
// don't even think about passing args with -J-xxx, oracle will ask you to submit a bug report :)
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial' << '-Xdoclint:all' << '-Xdoclint:-missing'
// compile with compact 3 profile by default
// NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
if (project.compactProfile != 'full') {
options.compilerArgs << '-profile' << project.compactProfile
}
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial,-options,-deprecation' << '-Xdoclint:all' << '-Xdoclint:-missing'
options.encoding = 'UTF-8'
//options.incremental = true
// gradle ignores target/source compatibility when it is "unnecessary", but since gradle itself runs
// in java 8 even when we compile with java 9, it incorrectly thinks it is unnecessary
assert minimumJava == JavaVersion.VERSION_1_8
options.compilerArgs << '-target' << '1.8' << '-source' << '1.8'
if (project.javaVersion == JavaVersion.VERSION_1_9) {
// hack until gradle supports java 9's new "-release" arg
assert minimumJava == JavaVersion.VERSION_1_8
options.compilerArgs << '-release' << '8'
project.sourceCompatibility = null
project.targetCompatibility = null
}
}
}
}
/** Adds additional manifest info to jars */
static void configureJarManifest(Project project) {
/** Adds additional manifest info to jars, and adds source and javadoc jars */
static void configureJars(Project project) {
project.tasks.withType(Jar) { Jar jarTask ->
// we put all our distributable files under distributions
jarTask.destinationDir = new File(project.buildDir, 'distributions')
// fixup the jar manifest
jarTask.doFirst {
boolean isSnapshot = VersionProperties.elasticsearch.endsWith("-SNAPSHOT");
String version = VersionProperties.elasticsearch;
@ -422,7 +461,7 @@ class BuildPlugin implements Plugin<Project> {
// default test sysprop values
systemProperty 'tests.ifNoTests', 'fail'
// TODO: remove setting logging level via system property
systemProperty 'es.logger.level', 'WARN'
systemProperty 'tests.logger.level', 'WARN'
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.getKey().startsWith('tests.') ||
property.getKey().startsWith('es.')) {


@ -26,14 +26,17 @@ import org.gradle.api.tasks.Exec
* A wrapper around gradle's Exec task to capture output and log on error.
*/
class LoggedExec extends Exec {
protected ByteArrayOutputStream output = new ByteArrayOutputStream()
LoggedExec() {
if (logger.isInfoEnabled() == false) {
standardOutput = new ByteArrayOutputStream()
errorOutput = standardOutput
standardOutput = output
errorOutput = output
ignoreExitValue = true
doLast {
if (execResult.exitValue != 0) {
standardOutput.toString('UTF-8').eachLine { line -> logger.error(line) }
output.toString('UTF-8').eachLine { line -> logger.error(line) }
throw new GradleException("Process '${executable} ${args.join(' ')}' finished with non-zero exit value ${execResult.exitValue}")
}
}
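A hypothetical usage sketch (the task name and command are invented): because `LoggedExec` behaves like a regular `Exec` task, a build script could declare:

```groovy
// Hypothetical task: output is captured silently and only replayed
// through logger.error if the command exits non-zero.
task listDistributions(type: org.elasticsearch.gradle.LoggedExec) {
    executable = 'ls'
    args '-l', 'build/distributions'
}
```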


@ -0,0 +1,65 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.elasticsearch.gradle.test.RestTestPlugin
import org.gradle.api.Project
import org.gradle.api.Task
/**
* Sets up tests for documentation.
*/
public class DocsTestPlugin extends RestTestPlugin {
@Override
public void apply(Project project) {
super.apply(project)
Task listSnippets = project.tasks.create('listSnippets', SnippetsTask)
listSnippets.group 'Docs'
listSnippets.description 'List each snippet'
listSnippets.perSnippet { println(it.toString()) }
Task listConsoleCandidates = project.tasks.create(
'listConsoleCandidates', SnippetsTask)
listConsoleCandidates.group 'Docs'
listConsoleCandidates.description
'List snippets that probably should be marked // CONSOLE'
listConsoleCandidates.perSnippet {
if (
it.console // Already marked, nothing to do
|| it.testResponse // It is a response
) {
return
}
List<String> languages = [
// These languages should almost always be marked console
'js', 'json',
// These are often curl commands that should be converted but
// are probably false positives
'sh', 'shell',
]
if (false == languages.contains(it.language)) {
return
}
println(it.toString())
}
project.tasks.create('buildRestTests', RestTestsFromSnippetsTask)
}
}


@ -0,0 +1,219 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.elasticsearch.gradle.doc.SnippetsTask.Snippet
import org.gradle.api.InvalidUserDataException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.OutputDirectory
import java.nio.file.Files
import java.nio.file.Path
import java.util.regex.Matcher
/**
* Generates REST tests for each snippet marked // TEST.
*/
public class RestTestsFromSnippetsTask extends SnippetsTask {
@Input
Map<String, String> setups = new HashMap()
/**
* Root directory of the tests being generated. To make rest tests happy
* we generate them in a testRoot() which is contained in this directory.
*/
@OutputDirectory
File testRoot = project.file('build/rest')
public RestTestsFromSnippetsTask() {
project.afterEvaluate {
// Wait to set this so testRoot can be customized
project.sourceSets.test.output.dir(testRoot, builtBy: this)
}
TestBuilder builder = new TestBuilder()
doFirst { outputRoot().delete() }
perSnippet builder.&handleSnippet
doLast builder.&finishLastTest
}
/**
* Root directory containing all the files generated by this task. It is
* contained within testRoot.
*/
File outputRoot() {
return new File(testRoot, '/rest-api-spec/test')
}
private class TestBuilder {
private static final String SYNTAX = {
String method = /(?<method>GET|PUT|POST|HEAD|OPTIONS|DELETE)/
String pathAndQuery = /(?<pathAndQuery>[^\n]+)/
String badBody = /GET|PUT|POST|HEAD|OPTIONS|DELETE|#/
String body = /(?<body>(?:\n(?!$badBody)[^\n]+)+)/
String nonComment = /$method\s+$pathAndQuery$body?/
String comment = /(?<comment>#.+)/
/(?:$comment|$nonComment)\n+/
}()
/**
* The file in which we saw the last snippet that made a test.
*/
Path lastDocsPath
/**
* The file we're building.
*/
PrintWriter current
/**
* Called each time a snippet is encountered. Tracks the snippets and
* calls buildTest to actually build the test.
*/
void handleSnippet(Snippet snippet) {
if (snippet.language == 'json') {
throw new InvalidUserDataException(
"$snippet: Use `js` instead of `json`.")
}
if (snippet.testSetup) {
setup(snippet)
return
}
if (snippet.testResponse) {
response(snippet)
return
}
if (snippet.test || snippet.console) {
test(snippet)
return
}
// Must be an unmarked snippet....
}
private void test(Snippet test) {
setupCurrent(test)
if (false == test.continued) {
current.println('---')
current.println("\"$test.start\":")
}
if (test.skipTest) {
current.println(" - skip:")
current.println(" features: always_skip")
current.println(" reason: $test.skipTest")
}
if (test.setup != null) {
String setup = setups[test.setup]
if (setup == null) {
throw new InvalidUserDataException("Couldn't find setup "
+ "for $test")
}
current.println(setup)
}
body(test)
}
private void response(Snippet response) {
current.println(" - response_body: |")
response.contents.eachLine { current.println(" $it") }
}
void emitDo(String method, String pathAndQuery,
String body, String catchPart) {
def (String path, String query) = pathAndQuery.tokenize('?')
current.println(" - do:")
if (catchPart != null) {
current.println(" catch: $catchPart")
}
current.println(" raw:")
current.println(" method: $method")
current.println(" path: \"$path\"")
if (query != null) {
for (String param: query.tokenize('&')) {
def (String name, String value) = param.tokenize('=')
if (value == null) {
value = ''
}
current.println(" $name: \"$value\"")
}
}
if (body != null) {
// Throw out the leading newline we get from parsing the body
body = body.substring(1)
current.println(" body: |")
body.eachLine { current.println(" $it") }
}
}
private void setup(Snippet setup) {
if (lastDocsPath == setup.path) {
throw new InvalidUserDataException("$setup: wasn't first")
}
setupCurrent(setup)
current.println('---')
current.println("setup:")
body(setup)
}
private void body(Snippet snippet) {
parse("$snippet", snippet.contents, SYNTAX) { matcher, last ->
if (matcher.group("comment") != null) {
// Comment
return
}
String method = matcher.group("method")
String pathAndQuery = matcher.group("pathAndQuery")
String body = matcher.group("body")
String catchPart = last ? snippet.catchPart : null
if (pathAndQuery.startsWith('/')) {
// Leading '/'s break the generated paths
pathAndQuery = pathAndQuery.substring(1)
}
emitDo(method, pathAndQuery, body, catchPart)
}
}
private PrintWriter setupCurrent(Snippet test) {
if (lastDocsPath == test.path) {
return
}
finishLastTest()
lastDocsPath = test.path
// Make the destination file:
// Shift the path into the destination directory tree
Path dest = outputRoot().toPath().resolve(test.path)
// Replace the extension
String fileName = dest.getName(dest.nameCount - 1)
dest = dest.parent.resolve(fileName.replace('.asciidoc', '.yaml'))
// Now setup the writer
Files.createDirectories(dest.parent)
current = dest.newPrintWriter('UTF-8')
}
void finishLastTest() {
if (current != null) {
current.close()
current = null
}
}
}
}
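Because the task only reads `testRoot` in `afterEvaluate`, a consuming build can re-point it during configuration. A hedged sketch (the directory is invented; `buildRestTests` is the task name DocsTestPlugin registers above):

```groovy
// Hypothetical sketch: relocate the generated REST tests. Safe because
// the task wires testRoot into sourceSets.test.output only in afterEvaluate.
buildRestTests {
    testRoot = project.file("${buildDir}/custom-rest")
}
```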


@ -0,0 +1,308 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.gradle.api.DefaultTask
import org.gradle.api.InvalidUserDataException
import org.gradle.api.file.ConfigurableFileTree
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.TaskAction
import java.nio.file.Path
import java.util.regex.Matcher
/**
* A task which will run a closure on each snippet in the documentation.
*/
public class SnippetsTask extends DefaultTask {
private static final String SCHAR = /(?:\\\/|[^\/])/
private static final String SUBSTITUTION = /s\/($SCHAR+)\/($SCHAR*)\//
private static final String CATCH = /catch:\s*((?:\/[^\/]+\/)|[^ \]]+)/
private static final String SKIP = /skip:([^\]]+)/
private static final String SETUP = /setup:([^ \]]+)/
private static final String TEST_SYNTAX =
/(?:$CATCH|$SUBSTITUTION|$SKIP|(continued)|$SETUP) ?/
/**
* Action to take on each snippet. Called with a single parameter, an
* instance of Snippet.
*/
Closure perSnippet
/**
* The docs to scan. Defaults to every file in the directory except the
* build.gradle file because that is appropriate for Elasticsearch's docs
* directory.
*/
@InputFiles
ConfigurableFileTree docs = project.fileTree(project.projectDir) {
// No snippets in the build file
exclude 'build.gradle'
// That is where the snippets go, not where they come from!
exclude 'build'
}
@TaskAction
public void executeTask() {
/*
* Walks each line of each file, building snippets as it encounters
* the lines that make up the snippet.
*/
for (File file: docs) {
String lastLanguage
int lastLanguageLine
Snippet snippet = null
StringBuilder contents = null
List substitutions = null
Closure emit = {
snippet.contents = contents.toString()
contents = null
if (substitutions != null) {
substitutions.each { String pattern, String subst ->
/*
* $body is really common but it looks like a
* backreference so we just escape it here to make the
* tests cleaner.
*/
subst = subst.replace('$body', '\\$body')
// \n is a new line....
subst = subst.replace('\\n', '\n')
snippet.contents = snippet.contents.replaceAll(
pattern, subst)
}
substitutions = null
}
perSnippet(snippet)
snippet = null
}
file.eachLine('UTF-8') { String line, int lineNumber ->
Matcher matcher
if (line ==~ /-{4,}\s*/) { // Four dashes looks like a snippet
if (snippet == null) {
Path path = docs.dir.toPath().relativize(file.toPath())
snippet = new Snippet(path: path, start: lineNumber)
if (lastLanguageLine == lineNumber - 1) {
snippet.language = lastLanguage
}
} else {
snippet.end = lineNumber
}
return
}
matcher = line =~ /\[source,(\w+)]\s*/
if (matcher.matches()) {
lastLanguage = matcher.group(1)
lastLanguageLine = lineNumber
return
}
if (line ==~ /\/\/\s*AUTOSENSE\s*/) {
throw new InvalidUserDataException("AUTOSENSE has been " +
"replaced by CONSOLE. Use that instead at " +
"$file:$lineNumber")
}
if (line ==~ /\/\/\s*CONSOLE\s*/) {
if (snippet == null) {
throw new InvalidUserDataException("CONSOLE not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.console = true
return
}
matcher = line =~ /\/\/\s*TEST(\[(.+)\])?\s*/
if (matcher.matches()) {
if (snippet == null) {
throw new InvalidUserDataException("TEST not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.test = true
if (matcher.group(2) != null) {
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), TEST_SYNTAX) {
if (it.group(1) != null) {
snippet.catchPart = it.group(1)
return
}
if (it.group(2) != null) {
if (substitutions == null) {
substitutions = []
}
substitutions.add([it.group(2), it.group(3)])
return
}
if (it.group(4) != null) {
snippet.skipTest = it.group(4)
return
}
if (it.group(5) != null) {
snippet.continued = true
return
}
if (it.group(6) != null) {
snippet.setup = it.group(6)
return
}
throw new InvalidUserDataException(
"Invalid test marker: $line")
}
}
return
}
matcher = line =~ /\/\/\s*TESTRESPONSE(\[(.+)\])?\s*/
if (matcher.matches()) {
if (snippet == null) {
throw new InvalidUserDataException("TESTRESPONSE not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.testResponse = true
if (matcher.group(2) != null) {
if (substitutions == null) {
substitutions = []
}
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), /$SUBSTITUTION ?/) {
substitutions.add([it.group(1), it.group(2)])
}
}
return
}
if (line ==~ /\/\/\s*TESTSETUP\s*/) {
snippet.testSetup = true
return
}
if (snippet == null) {
// Outside
return
}
if (snippet.end == Snippet.NOT_FINISHED) {
// Inside
if (contents == null) {
contents = new StringBuilder()
}
// We don't need the annotations
line = line.replaceAll(/<\d+>/, '')
// Nor any trailing spaces
line = line.replaceAll(/\s+$/, '')
contents.append(line).append('\n')
return
}
// Just finished
emit()
}
if (snippet != null) emit()
}
}
static class Snippet {
static final int NOT_FINISHED = -1
/**
* Path to the file containing this snippet. Relative to docs.dir of the
* SnippetsTask that created it.
*/
Path path
int start
int end = NOT_FINISHED
String contents
boolean console = false
boolean test = false
boolean testResponse = false
boolean testSetup = false
String skipTest = null
boolean continued = false
String language = null
String catchPart = null
String setup = null
@Override
public String toString() {
String result = "$path[$start:$end]"
if (language != null) {
result += "($language)"
}
if (console) {
result += '// CONSOLE'
}
if (test) {
result += '// TEST'
if (catchPart) {
result += "[catch: $catchPart]"
}
if (skipTest) {
result += "[skip=$skipTest]"
}
if (continued) {
result += '[continued]'
}
if (setup) {
result += "[setup:$setup]"
}
}
if (testResponse) {
result += '// TESTRESPONSE'
}
if (testSetup) {
result += '// TESTSETUP'
}
return result
}
}
/**
* Repeatedly match the pattern to the string, calling the closure with the
* matchers each time there is a match. If there are characters that don't
* match then blow up. If the closure takes two parameters then the second
* one is "is this the last match?".
*/
protected parse(String location, String s, String pattern, Closure c) {
if (s == null) {
return // Silly null, only real stuff gets to match!
}
Matcher m = s =~ pattern
int offset = 0
Closure extraContent = { message ->
StringBuilder cutOut = new StringBuilder()
cutOut.append(s[offset - 6..offset - 1])
cutOut.append('*')
cutOut.append(s[offset..Math.min(offset + 5, s.length() - 1)])
String cutOutNoNl = cutOut.toString().replace('\n', '\\n')
throw new InvalidUserDataException("$location: Extra content "
+ "$message ('$cutOutNoNl') matching [$pattern]: $s")
}
while (m.find()) {
if (m.start() != offset) {
extraContent("between [$offset] and [${m.start()}]")
}
offset = m.end()
if (c.maximumNumberOfParameters == 1) {
c(m)
} else {
c(m, offset == s.length())
}
}
if (offset == 0) {
throw new InvalidUserDataException("$location: Didn't match "
+ "$pattern: $s")
}
if (offset != s.length()) {
extraContent("after [$offset]")
}
}
}
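To make the `perSnippet` hook concrete, here is a hypothetical sketch (the task name and filter are invented) of a task that lists only snippets carrying a `[source,<lang>]` tag:

```groovy
// Hypothetical sketch: print only snippets that declared a language,
// mirroring what DocsTestPlugin does for its listSnippets task.
task listTaggedSnippets(type: org.elasticsearch.gradle.doc.SnippetsTask) {
    group 'Docs'
    description 'List each snippet that has a language tag'
    perSnippet { snippet ->
        if (snippet.language != null) {
            println(snippet.toString())
        }
    }
}
```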


@ -18,11 +18,13 @@
*/
package org.elasticsearch.gradle.plugin
import nebula.plugin.publishing.maven.MavenBasePublishPlugin
import nebula.plugin.publishing.maven.MavenManifestPlugin
import nebula.plugin.publishing.maven.MavenScmPlugin
import org.elasticsearch.gradle.BuildPlugin
import org.elasticsearch.gradle.test.RestIntegTestTask
import org.elasticsearch.gradle.test.RunTask
import org.gradle.api.Project
import org.gradle.api.artifacts.Dependency
import org.gradle.api.tasks.SourceSet
import org.gradle.api.tasks.bundling.Zip
@ -50,6 +52,7 @@ public class PluginBuildPlugin extends BuildPlugin {
} else {
project.integTest.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)
project.tasks.run.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)
addPomGeneration(project)
}
project.namingConventions {
@ -125,4 +128,32 @@ public class PluginBuildPlugin extends BuildPlugin {
project.configurations.getByName('default').extendsFrom = []
project.artifacts.add('default', bundle)
}
/**
* Adds the plugin jar and zip as publications.
*/
protected static void addPomGeneration(Project project) {
project.plugins.apply(MavenBasePublishPlugin.class)
project.plugins.apply(MavenScmPlugin.class)
project.publishing {
publications {
nebula {
artifact project.bundlePlugin
pom.withXml {
// overwrite the name/description in the pom nebula set up
Node root = asNode()
for (Node node : root.children()) {
if (node.name() == 'name') {
node.setValue(project.pluginProperties.extension.name)
} else if (node.name() == 'description') {
node.setValue(project.pluginProperties.extension.description)
}
}
}
}
}
}
}
}


@ -62,9 +62,8 @@ class PrecommitTasks {
private static Task configureForbiddenApis(Project project) {
project.pluginManager.apply(ForbiddenApisPlugin.class)
project.forbiddenApis {
internalRuntimeForbidden = true
failOnUnsupportedJava = false
bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-system-out']
bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-non-portable', 'jdk-system-out']
signaturesURLs = [getClass().getResource('/forbidden/jdk-signatures.txt'),
getClass().getResource('/forbidden/es-all-signatures.txt')]
suppressAnnotations = ['**.SuppressForbidden']


@ -203,8 +203,7 @@ public class ThirdPartyAuditTask extends AntTask {
Set<String> sheistySet = getSheistyClasses(tmpDir.toPath());
try {
ant.thirdPartyAudit(internalRuntimeForbidden: false,
failOnUnsupportedJava: false,
ant.thirdPartyAudit(failOnUnsupportedJava: false,
failOnMissingClasses: false,
signaturesFile: new File(getClass().getResource('/forbidden/third-party-audit.txt').toURI()),
classpath: classpath.asPath) {


@ -418,8 +418,7 @@ class ClusterFormationTasks {
// argument are wrapped in an ExecArgWrapper that escapes commas
args execArgs.collect { a -> new EscapeCommaWrapper(arg: a) }
} else {
executable 'sh'
args execArgs
commandLine execArgs
}
}
}


@ -129,18 +129,18 @@ class NodeInfo {
}
env = [ 'JAVA_HOME' : project.javaHome ]
args.addAll("-E", "es.node.portsfile=true")
args.addAll("-E", "node.portsfile=true")
String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs
env.put('ES_JAVA_OPTS', esJavaOpts)
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.getKey().startsWith('es.')) {
if (property.key.startsWith('tests.es.')) {
args.add("-E")
args.add("${property.getKey()}=${property.getValue()}")
args.add("${property.key.substring('tests.es.'.size())}=${property.value}")
}
}
env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options'))
args.addAll("-E", "es.path.conf=${confDir}")
args.addAll("-E", "path.conf=${confDir}")
if (Os.isFamily(Os.FAMILY_WINDOWS)) {
args.add('"') // end the entire command, quoted
}


@ -19,6 +19,7 @@
package org.elasticsearch.gradle.vagrant
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction
import org.gradle.logging.ProgressLoggerFactory
import org.gradle.process.internal.ExecAction
@ -30,41 +31,22 @@ import javax.inject.Inject
* Runs bats over vagrant. Pretty much like running it using Exec but with a
* nicer output formatter.
*/
class BatsOverVagrantTask extends DefaultTask {
String command
String boxName
ExecAction execAction
public class BatsOverVagrantTask extends VagrantCommandTask {
BatsOverVagrantTask() {
execAction = getExecActionFactory().newExecAction()
}
@Input
String command
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
BatsOverVagrantTask() {
project.afterEvaluate {
args 'ssh', boxName, '--command', command
}
}
@Inject
ExecActionFactory getExecActionFactory() {
throw new UnsupportedOperationException();
}
void boxName(String boxName) {
this.boxName = boxName
}
void command(String command) {
this.command = command
}
@TaskAction
void exec() {
// It'd be nice if --machine-readable were, well, nice
execAction.commandLine(['vagrant', 'ssh', boxName, '--command', command])
execAction.setStandardOutput(new TapLoggerOutputStream(
command: command,
factory: getProgressLoggerFactory(),
logger: logger))
execAction.execute();
}
@Override
protected OutputStream createLoggerOutputStream() {
return new TapLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
logger: logger)
}
}
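A hypothetical usage sketch (the box name and bats path are invented; `boxName` comes from the parent `VagrantCommandTask` shown later in this commit, `command` from this class):

```groovy
// Hypothetical sketch: effectively runs
//   vagrant ssh ubuntu-1404 --command 'sudo bats /vagrant/qa/*.bats'
// and renders the TAP stream through TapLoggerOutputStream.
task batsSmokeTest(type: org.elasticsearch.gradle.vagrant.BatsOverVagrantTask) {
    boxName = 'ubuntu-1404'
    command = 'sudo bats /vagrant/qa/*.bats'
}
```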


@ -19,9 +19,11 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
import groovy.transform.PackageScope
import org.gradle.api.GradleScriptException
import org.gradle.api.logging.Logger
import org.gradle.logging.ProgressLogger
import org.gradle.logging.ProgressLoggerFactory
import java.util.regex.Matcher
@ -35,73 +37,77 @@ import java.util.regex.Matcher
* There is a Tap4j project but we can't use it because it wants to parse the
* entire TAP stream at once and won't parse it stream-wise.
*/
class TapLoggerOutputStream extends LoggingOutputStream {
ProgressLogger progressLogger
Logger logger
int testsCompleted = 0
int testsFailed = 0
int testsSkipped = 0
Integer testCount
String countsFormat
public class TapLoggerOutputStream extends LoggingOutputStream {
private final ProgressLogger progressLogger
private boolean isStarted = false
private final Logger logger
private int testsCompleted = 0
private int testsFailed = 0
private int testsSkipped = 0
private Integer testCount
private String countsFormat
TapLoggerOutputStream(Map args) {
logger = args.logger
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("TAP output for $args.command")
progressLogger.started()
progressLogger.progress("Starting $args.command...")
}
void flush() {
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// System.out.print "===> $line\n"
if (testCount == null) {
try {
testCount = line.split('\\.').last().toInteger()
def length = (testCount as String).length()
countsFormat = "%0${length}d"
countsFormat = "[$countsFormat|$countsFormat|$countsFormat/$countsFormat]"
return
} catch (Exception e) {
throw new GradleScriptException(
'Error parsing first line of TAP stream!!', e)
}
}
Matcher m = line =~ /(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
if (!m.matches()) {
/* These might be failure report lines or comments or whatever. It's hard
to tell and it doesn't matter. */
logger.warn(line)
return
}
boolean skipped = m.group('skip') != null
boolean success = !skipped && m.group('status') == 'ok'
String skipReason = m.group('skipReason')
String suiteName = m.group('suite')
String testName = m.group('test')
String status
if (skipped) {
status = "SKIPPED"
testsSkipped++
} else if (success) {
status = " OK"
testsCompleted++
} else {
status = " FAILED"
testsFailed++
TapLoggerOutputStream(Map args) {
logger = args.logger
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("TAP output for `${args.command}`")
}
String counts = sprintf(countsFormat,
[testsCompleted, testsFailed, testsSkipped, testCount])
progressLogger.progress("Tests $counts, $status [$suiteName] $testName")
if (!success) {
logger.warn(line)
@Override
public void flush() {
if (isStarted == false) {
progressLogger.started()
isStarted = true
}
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// System.out.print "===> $line\n"
if (testCount == null) {
try {
testCount = line.split('\\.').last().toInteger()
def length = (testCount as String).length()
countsFormat = "%0${length}d"
countsFormat = "[$countsFormat|$countsFormat|$countsFormat/$countsFormat]"
return
} catch (Exception e) {
throw new GradleScriptException(
'Error parsing first line of TAP stream!!', e)
}
}
Matcher m = line =~ /(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
if (!m.matches()) {
/* These might be failure report lines or comments or whatever. It's hard
to tell and it doesn't matter. */
logger.warn(line)
return
}
boolean skipped = m.group('skip') != null
boolean success = !skipped && m.group('status') == 'ok'
String skipReason = m.group('skipReason')
String suiteName = m.group('suite')
String testName = m.group('test')
String status
if (skipped) {
status = "SKIPPED"
testsSkipped++
} else if (success) {
status = " OK"
testsCompleted++
} else {
status = " FAILED"
testsFailed++
}
String counts = sprintf(countsFormat,
[testsCompleted, testsFailed, testsSkipped, testCount])
progressLogger.progress("Tests $counts, $status [$suiteName] $testName")
if (!success) {
logger.warn(line)
}
}
}
}


@ -18,11 +18,10 @@
*/
package org.elasticsearch.gradle.vagrant
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction
import org.apache.commons.io.output.TeeOutputStream
import org.elasticsearch.gradle.LoggedExec
import org.gradle.api.tasks.Input
import org.gradle.logging.ProgressLoggerFactory
import org.gradle.process.internal.ExecAction
import org.gradle.process.internal.ExecActionFactory
import javax.inject.Inject
@ -30,43 +29,30 @@ import javax.inject.Inject
* Runs a vagrant command. Pretty much like Exec task but with a nicer output
* formatter and defaults to `vagrant` as first part of commandLine.
*/
class VagrantCommandTask extends DefaultTask {
List<Object> commandLine
String boxName
ExecAction execAction
public class VagrantCommandTask extends LoggedExec {
VagrantCommandTask() {
execAction = getExecActionFactory().newExecAction()
}
@Input
String boxName
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
public VagrantCommandTask() {
executable = 'vagrant'
project.afterEvaluate {
// It'd be nice if --machine-readable were, well, nice
standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
}
}
@Inject
ExecActionFactory getExecActionFactory() {
throw new UnsupportedOperationException();
}
protected OutputStream createLoggerOutputStream() {
return new VagrantLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
/* Vagrant tends to output a lot of stuff, but most of the important
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: ")
}
void boxName(String boxName) {
this.boxName = boxName
}
void commandLine(Object... commandLine) {
this.commandLine = commandLine
}
@TaskAction
void exec() {
// It'd be nice if --machine-readable were, well, nice
execAction.commandLine(['vagrant'] + commandLine)
execAction.setStandardOutput(new VagrantLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
/* Vagrant tends to output a lot of stuff, but most of the important
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: "))
execAction.execute();
}
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
}
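A hypothetical usage sketch (the box name is invented): since the executable defaults to `vagrant`, only the subcommand arguments need declaring:

```groovy
// Hypothetical sketch: bring a box up; VagrantLoggerOutputStream squashes
// the noisy '==> ubuntu-1404: ' prefixed output into progress updates.
task vagrantUp(type: org.elasticsearch.gradle.vagrant.VagrantCommandTask) {
    boxName = 'ubuntu-1404'
    args 'up', 'ubuntu-1404'
}
```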


@ -19,7 +19,9 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
import org.gradle.api.logging.Logger
import org.gradle.logging.ProgressLogger
import org.gradle.logging.ProgressLoggerFactory
/**
* Adapts an OutputStream being written to by vagrant into a ProgressLogger. It
@ -42,79 +44,60 @@ import org.gradle.logging.ProgressLogger
* to catch so it can render the output like
* "Heading text > stdout from the provisioner".
*/
class VagrantLoggerOutputStream extends LoggingOutputStream {
    static final String HEADING_PREFIX = '==> '
public class VagrantLoggerOutputStream extends LoggingOutputStream {
    private static final String HEADING_PREFIX = '==> '
    ProgressLogger progressLogger
    String squashedPrefix
    String lastLine = ''
    boolean inProgressReport = false
    String heading = ''
    private final ProgressLogger progressLogger
    private boolean isStarted = false
    private String squashedPrefix
    private String lastLine = ''
    private boolean inProgressReport = false
    private String heading = ''
    VagrantLoggerOutputStream(Map args) {
        progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
        progressLogger.setDescription("Vagrant $args.command")
        progressLogger.started()
        progressLogger.progress("Starting vagrant $args.command...")
        squashedPrefix = args.squashedPrefix
    }
    void flush() {
        if (end == start) return
        line(new String(buffer, start, end - start))
        start = end
    }
    void line(String line) {
        // debugPrintLine(line) // Uncomment me to log every incoming line
        if (line.startsWith('\r\u001b')) {
            /* We don't want to try to be a full terminal emulator but we want to
               keep the escape sequences from leaking and catch _some_ of the
               meaning. */
            line = line.substring(2)
            if ('[K' == line) {
                inProgressReport = true
            }
            return
    VagrantLoggerOutputStream(Map args) {
        progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
        progressLogger.setDescription("Vagrant output for `$args.command`")
        squashedPrefix = args.squashedPrefix
    }
        if (line.startsWith(squashedPrefix)) {
            line = line.substring(squashedPrefix.length())
            inProgressReport = false
            lastLine = line
            if (line.startsWith(HEADING_PREFIX)) {
                line = line.substring(HEADING_PREFIX.length())
                heading = line + ' > '
            } else {
                line = heading + line
            }
        } else if (inProgressReport) {
            inProgressReport = false
            line = lastLine + line
        } else {
            return
        }
        // debugLogLine(line) // Uncomment me to log every line we add to the logger
        progressLogger.progress(line)
    }
    void debugPrintLine(line) {
        System.out.print '----------> '
        for (int i = start; i < end; i++) {
            switch (buffer[i] as char) {
                case ' '..'~':
                    System.out.print buffer[i] as char
                    break
                default:
                    System.out.print '%'
                    System.out.print Integer.toHexString(buffer[i])
            }
    @Override
    public void flush() {
        if (isStarted == false) {
            progressLogger.started()
            isStarted = true
        }
        if (end == start) return
        line(new String(buffer, start, end - start))
        start = end
    }
        System.out.print '\n'
    }
    void debugLogLine(line) {
        System.out.print '>>>>>>>>>>> '
        System.out.print line
        System.out.print '\n'
    }
    void line(String line) {
        if (line.startsWith('\r\u001b')) {
            /* We don't want to try to be a full terminal emulator but we want to
               keep the escape sequences from leaking and catch _some_ of the
               meaning. */
            line = line.substring(2)
            if ('[K' == line) {
                inProgressReport = true
            }
            return
        }
        if (line.startsWith(squashedPrefix)) {
            line = line.substring(squashedPrefix.length())
            inProgressReport = false
            lastLine = line
            if (line.startsWith(HEADING_PREFIX)) {
                line = line.substring(HEADING_PREFIX.length())
                heading = line + ' > '
            } else {
                line = heading + line
            }
        } else if (inProgressReport) {
            inProgressReport = false
            line = lastLine + line
        } else {
            return
        }
        progressLogger.progress(line)
    }
}
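To make the squashing rules concrete, here is a hypothetical standalone replay of the `line()` logic, detached from Gradle. The box name `trusty` and the sample lines are assumptions, and the real class hands each rendered line to a `ProgressLogger` rather than returning it:

// Standalone sketch mirroring line() above, minus the escape-sequence and
// in-progress-report handling.
String squashedPrefix = '==> trusty: '  // assumed box name
String heading = ''
def render = { String raw ->
    if (!raw.startsWith(squashedPrefix)) {
        return null  // swallowed, like the final else branch above
    }
    String line = raw.substring(squashedPrefix.length())
    if (line.startsWith('==> ')) {  // HEADING_PREFIX
        line = line.substring(4)
        heading = line + ' > '  // remembered and prepended to following lines
        return line
    }
    return heading + line
}
assert render('==> trusty: ==> Running provisioner') == 'Running provisioner'
assert render('==> trusty: stdout from the provisioner') ==
    'Running provisioner > stdout from the provisioner'
assert render('unrelated terminal noise') == null

This matches the javadoc's example of rendering "Heading text > stdout from the provisioner".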

View File

@ -0,0 +1,20 @@
#
# Licensed to Elasticsearch under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
implementation-class=org.elasticsearch.gradle.doc.DocsTestPlugin
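A properties file like this is a standard Gradle plugin descriptor: it lives under `src/main/resources/META-INF/gradle-plugins/`, its file name minus the `.properties` extension becomes the plugin id, and `implementation-class` names the class Gradle instantiates when the plugin is applied. A sketch of how a build script would consume it; the id below is an assumption, since the descriptor's file name is not visible in this view:

// Hypothetical usage; the real plugin id comes from the descriptor's file
// name, which this view does not show.
apply plugin: 'elasticsearch.docs-test'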

View File

@ -7,8 +7,8 @@
<!-- On Windows, Checkstyle matches files using \ path separator -->
<!-- These files are generated by ANTLR so its silly to hold them to our rules. -->
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]PainlessLexer\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]PainlessParser(|BaseVisitor|Visitor)\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]antlr[/\\]PainlessLexer\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]antlr[/\\]PainlessParser(|BaseVisitor|Visitor)\.java" checks="." />
<!-- Hopefully temporary suppression of LineLength on files that don't pass it. We should remove these when we the
files start to pass. -->
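These suppressions take effect through checkstyle's SuppressionFilter. A minimal sketch of how a suppressions file like this one is typically wired up from Gradle; the paths are assumptions, as the actual wiring is not part of this diff:

apply plugin: 'checkstyle'

checkstyle {
    // Hypothetical paths; checkstyle.xml would reference the suppressions
    // file through a SuppressionFilter whose file value is "${suppressions}".
    configFile = file('buildSrc/src/main/resources/checkstyle.xml')
    configProperties = [
        suppressions: file('buildSrc/src/main/resources/checkstyle_suppressions.xml')
    ]
}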
@ -21,7 +21,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ActionRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ReplicationResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]TransportClusterHealthAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]hotthreads[/\\]NodesHotThreadsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]hotthreads[/\\]TransportNodesHotThreadsAction.java" checks="LineLength" />
@ -38,8 +37,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]put[/\\]TransportPutRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]verify[/\\]TransportVerifyRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]verify[/\\]VerifyRepositoryRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]ClusterRerouteRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]ClusterRerouteRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]TransportClusterRerouteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsRequestBuilder.java" checks="LineLength" />
@ -61,7 +58,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]snapshots[/\\]status[/\\]TransportSnapshotsStatusAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]ClusterStateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]TransportClusterStateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsIndices.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsNodeResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]TransportClusterStatsAction.java" checks="LineLength" />
@ -178,21 +174,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestProxyActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]PutPipelineTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateExecutionService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolateRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateShardResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportMultiPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportShardMultiPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]MultiSearchRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchPhaseExecutionException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]ShardSearchFailure.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]TransportClearScrollAction.java" checks="LineLength" />
@ -201,10 +187,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]AutoCreateIndex.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]DelegatingActionListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]DestructiveOperations.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]HandledTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]IndicesOptions.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ThreadedActionListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ToXContentToBytes.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]BroadcastOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]BroadcastRequest.java" checks="LineLength" />
@ -216,13 +200,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeReadOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeReadAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]TransportClusterInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]nodes[/\\]NodesOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]nodes[/\\]TransportNodesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]ReplicationRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]ReplicationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]TransportBroadcastReplicationAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]instance[/\\]InstanceShardOperationRequestBuilder.java" checks="LineLength" />
@ -251,7 +233,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]JarHell.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]Seccomp.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]Security.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cache[/\\]recycler[/\\]PageCacheRecycler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]ElasticsearchClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]FilterClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]node[/\\]NodeClient.java" checks="LineLength" />
@ -267,7 +248,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]InternalClusterInfoService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]LocalNodeMasterListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]SnapshotsInProgress.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]MappingUpdatedAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]NodeIndexDeletedAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]NodeMappingRefreshAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]shard[/\\]ShardStateAction.java" checks="LineLength" />
@ -375,10 +355,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PendingClusterStatesQueue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]ESFileStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]Environment.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]NodeEnvironment.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]AsyncShardFetch.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]DanglingIndicesState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayMetaState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayService.java" checks="LineLength" />
@ -392,21 +369,16 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexingSlowLog.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergePolicyConfig.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergeSchedulerConfig.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]NodeServicesProvider.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]SearchSlowLog.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CommonGramsTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CustomAnalyzerProvider.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]EdgeNGramTokenizerFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]NumericDoubleAnalyzer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]ShingleTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]StemmerOverrideTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]StopTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]compound[/\\]HyphenationCompoundWordTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]cache[/\\]bitset[/\\]BitsetFilterCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]cache[/\\]request[/\\]ShardRequestCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]codec[/\\]PerFieldMappingPostingFormatCodec.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]engine[/\\]ElasticsearchConcurrentMergeScheduler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]engine[/\\]Engine.java" checks="LineLength" />
@ -420,16 +392,13 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]fieldcomparator[/\\]FloatValuesComparatorSource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]fieldcomparator[/\\]LongValuesComparatorSource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]GlobalOrdinalsBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]GlobalOrdinalsIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]InternalGlobalOrdinalsIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]MultiOrdinals.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]OrdinalsBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]SinglePackedOrdinals.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractAtomicParentChildFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractIndexGeoPointFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractIndexOrdinalsFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]BinaryDVIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]GeoPointArrayIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]PagedBytesIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]ParentChildIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]SortedNumericDVIndexFieldData.java" checks="LineLength" />
@ -476,14 +445,9 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]ObjectMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]RootObjectMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]merge[/\\]MergeStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]ExtractQueryTermsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorQueriesRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]AbstractQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MatchQueryParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MoreLikeThisQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryBuilders.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryShardContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryValidationException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]InnerHitsQueryParserHelper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]QueryParsers.java" checks="LineLength" />
@ -500,7 +464,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]ShardStateMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]StoreRecovery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]TranslogRecoveryPerformer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]similarity[/\\]SimilarityService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]snapshots[/\\]blobstore[/\\]BlobStoreIndexShardRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]snapshots[/\\]blobstore[/\\]BlobStoreIndexShardSnapshots.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]store[/\\]IndexStore.java" checks="LineLength" />
@ -530,31 +493,22 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoveryFailedException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySourceHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoveryState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]StartRecoveryRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]TransportNodesListShardStoreMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]ttl[/\\]IndicesTTLService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]IngestMetadata.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineExecutionService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]CompoundProcessor.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]IngestDocument.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]Pipeline.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]ConvertProcessor.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]fs[/\\]FsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]DeadlockAnalyzer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]GcNames.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]HotThreads.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmGcMonitorService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]os[/\\]OsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]process[/\\]ProcessService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]Node.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]internal[/\\]InternalSettingsPreparer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorQuery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]DummyPluginInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]RemovePluginCommand.java" checks="LineLength" />
@ -569,13 +523,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]fs[/\\]FsRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]uri[/\\]URLIndexShardRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]uri[/\\]URLRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BaseRestHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BytesRestResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestController.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]RestClusterHealthAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]info[/\\]RestNodesInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]stats[/\\]RestNodesStatsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]RestClusterRerouteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]RestClusterGetSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]RestClusterUpdateSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]RestClusterStateAction.java" checks="LineLength" />
@ -596,20 +548,15 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]bulk[/\\]RestBulkAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestCountAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestIndicesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodeAttrsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestPendingClusterTasksAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestShardsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestThreadPoolAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]get[/\\]RestMultiGetAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]index[/\\]RestIndexAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]main[/\\]RestMainAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]percolate[/\\]RestPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]script[/\\]RestDeleteIndexedScriptAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]script[/\\]RestPutIndexedScriptAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestClearScrollAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestMultiSearchAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestSearchAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestSearchScrollAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]suggest[/\\]RestSuggestAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]support[/\\]RestActions.java" checks="LineLength" />
@ -626,7 +573,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]MultiValueMode.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]SearchService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]AggregatorFactories.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]AggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]InternalAggregation.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]InternalMultiBucketAggregation.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]ValuesSourceAggregationBuilder.java" checks="LineLength" />
@ -639,10 +585,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]filters[/\\]FiltersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]filters[/\\]InternalFilters.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]geogrid[/\\]GeoHashGridAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]geogrid[/\\]GeoHashGridParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]global[/\\]GlobalAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]global[/\\]InternalGlobal.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]histogram[/\\]DateHistogramParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]histogram[/\\]HistogramAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]missing[/\\]InternalMissing.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]missing[/\\]MissingAggregator.java" checks="LineLength" />
@ -651,10 +595,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]nested[/\\]ReverseNestedAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]InternalRange.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]RangeAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]date[/\\]DateRangeParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]date[/\\]InternalDateRange.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]geodistance[/\\]GeoDistanceParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]geodistance[/\\]InternalGeoDistance.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]ipv4[/\\]InternalIPv4Range.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]sampler[/\\]DiversifiedBytesHashSamplerAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]sampler[/\\]DiversifiedMapSamplerAggregator.java" checks="LineLength" />
@ -665,15 +606,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]GlobalOrdinalsSignificantTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]InternalSignificantTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantLongTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantLongTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantStringTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantStringTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]UnmappedSignificantTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]GND.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]JLHScore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]NXYSignificanceHeuristic.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]PercentageScore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]ScriptHeuristic.java" checks="LineLength" />
@ -691,37 +628,23 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]UnmappedTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]support[/\\]IncludeExclude.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]ValuesSourceMetricsAggregationBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]CardinalityAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]CardinalityAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]HyperLogLogPlusPlus.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]geobounds[/\\]GeoBoundsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]geobounds[/\\]InternalGeoBounds.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]AbstractPercentilesParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]AbstractTDigestPercentilesAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]TDigestPercentileRanksAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]TDigestPercentilesAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]InternalScriptedMetric.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]ScriptedMetricAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]StatsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]ExtendedStatsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]ExtendedStatsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]InternalExtendedStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]tophits[/\\]TopHitsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]BucketHelpers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketmetrics[/\\]BucketMetricsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketmetrics[/\\]avg[/\\]AvgBucketPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketscript[/\\]BucketScriptPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]cumulativesum[/\\]CumulativeSumPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]derivative[/\\]DerivativePipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]derivative[/\\]InternalDerivative.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]having[/\\]BucketSelectorPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]AggregationContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]AggregationPath.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]GeoPointParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]ValuesSourceParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]format[/\\]ValueFormat.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]format[/\\]ValueParser.java" checks="LineLength" />
@ -736,10 +659,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]FetchSubPhaseParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]explain[/\\]ExplainFetchSubPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]fielddata[/\\]FieldDataFieldsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsFetchSubPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]script[/\\]ScriptFieldsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]source[/\\]FetchSourceContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]highlight[/\\]FastVectorHighlighter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]highlight[/\\]HighlightPhase.java" checks="LineLength" />
@ -764,7 +684,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]GeoDistanceSortParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]ScriptSortParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]SortParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestContextParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestUtils.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]completion[/\\]CompletionSuggestParser.java" checks="LineLength" />
@ -778,7 +697,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]phrase[/\\]WordScorer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]term[/\\]TermSuggestParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]RestoreService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotShardFailure.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotShardsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotsService.java" checks="LineLength" />
@ -788,7 +706,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ESExceptionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]NamingConventionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]VersionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ListenerActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]RejectionActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]HotThreadsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthResponsesTests.java" checks="LineLength" />
@ -805,19 +722,16 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreRequestIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]MetaDataIndexTemplateServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]upgrade[/\\]UpgradeIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkProcessorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]RetryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]MultiGetShardRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]BulkRequestModifierTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestProxyActionFilterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateDocumentSimpleResultTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateExecutionServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequestParsingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]WriteableIngestDocumentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolatorRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]MultiSearchRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchRequestBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]AutoCreateIndexTests.java" checks="LineLength" />
@ -844,7 +758,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterHealthIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterInfoServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateDiffIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]DiskUsageTests.java" checks="LineLength" />
@ -865,7 +778,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]allocation[/\\]SimpleAllocationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]ClusterIndexHealthTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]ClusterStateHealthTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]RoutingTableGenerator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]AutoExpandReplicasTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]DateMathExpressionResolverTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]HumanReadableIndexSettingsTests.java" checks="LineLength" />
@ -932,7 +844,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]geo[/\\]ShapeBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]hash[/\\]MessageDigestsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]inject[/\\]ModuleTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]io[/\\]stream[/\\]BytesStreamsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]index[/\\]FreqTermsEnumTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]uid[/\\]VersionsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]CidrsTests.java" checks="LineLength" />
@ -948,14 +859,11 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]builder[/\\]XContentBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]cbor[/\\]JsonVsCborTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]smile[/\\]JsonVsSmileTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]filtering[/\\]AbstractFilteringJsonGeneratorTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]filtering[/\\]FilterPathGeneratorFilteringTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]consistencylevel[/\\]WriteConsistencyLevelIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]deps[/\\]joda[/\\]SimpleJodaTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]deps[/\\]lucene[/\\]VectorHighlighterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]BlockingClusterStatePublishResponseHandlerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryWithServiceDisruptionsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenFaultDetectionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenUnicastDiscoveryIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]NodeJoinControllerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscoveryUnitTests.java" checks="LineLength" />
@ -1007,11 +915,8 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]BinaryDVFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]DuelFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]FieldDataCacheTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]FilterFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]IndexFieldDataServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]PagedBytesStringFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ParentChildFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]SortedSetDVStringFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]DocumentFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]DynamicMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]FieldTypeTestCase.java" checks="LineLength" />
@ -1062,8 +967,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]typelevels[/\\]ParseDocumentTypeLevelsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]update[/\\]UpdateMappingOnClusterIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]update[/\\]UpdateMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]AbstractQueryTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoolQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoostingQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]CommonTermsQueryBuilderTests.java" checks="LineLength" />
@ -1075,7 +978,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MultiMatchQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RandomQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RangeQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]ScoreModeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanMultiTermQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanNotQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]plugin[/\\]CustomQueryParserIT.java" checks="LineLength" />
@ -1106,7 +1008,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerSingleNodeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesOptionsIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltAnalyzerIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]AnalyzeActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]HunspellServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]exists[/\\]indices[/\\]IndicesExistsIT.java" checks="LineLength" />
@ -1136,12 +1037,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]template[/\\]SimpleIndexTemplateIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineExecutionServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineStoreTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]CompoundProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]IngestDocumentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]PipelineFactoryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]ValueSourceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]AbstractStringProcessorTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]AppendProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]DateFormatTests.java" checks="LineLength" />
@ -1155,9 +1050,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]os[/\\]OsProbeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]nodesinfo[/\\]NodeInfoStreamingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]options[/\\]detailederrors[/\\]DetailedErrorsEnabledIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorQueryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginInfoTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]recovery[/\\]FullRollingRestartIT.java" checks="LineLength" />
@ -1168,7 +1060,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BytesRestResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsRegexDefaultIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsRegexIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]NoOpClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestControllerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]util[/\\]RestUtilsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]routing[/\\]AliasResolveRoutingIT.java" checks="LineLength" />
@ -1280,13 +1171,11 @@
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]BulkTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]DoubleTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]EquivalenceTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]GeoDistanceTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HDRPercentileRanksTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HDRPercentilesTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HistogramTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IPv4RangeTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndexLookupTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndexedScriptTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndicesRequestTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]LongTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]MinDocCountTests.java" checks="LineLength" />
@ -1303,12 +1192,21 @@
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovySecurityTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustachePlugin.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]RenderSearchTemplateTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]SuggestSearchTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TemplateQueryParserTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TemplateQueryTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]package-info.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustacheScriptEngineTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustacheTests.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequest.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolateShardResponse.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportMultiPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportShardMultiPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]RestPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorIT.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorIT.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorRequestTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuCollationTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuFoldingTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuNormalizerTokenFilterFactory.java" checks="LineLength" />
@ -1319,22 +1217,12 @@
<suppress files="plugins[/\\]analysis-phonetic[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PhoneticTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-smartcn[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]SimpleSmartChineseAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-stempel[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PolishAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryRequest.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryRequestBuilder.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryResponse.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]TransportDeleteByQueryAction.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]IndexDeleteByQueryResponseTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]TransportDeleteByQueryActionTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugin[/\\]deletebyquery[/\\]DeleteByQueryTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]management[/\\]AzureComputeService.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]AbstractAzureTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureMinimumMasterNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureSimpleTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureTwoStartedNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]AwsEc2ServiceImpl.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]AbstractAwsTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ec2[/\\]AmazonEC2Mock.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-gce[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]gce[/\\]GceUnicastHostsProvider.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-gce[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]gce[/\\]GceNetworkTests.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessor.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessorFactoryTests.java" checks="LineLength" />
@ -1366,7 +1254,6 @@
<suppress files="plugins[/\\]mapper-size[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]size[/\\]SizeMappingTests.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]blobstore[/\\]AzureBlobContainer.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]blobstore[/\\]AzureBlobStore.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageService.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageServiceImpl.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageSettings.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]azure[/\\]AzureRepository.java" checks="LineLength" />
@ -1399,8 +1286,8 @@
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]TestShardRouting.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CliToolTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]MockBigArrays.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]MockSearchService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]script[/\\]NativeSignificanceScoreScriptWithParams.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]AbstractQueryTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]BackgroundIndexer.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]CompositeTestCluster.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]CorruptionUtils.java" checks="LineLength" />
@ -1423,12 +1310,10 @@
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]disruption[/\\]SlowClusterStateProcessing.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]AssertingSearcher.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]MockEngineSupport.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]MockInternalEngine.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]hamcrest[/\\]ElasticsearchAssertions.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]junit[/\\]listeners[/\\]LoggingListener.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]ESRestTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]RestTestExecutionContext.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]client[/\\]RestClient.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]client[/\\]http[/\\]HttpRequestBuilder.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]json[/\\]JsonPath.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]GreaterThanEqualToParser.java" checks="LineLength" />
@ -1452,7 +1337,6 @@
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]JsonPathTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]RestTestParserTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]test[/\\]InternalTestClusterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CliTool.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]indices[/\\]settings[/\\]RestGetSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]tribe[/\\]TribeService.java" checks="LineLength" />

View File

@ -1,2 +1,2 @@
#!/bin/sh -e
#!/bin/bash -e
<% commands.each {command -> %><%= command %><% } %>

View File

@ -1,2 +1,2 @@
#!/bin/sh -e
#!/bin/bash -e
<% commands.each {command -> %><%= command %><% } %>

View File

@ -21,5 +21,7 @@ com.carrotsearch.randomizedtesting.annotations.Repeat @ Don't commit hardcoded r
org.apache.lucene.codecs.Codec#setDefault(org.apache.lucene.codecs.Codec) @ Use the SuppressCodecs("*") annotation instead
org.apache.lucene.util.LuceneTestCase$Slow @ Don't write slow tests
org.junit.Ignore @ Use AwaitsFix instead
org.apache.lucene.util.LuceneTestCase$Nightly @ We don't run nightly tests at this point!
com.carrotsearch.randomizedtesting.annotations.Nightly @ We don't run nightly tests at this point!
org.junit.Test @defaultMessage Just name your test method testFooBar

View File

@ -1,5 +1,5 @@
elasticsearch = 5.0.0-alpha2
lucene = 6.0.0
elasticsearch = 5.0.0
lucene = 6.0.1
# optional dependencies
spatial4j = 0.6
@ -13,9 +13,7 @@ jna = 4.1.0
# test dependencies
randomizedrunner = 2.3.2
junit = 4.11
# TODO: Upgrade httpclient to a version > 4.5.1 once released. Then remove o.e.test.rest.client.StrictHostnameVerifier* and use
# DefaultHostnameVerifier instead since we no longer need to workaround https://issues.apache.org/jira/browse/HTTPCLIENT-1698
httpclient = 4.3.6
httpcore = 4.3.3
httpclient = 4.5.2
httpcore = 4.4.4
commonslogging = 1.1.3
commonscodec = 1.10

View File

@ -1,235 +0,0 @@
h1. Elasticsearch
h2. A Distributed RESTful Search Engine
h3. "https://www.elastic.co/products/elasticsearch":https://www.elastic.co/products/elasticsearch
Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:
* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations can be performed on any of the replica shards.
* Multi Tenant with Multi Types.
** Support for more than one index.
** Support for more than one type per index.
** Index level configuration (number of shards, index storage, ...).
* A rich set of APIs
** HTTP RESTful API
** Native Java API.
** All APIs perform automatic node operation rerouting.
* Document oriented
** No need for upfront schema definition.
** Schema can be defined per type for customization of the indexing process.
* Reliable, Asynchronous Write Behind for long-term persistence.
* (Near) Real Time Search.
* Built on top of Lucene
** Each shard is a fully functional Lucene index
** All the power of Lucene easily exposed through simple configuration / plugins.
* Per operation consistency
** Single document level operations are atomic, consistent, isolated and durable.
* Open Source under the Apache License, version 2 ("ALv2")
h2. Getting Started
First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.
h3. Requirements
You need to have a recent version of Java installed. See the "Setup":http://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version page for more information.
h3. Installation
* "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
* Run @bin/elasticsearch@ on Unix, or @bin\elasticsearch.bat@ on Windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ...
h3. Indexing
Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
Now, let's see if the information was added by GETting it:
<pre>
curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
</pre>
h3. Searching
Mmm search..., shouldn't it be elastic?
Let's find all the tweets that @kimchy@ posted:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
</pre>
We can also use the JSON query language Elasticsearch provides instead of a query string:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
{
"query" : {
"match" : { "user": "kimchy" }
}
}'
</pre>
Just for kicks, let's get all the documents stored (we should see the user as well):
<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
We can also do a range search (the @postDate@ was automatically identified as a date):
<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"range" : {
"postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
}
}
}'
</pre>
There are many more options to perform search; after all, it's a search product, no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.
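For instance, a minimal sketch of a @bool@ query combining a @term@ clause with a date range:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
{
    "query" : {
        "bool" : {
            "must" : [
                { "term" : { "user" : "kimchy" } },
                { "range" : { "postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T15:00:00" } } }
            ]
        }
    }
}'
</pre>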
h3. Multi Tenant - Indices and Types
Maan, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.
Elasticsearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:
<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
The above will index information into the @kimchy@ index, with two types, @info@ and @tweet@. Each user will get his own special index.
Complete control is available at the index level. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
{
"index" : {
"numberOfShards" : 1,
"numberOfReplicas" : 1
}
}'
</pre>
Search (and similar operations) is multi-index aware. This means that we can easily search across more than one
index (twitter user), for example:
<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
Or on all the indices:
<pre>
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
{One liner teaser}: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from friends of my friends).
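A minimal sketch of how that per-index boosting could look, assuming the @indices_boost@ element of the search body (values invented):
<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
    "indices_boost" : { "kimchy" : 2.0, "another_user" : 1.0 },
    "query" : {
        "match" : { "message" : "tweet" }
    }
}'
</pre>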
h3. Distributed, Highly Available
Let's face it, things will fail....
Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (to improve search performance) or 20/1 (to improve indexing performance, with search executed in a map-reduce fashion across shards).
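A sketch of that 20/1 indexing-heavy topology, reusing the settings syntax from above with a hypothetical index name:
<pre>
curl -XPUT 'http://localhost:9200/tweets_archive/' -d '
{
    "index" : {
        "numberOfShards" : 20,
        "numberOfReplicas" : 1
    }
}'
</pre>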
In order to play with the distributed nature of Elasticsearch, simply bring nodes up and shut them down. The system will continue to serve requests (make sure you use the correct HTTP port) with the latest data indexed.
h3. Where to go from here?
We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website.
h3. Building from Source
Elasticsearch uses "Maven":http://maven.apache.org for its build system.
In order to create a distribution, simply run the @mvn clean package
-DskipTests@ command in the cloned directory.
The distribution will be created under @target/releases@.
See the "TESTING":TESTING.asciidoc file for more information about
running the Elasticsearch test suite.
h3. Upgrading to Elasticsearch 1.x?
In order to ensure a smooth upgrade process from earlier versions of Elasticsearch (< 1.0.0), it is recommended to perform a full cluster restart. Please see the "setup reference":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.
h1. License
<pre>
This software is licensed under the Apache License, version 2 ("ALv2"), quoted below.
Copyright 2009-2016 Elasticsearch <https://www.elastic.co>
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
</pre>

View File

@ -24,6 +24,16 @@ import org.elasticsearch.gradle.BuildPlugin
apply plugin: 'elasticsearch.build'
apply plugin: 'com.bmuschko.nexus'
apply plugin: 'nebula.optional-base'
apply plugin: 'nebula.maven-base-publish'
apply plugin: 'nebula.maven-scm'
publishing {
publications {
nebula {
artifactId 'elasticsearch'
}
}
}
archivesBaseName = 'elasticsearch'
@ -53,7 +63,7 @@ dependencies {
compile 'com.carrotsearch:hppc:0.7.1'
// time handling, remove with java 8 time
compile 'joda-time:joda-time:2.8.2'
compile 'joda-time:joda-time:2.9.4'
// joda 2.0 moved to using volatile fields for datetime
// When updating to a new version, make sure to update our copy of BaseDateTime
compile 'org.joda:joda-convert:1.2'
@ -111,6 +121,36 @@ forbiddenPatterns {
exclude '**/org/elasticsearch/cluster/routing/shard_routes.txt'
}
task generateModulesList {
List<String> modules = project(':modules').subprojects.collect { it.name }
File modulesFile = new File(buildDir, 'generated-resources/modules.txt')
processResources.from(modulesFile)
inputs.property('modules', modules)
outputs.file(modulesFile)
doLast {
modulesFile.parentFile.mkdirs()
modulesFile.setText(modules.join('\n'), 'UTF-8')
}
}
task generatePluginsList {
List<String> plugins = project(':plugins').subprojects
.findAll { it.name.contains('example') == false }
.collect { it.name }
File pluginsFile = new File(buildDir, 'generated-resources/plugins.txt')
processResources.from(pluginsFile)
inputs.property('plugins', plugins)
outputs.file(pluginsFile)
doLast {
pluginsFile.parentFile.mkdirs()
pluginsFile.setText(plugins.join('\n'), 'UTF-8')
}
}
processResources {
dependsOn generateModulesList, generatePluginsList
}
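// Worth noting: both generator tasks declare their inputs (inputs.property) and
// outputs (outputs.file), so Gradle's up-to-date checking skips the doLast actions
// whenever the module and plugin sets are unchanged between builds.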
thirdPartyAudit.excludes = [
// uses internal java api: sun.security.x509 (X509CertInfo, X509CertImpl, X500Name)
'org.jboss.netty.handler.ssl.util.OpenJdkSelfSignedCertGenerator',

View File

@ -17,22 +17,21 @@
* under the License.
*/
package org.elasticsearch.search.query;
package org.apache.log4j;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.search.SearchParseElement;
import org.elasticsearch.search.internal.SearchContext;
import org.apache.log4j.helpers.ThreadLocalMap;
/**
* Log4j 1.2 MDC breaks because it parses java.version incorrectly (does not handle new java9 versioning).
*
* This hack fixes up the pkg private members as if it had detected the java version correctly.
*/
public class MinScoreParseElement implements SearchParseElement {
public class Java9Hack {
@Override
public void parse(XContentParser parser, SearchContext context) throws Exception {
XContentParser.Token token = parser.currentToken();
if (token.isValue()) {
context.minimumScore(parser.floatValue());
public static void fixLog4j() {
if (MDC.mdc.tlm == null) {
MDC.mdc.java1 = false;
MDC.mdc.tlm = new ThreadLocalMap();
}
}
}
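A minimal sketch of a call site for the hack above, assuming it is applied once at startup and only when running on Java 9 or later (the actual wiring is not part of this diff; the helper below is hypothetical):
import org.apache.log4j.Java9Hack;
class LoggingBootstrap {
    /** Hypothetical helper: apply the Log4j MDC fix once, before any logging occurs. */
    static void maybeFixLog4j() {
        // java.specification.version is "1.8" up to Java 8 and "9", "10", ... afterwards.
        String spec = System.getProperty("java.specification.version");
        if (spec != null && spec.startsWith("1.") == false) {
            Java9Hack.fixLog4j();
        }
    }
}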

View File

@ -21,9 +21,9 @@ package org.elasticsearch;
import org.elasticsearch.action.support.replication.ReplicationOperation;
import org.elasticsearch.cluster.action.shard.ShardStateAction;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.logging.LoggerMessageFormat;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -47,7 +47,7 @@ import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_UUID_NA_VAL
/**
* A base class for all elasticsearch exceptions.
*/
public class ElasticsearchException extends RuntimeException implements ToXContent {
public class ElasticsearchException extends RuntimeException implements ToXContent, Writeable {
public static final String REST_EXCEPTION_SKIP_CAUSE = "rest.exception.cause.skip";
public static final String REST_EXCEPTION_SKIP_STACK_TRACE = "rest.exception.stacktrace.skip";
@ -200,41 +200,7 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
return rootCause;
}
/**
* Check whether this exception contains an exception of the given type:
* either it is of the given class itself or it contains a nested cause
* of the given type.
*
* @param exType the exception type to look for
* @return whether there is a nested exception of the specified type
*/
public boolean contains(Class<? extends Throwable> exType) {
if (exType == null) {
return false;
}
if (exType.isInstance(this)) {
return true;
}
Throwable cause = getCause();
if (cause == this) {
return false;
}
if (cause instanceof ElasticsearchException) {
return ((ElasticsearchException) cause).contains(exType);
} else {
while (cause != null) {
if (exType.isInstance(cause)) {
return true;
}
if (cause.getCause() == cause) {
break;
}
cause = cause.getCause();
}
return false;
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeOptionalString(this.getMessage());
out.writeThrowable(this.getCause());
@ -530,7 +496,8 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
org.elasticsearch.index.shard.IndexShardStartedException::new, 23),
SEARCH_CONTEXT_MISSING_EXCEPTION(org.elasticsearch.search.SearchContextMissingException.class,
org.elasticsearch.search.SearchContextMissingException::new, 24),
SCRIPT_EXCEPTION(org.elasticsearch.script.ScriptException.class, org.elasticsearch.script.ScriptException::new, 25),
GENERAL_SCRIPT_EXCEPTION(org.elasticsearch.script.GeneralScriptException.class,
org.elasticsearch.script.GeneralScriptException::new, 25),
BATCH_OPERATION_EXCEPTION(org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException.class,
org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException::new, 26),
SNAPSHOT_CREATION_EXCEPTION(org.elasticsearch.snapshots.SnapshotCreationException.class,
@ -679,8 +646,6 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
org.elasticsearch.index.shard.IndexShardRecoveryException::new, 106),
REPOSITORY_MISSING_EXCEPTION(org.elasticsearch.repositories.RepositoryMissingException.class,
org.elasticsearch.repositories.RepositoryMissingException::new, 107),
PERCOLATOR_EXCEPTION(org.elasticsearch.index.percolator.PercolatorException.class,
org.elasticsearch.index.percolator.PercolatorException::new, 108),
DOCUMENT_SOURCE_MISSING_EXCEPTION(org.elasticsearch.index.engine.DocumentSourceMissingException.class,
org.elasticsearch.index.engine.DocumentSourceMissingException::new, 109),
FLUSH_NOT_ALLOWED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.FlushNotAllowedEngineException.class,
@ -742,7 +707,8 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
QUERY_SHARD_EXCEPTION(org.elasticsearch.index.query.QueryShardException.class,
org.elasticsearch.index.query.QueryShardException::new, 141),
NO_LONGER_PRIMARY_SHARD_EXCEPTION(ShardStateAction.NoLongerPrimaryShardException.class,
ShardStateAction.NoLongerPrimaryShardException::new, 142);
ShardStateAction.NoLongerPrimaryShardException::new, 142),
SCRIPT_EXCEPTION(org.elasticsearch.script.ScriptException.class, org.elasticsearch.script.ScriptException::new, 143);
final Class<? extends ElasticsearchException> exceptionClass;
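Since the class now implements Writeable, an exception round-trips through the stream layer; a minimal test-style sketch, assuming the usual stream helpers (BytesStreamOutput, StreamInput#readThrowable) and run inside a method that declares IOException:
// Sketch: serialize an ElasticsearchException and read it back.
BytesStreamOutput out = new BytesStreamOutput();
out.writeThrowable(new ElasticsearchException("index [{}] is missing", "twitter"));
ElasticsearchException copy = out.bytes().streamInput().readThrowable();
assert copy.getMessage().equals("index [twitter] is missing");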

View File

@ -32,7 +32,6 @@ import java.io.IOException;
/**
*/
@SuppressWarnings("deprecation")
public class Version {
/*
* The logic for ID is: XXYYZZAA, where XX is major version, YY is minor version, ZZ is revision, and AA is alpha/beta/rc indicator AA
@ -69,11 +68,17 @@ public class Version {
public static final Version V_2_3_1 = new Version(V_2_3_1_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_2_3_2_ID = 2030299;
public static final Version V_2_3_2 = new Version(V_2_3_2_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_2_3_3_ID = 2030399;
public static final Version V_2_3_3 = new Version(V_2_3_3_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_5_0_0_alpha1_ID = 5000001;
public static final Version V_5_0_0_alpha1 = new Version(V_5_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
public static final int V_5_0_0_alpha2_ID = 5000002;
public static final Version V_5_0_0_alpha2 = new Version(V_5_0_0_alpha2_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
public static final Version CURRENT = V_5_0_0_alpha2;
public static final int V_5_0_0_alpha3_ID = 5000003;
public static final Version V_5_0_0_alpha3 = new Version(V_5_0_0_alpha3_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
public static final int V_5_0_0_ID = 5000099;
public static final Version V_5_0_0 = new Version(V_5_0_0_ID, org.apache.lucene.util.Version.LUCENE_6_0_1);
public static final Version CURRENT = V_5_0_0;
static {
assert CURRENT.luceneVersion.equals(org.apache.lucene.util.Version.LATEST) : "Version must be upgraded to ["
@ -86,10 +91,16 @@ public class Version {
public static Version fromId(int id) {
switch (id) {
case V_5_0_0_ID:
return V_5_0_0;
case V_5_0_0_alpha3_ID:
return V_5_0_0_alpha3;
case V_5_0_0_alpha2_ID:
return V_5_0_0_alpha2;
case V_5_0_0_alpha1_ID:
return V_5_0_0_alpha1;
case V_2_3_3_ID:
return V_2_3_3;
case V_2_3_2_ID:
return V_2_3_2;
case V_2_3_1_ID:
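The XXYYZZAA packing described in the class comment decodes with plain integer arithmetic; a small self-contained sketch (the constants mirror the ones above, where a trailing 99 marks a full release):
// Decode an Elasticsearch version ID of the form XXYYZZAA.
public class VersionIdDemo {
    public static void main(String[] args) {
        int id = 5000099;                 // V_5_0_0_ID
        int major = id / 1000000;         // 5
        int minor = (id / 10000) % 100;   // 0
        int revision = (id / 100) % 100;  // 0
        int build = id % 100;             // 99 -> release; 01..03 -> alpha1..alpha3
        System.out.printf("%d.%d.%d (build indicator %d)%n", major, minor, revision, build);
    }
}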

View File

@ -115,6 +115,8 @@ import org.elasticsearch.action.admin.indices.settings.put.TransportUpdateSettin
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsAction;
import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresAction;
import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStoresAction;
import org.elasticsearch.action.admin.indices.shrink.ShrinkAction;
import org.elasticsearch.action.admin.indices.shrink.TransportShrinkAction;
import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction;
import org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction;
import org.elasticsearch.action.admin.indices.template.delete.DeleteIndexTemplateAction;
@ -165,10 +167,6 @@ import org.elasticsearch.action.ingest.SimulatePipelineAction;
import org.elasticsearch.action.ingest.SimulatePipelineTransportAction;
import org.elasticsearch.action.main.MainAction;
import org.elasticsearch.action.main.TransportMainAction;
import org.elasticsearch.action.percolate.MultiPercolateAction;
import org.elasticsearch.action.percolate.PercolateAction;
import org.elasticsearch.action.percolate.TransportMultiPercolateAction;
import org.elasticsearch.action.percolate.TransportPercolateAction;
import org.elasticsearch.action.search.ClearScrollAction;
import org.elasticsearch.action.search.MultiSearchAction;
import org.elasticsearch.action.search.SearchAction;
@ -290,6 +288,7 @@ public class ActionModule extends AbstractModule {
registerAction(IndicesSegmentsAction.INSTANCE, TransportIndicesSegmentsAction.class);
registerAction(IndicesShardStoresAction.INSTANCE, TransportIndicesShardStoresAction.class);
registerAction(CreateIndexAction.INSTANCE, TransportCreateIndexAction.class);
registerAction(ShrinkAction.INSTANCE, TransportShrinkAction.class);
registerAction(DeleteIndexAction.INSTANCE, TransportDeleteIndexAction.class);
registerAction(GetIndexAction.INSTANCE, TransportGetIndexAction.class);
registerAction(OpenIndexAction.INSTANCE, TransportOpenIndexAction.class);
@ -332,8 +331,6 @@ public class ActionModule extends AbstractModule {
registerAction(SearchAction.INSTANCE, TransportSearchAction.class);
registerAction(SearchScrollAction.INSTANCE, TransportSearchScrollAction.class);
registerAction(MultiSearchAction.INSTANCE, TransportMultiSearchAction.class);
registerAction(PercolateAction.INSTANCE, TransportPercolateAction.class);
registerAction(MultiPercolateAction.INSTANCE, TransportMultiPercolateAction.class);
registerAction(ExplainAction.INSTANCE, TransportExplainAction.class);
registerAction(ClearScrollAction.INSTANCE, TransportClearScrollAction.class);
registerAction(RecoveryAction.INSTANCE, TransportRecoveryAction.class);

View File

@ -39,6 +39,10 @@ public abstract class ActionRequest<Request extends ActionRequest<Request>> exte
public abstract ActionRequestValidationException validate();
public boolean getShouldPersistResult() {
return false;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);

View File

@ -40,7 +40,7 @@ public interface IndicesRequest {
*/
IndicesOptions indicesOptions();
static interface Replaceable extends IndicesRequest {
interface Replaceable extends IndicesRequest {
/**
* Sets the indices that the action relates to.
*/

View File

@ -42,17 +42,22 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
private final ShardId shard;
private final boolean primary;
private final boolean hasPendingAsyncFetch;
private final String assignedNodeId;
private final UnassignedInfo unassignedInfo;
private final long allocationDelayMillis;
private final long remainingDelayMillis;
private final Map<DiscoveryNode, NodeExplanation> nodeExplanations;
public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId, long remainingDelayMillis,
@Nullable UnassignedInfo unassignedInfo, Map<DiscoveryNode, NodeExplanation> nodeExplanations) {
public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId, long allocationDelayMillis,
long remainingDelayMillis, @Nullable UnassignedInfo unassignedInfo, boolean hasPendingAsyncFetch,
Map<DiscoveryNode, NodeExplanation> nodeExplanations) {
this.shard = shard;
this.primary = primary;
this.hasPendingAsyncFetch = hasPendingAsyncFetch;
this.assignedNodeId = assignedNodeId;
this.unassignedInfo = unassignedInfo;
this.allocationDelayMillis = allocationDelayMillis;
this.remainingDelayMillis = remainingDelayMillis;
this.nodeExplanations = nodeExplanations;
}
@ -60,8 +65,10 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
public ClusterAllocationExplanation(StreamInput in) throws IOException {
this.shard = ShardId.readShardId(in);
this.primary = in.readBoolean();
this.hasPendingAsyncFetch = in.readBoolean();
this.assignedNodeId = in.readOptionalString();
this.unassignedInfo = in.readOptionalWriteable(UnassignedInfo::new);
this.allocationDelayMillis = in.readVLong();
this.remainingDelayMillis = in.readVLong();
int mapSize = in.readVInt();
@ -77,8 +84,10 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
public void writeTo(StreamOutput out) throws IOException {
this.getShard().writeTo(out);
out.writeBoolean(this.isPrimary());
out.writeBoolean(this.isStillFetchingShardData());
out.writeOptionalString(this.getAssignedNodeId());
out.writeOptionalWriteable(this.getUnassignedInfo());
out.writeVLong(allocationDelayMillis);
out.writeVLong(remainingDelayMillis);
out.writeVInt(this.nodeExplanations.size());
@ -97,6 +106,11 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
return this.primary;
}
/** Return true if shard data is still being fetched for the allocation */
public boolean isStillFetchingShardData() {
return this.hasPendingAsyncFetch;
}
/** Return true if the shard is assigned to a node */
public boolean isAssigned() {
return this.assignedNodeId != null;
@ -114,7 +128,12 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
return this.unassignedInfo;
}
/** Return the remaining allocation delay for this shard in millisocends */
/** Return the configured delay before the shard can be allocated in milliseconds */
public long getAllocationDelayMillis() {
return this.allocationDelayMillis;
}
/** Return the remaining allocation delay for this shard in milliseconds */
public long getRemainingDelayMillis() {
return this.remainingDelayMillis;
}
@ -138,11 +157,11 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
if (assignedNodeId != null) {
builder.field("assigned_node_id", this.assignedNodeId);
}
builder.field("shard_state_fetch_pending", this.hasPendingAsyncFetch);
// If we have unassigned info, show that
if (unassignedInfo != null) {
unassignedInfo.toXContent(builder, params);
long delay = unassignedInfo.getLastComputedLeftDelayNanos();
builder.timeValueField("allocation_delay_in_millis", "allocation_delay", TimeValue.timeValueNanos(delay));
builder.timeValueField("allocation_delay_in_millis", "allocation_delay", TimeValue.timeValueMillis(allocationDelayMillis));
builder.timeValueField("remaining_delay_in_millis", "remaining_delay", TimeValue.timeValueMillis(remainingDelayMillis));
}
builder.startObject("nodes");
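
With the allocation delay now derived from the index setting rather than the unassigned info, and hasPendingAsyncFetch carried on the wire, both new fields must survive a serialization round trip. A minimal sketch of that check, assuming BytesStreamOutput and StreamInput.wrap from the common stream layer:

BytesStreamOutput out = new BytesStreamOutput();
explanation.writeTo(out);
ClusterAllocationExplanation copy = new ClusterAllocationExplanation(StreamInput.wrap(out.bytes()));
assert copy.getAllocationDelayMillis() == explanation.getAllocationDelayMillis();
assert copy.isStillFetchingShardData() == explanation.isStillFetchingShardData();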

View File

@ -19,7 +19,6 @@
package org.elasticsearch.action.admin.cluster.allocation;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.apache.lucene.index.CorruptIndexException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ExceptionsHelper;
@ -30,24 +29,18 @@ import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStores
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.ClusterInfoService;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.MetaData.Custom;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.RoutingNode;
import org.elasticsearch.cluster.routing.RoutingNodes;
import org.elasticsearch.cluster.routing.RoutingNodes.RoutingNodesIterator;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.RoutingExplanations;
import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
import org.elasticsearch.cluster.routing.allocation.decider.Decision;
@ -56,15 +49,17 @@ import org.elasticsearch.common.collect.ImmutableOpenIntMap;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.gateway.GatewayAllocator;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;
/**
* The {@code TransportClusterAllocationExplainAction} is responsible for actually executing the explanation of a shard's allocation on the
* master node in the cluster.
@ -72,26 +67,26 @@ import java.util.Set;
public class TransportClusterAllocationExplainAction
extends TransportMasterNodeAction<ClusterAllocationExplainRequest, ClusterAllocationExplainResponse> {
private final AllocationService allocationService;
private final ClusterInfoService clusterInfoService;
private final AllocationDeciders allocationDeciders;
private final ShardsAllocator shardAllocator;
private final TransportIndicesShardStoresAction shardStoresAction;
private final GatewayAllocator gatewayAllocator;
@Inject
public TransportClusterAllocationExplainAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver,
AllocationService allocationService, ClusterInfoService clusterInfoService,
AllocationDeciders allocationDeciders, ShardsAllocator shardAllocator,
TransportIndicesShardStoresAction shardStoresAction) {
ClusterInfoService clusterInfoService, AllocationDeciders allocationDeciders,
ShardsAllocator shardAllocator, TransportIndicesShardStoresAction shardStoresAction,
GatewayAllocator gatewayAllocator) {
super(settings, ClusterAllocationExplainAction.NAME, transportService, clusterService, threadPool, actionFilters,
indexNameExpressionResolver, ClusterAllocationExplainRequest::new);
this.allocationService = allocationService;
this.clusterInfoService = clusterInfoService;
this.allocationDeciders = allocationDeciders;
this.shardAllocator = shardAllocator;
this.shardStoresAction = shardStoresAction;
this.gatewayAllocator = gatewayAllocator;
}
@Override
@ -140,7 +135,8 @@ public class TransportClusterAllocationExplainAction
Float nodeWeight,
IndicesShardStoresResponse.StoreStatus storeStatus,
String assignedNodeId,
Set<String> activeAllocationIds) {
Set<String> activeAllocationIds,
boolean hasPendingAsyncFetch) {
final ClusterAllocationExplanation.FinalDecision finalDecision;
final ClusterAllocationExplanation.StoreCopy storeCopy;
final String finalExplanation;
@ -171,6 +167,19 @@ public class TransportClusterAllocationExplainAction
if (node.getId().equals(assignedNodeId)) {
finalDecision = ClusterAllocationExplanation.FinalDecision.ALREADY_ASSIGNED;
finalExplanation = "the shard is already assigned to this node";
} else if (hasPendingAsyncFetch &&
shard.primary() == false &&
shard.unassigned() &&
shard.allocatedPostIndexCreate(indexMetaData) &&
nodeDecision.type() != Decision.Type.YES) {
finalExplanation = "the shard cannot be assigned because allocation deciders return a " + nodeDecision.type().name() +
" decision and the shard's state is still being fetched";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (hasPendingAsyncFetch &&
shard.unassigned() &&
shard.allocatedPostIndexCreate(indexMetaData)) {
finalExplanation = "the shard's state is still being fetched so it cannot be allocated";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (shard.primary() && shard.unassigned() && shard.allocatedPostIndexCreate(indexMetaData) &&
storeCopy == ClusterAllocationExplanation.StoreCopy.STALE) {
finalExplanation = "the copy of the shard is stale, allocation ids do not match";
@ -190,6 +199,7 @@ public class TransportClusterAllocationExplainAction
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
finalExplanation = "the shard cannot be assigned because one or more allocation decider returns a 'NO' decision";
} else {
// TODO: handle throttling decision better here
finalDecision = ClusterAllocationExplanation.FinalDecision.YES;
if (storeCopy == ClusterAllocationExplanation.StoreCopy.AVAILABLE) {
finalExplanation = "the shard can be assigned and the node contains a valid copy of the shard data";
@ -208,16 +218,15 @@ public class TransportClusterAllocationExplainAction
*/
public static ClusterAllocationExplanation explainShard(ShardRouting shard, RoutingAllocation allocation, RoutingNodes routingNodes,
boolean includeYesDecisions, ShardsAllocator shardAllocator,
List<IndicesShardStoresResponse.StoreStatus> shardStores) {
List<IndicesShardStoresResponse.StoreStatus> shardStores,
GatewayAllocator gatewayAllocator) {
// don't short circuit deciders, we want a full explanation
allocation.debugDecision(true);
// get the existing unassigned info if available
UnassignedInfo ui = shard.unassignedInfo();
RoutingNodesIterator iter = routingNodes.nodes();
Map<DiscoveryNode, Decision> nodeToDecision = new HashMap<>();
while (iter.hasNext()) {
RoutingNode node = iter.next();
for (RoutingNode node : routingNodes) {
DiscoveryNode discoNode = node.node();
if (discoNode.isDataNode()) {
Decision d = tryShardOnNode(shard, node, allocation, includeYesDecisions);
@ -227,9 +236,9 @@ public class TransportClusterAllocationExplainAction
long remainingDelayMillis = 0;
final MetaData metadata = allocation.metaData();
final IndexMetaData indexMetaData = metadata.index(shard.index());
if (ui != null) {
final Settings indexSettings = indexMetaData.getSettings();
long remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), metadata.settings(), indexSettings);
long allocationDelayMillis = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexMetaData.getSettings()).getMillis();
if (ui != null && ui.isDelayed()) {
long remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), indexMetaData.getSettings());
remainingDelayMillis = TimeValue.timeValueNanos(remainingDelayNanos).millis();
}
@ -248,19 +257,21 @@ public class TransportClusterAllocationExplainAction
Float weight = weights.get(node);
IndicesShardStoresResponse.StoreStatus storeStatus = nodeToStatus.get(node);
NodeExplanation nodeExplanation = calculateNodeExplanation(shard, indexMetaData, node, decision, weight,
storeStatus, shard.currentNodeId(), indexMetaData.activeAllocationIds(shard.getId()));
storeStatus, shard.currentNodeId(), indexMetaData.activeAllocationIds(shard.getId()),
allocation.hasPendingAsyncFetch());
explanations.put(node, nodeExplanation);
}
return new ClusterAllocationExplanation(shard.shardId(), shard.primary(),
shard.currentNodeId(), remainingDelayMillis, ui, explanations);
shard.currentNodeId(), allocationDelayMillis, remainingDelayMillis, ui,
gatewayAllocator.hasFetchPending(shard.shardId(), shard.primary()), explanations);
}
@Override
protected void masterOperation(final ClusterAllocationExplainRequest request, final ClusterState state,
final ActionListener<ClusterAllocationExplainResponse> listener) {
final RoutingNodes routingNodes = state.getRoutingNodes();
final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state.nodes(),
clusterInfoService.getClusterInfo(), System.nanoTime());
final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state,
clusterInfoService.getClusterInfo(), System.nanoTime(), false);
ShardRouting foundShard = null;
if (request.useAnyUnassignedShard()) {
@ -307,7 +318,7 @@ public class TransportClusterAllocationExplainAction
shardStoreResponse.getStoreStatuses().get(shardRouting.getIndexName());
List<IndicesShardStoresResponse.StoreStatus> shardStoreStatus = shardStatuses.get(shardRouting.id());
ClusterAllocationExplanation cae = explainShard(shardRouting, allocation, routingNodes,
request.includeYesDecisions(), shardAllocator, shardStoreStatus);
request.includeYesDecisions(), shardAllocator, shardStoreStatus, gatewayAllocator);
listener.onResponse(new ClusterAllocationExplainResponse(cae));
}
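
From the Java client, the explanation, including the new delay and fetch-pending information, can be retrieved roughly as below; the builder method names are assumptions about the request API rather than something shown in this hunk:

ClusterAllocationExplainResponse response = client.admin().cluster()
    .prepareAllocationExplain()
    .setIndex("twitter").setShard(0).setPrimary(false)
    .get();
ClusterAllocationExplanation explanation = response.getExplanation();
long delayMillis = explanation.getAllocationDelayMillis();     // configured delay from the index setting
boolean fetchPending = explanation.isStillFetchingShardData(); // async shard fetch still in flight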

View File

@ -182,7 +182,7 @@ public class ClusterHealthResponse extends ActionResponse implements StatusToXCo
super.readFrom(in);
clusterName = in.readString();
clusterHealthStatus = ClusterHealthStatus.fromValue(in.readByte());
clusterStateHealth = ClusterStateHealth.readClusterHealth(in);
clusterStateHealth = new ClusterStateHealth(in);
numberOfPendingTasks = in.readInt();
timedOut = in.readBoolean();
numberOfInFlightFetch = in.readInt();
@ -222,50 +222,48 @@ public class ClusterHealthResponse extends ActionResponse implements StatusToXCo
return isTimedOut() ? RestStatus.REQUEST_TIMEOUT : RestStatus.OK;
}
static final class Fields {
static final String CLUSTER_NAME = "cluster_name";
static final String STATUS = "status";
static final String TIMED_OUT = "timed_out";
static final String NUMBER_OF_NODES = "number_of_nodes";
static final String NUMBER_OF_DATA_NODES = "number_of_data_nodes";
static final String NUMBER_OF_PENDING_TASKS = "number_of_pending_tasks";
static final String NUMBER_OF_IN_FLIGHT_FETCH = "number_of_in_flight_fetch";
static final String DELAYED_UNASSIGNED_SHARDS = "delayed_unassigned_shards";
static final String TASK_MAX_WAIT_TIME_IN_QUEUE = "task_max_waiting_in_queue";
static final String TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS = "task_max_waiting_in_queue_millis";
static final String ACTIVE_SHARDS_PERCENT_AS_NUMBER = "active_shards_percent_as_number";
static final String ACTIVE_SHARDS_PERCENT = "active_shards_percent";
static final String ACTIVE_PRIMARY_SHARDS = "active_primary_shards";
static final String ACTIVE_SHARDS = "active_shards";
static final String RELOCATING_SHARDS = "relocating_shards";
static final String INITIALIZING_SHARDS = "initializing_shards";
static final String UNASSIGNED_SHARDS = "unassigned_shards";
static final String INDICES = "indices";
}
private static final String CLUSTER_NAME = "cluster_name";
private static final String STATUS = "status";
private static final String TIMED_OUT = "timed_out";
private static final String NUMBER_OF_NODES = "number_of_nodes";
private static final String NUMBER_OF_DATA_NODES = "number_of_data_nodes";
private static final String NUMBER_OF_PENDING_TASKS = "number_of_pending_tasks";
private static final String NUMBER_OF_IN_FLIGHT_FETCH = "number_of_in_flight_fetch";
private static final String DELAYED_UNASSIGNED_SHARDS = "delayed_unassigned_shards";
private static final String TASK_MAX_WAIT_TIME_IN_QUEUE = "task_max_waiting_in_queue";
private static final String TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS = "task_max_waiting_in_queue_millis";
private static final String ACTIVE_SHARDS_PERCENT_AS_NUMBER = "active_shards_percent_as_number";
private static final String ACTIVE_SHARDS_PERCENT = "active_shards_percent";
private static final String ACTIVE_PRIMARY_SHARDS = "active_primary_shards";
private static final String ACTIVE_SHARDS = "active_shards";
private static final String RELOCATING_SHARDS = "relocating_shards";
private static final String INITIALIZING_SHARDS = "initializing_shards";
private static final String UNASSIGNED_SHARDS = "unassigned_shards";
private static final String INDICES = "indices";
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field(Fields.CLUSTER_NAME, getClusterName());
builder.field(Fields.STATUS, getStatus().name().toLowerCase(Locale.ROOT));
builder.field(Fields.TIMED_OUT, isTimedOut());
builder.field(Fields.NUMBER_OF_NODES, getNumberOfNodes());
builder.field(Fields.NUMBER_OF_DATA_NODES, getNumberOfDataNodes());
builder.field(Fields.ACTIVE_PRIMARY_SHARDS, getActivePrimaryShards());
builder.field(Fields.ACTIVE_SHARDS, getActiveShards());
builder.field(Fields.RELOCATING_SHARDS, getRelocatingShards());
builder.field(Fields.INITIALIZING_SHARDS, getInitializingShards());
builder.field(Fields.UNASSIGNED_SHARDS, getUnassignedShards());
builder.field(Fields.DELAYED_UNASSIGNED_SHARDS, getDelayedUnassignedShards());
builder.field(Fields.NUMBER_OF_PENDING_TASKS, getNumberOfPendingTasks());
builder.field(Fields.NUMBER_OF_IN_FLIGHT_FETCH, getNumberOfInFlightFetch());
builder.timeValueField(Fields.TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, Fields.TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
builder.percentageField(Fields.ACTIVE_SHARDS_PERCENT_AS_NUMBER, Fields.ACTIVE_SHARDS_PERCENT, getActiveShardsPercent());
builder.field(CLUSTER_NAME, getClusterName());
builder.field(STATUS, getStatus().name().toLowerCase(Locale.ROOT));
builder.field(TIMED_OUT, isTimedOut());
builder.field(NUMBER_OF_NODES, getNumberOfNodes());
builder.field(NUMBER_OF_DATA_NODES, getNumberOfDataNodes());
builder.field(ACTIVE_PRIMARY_SHARDS, getActivePrimaryShards());
builder.field(ACTIVE_SHARDS, getActiveShards());
builder.field(RELOCATING_SHARDS, getRelocatingShards());
builder.field(INITIALIZING_SHARDS, getInitializingShards());
builder.field(UNASSIGNED_SHARDS, getUnassignedShards());
builder.field(DELAYED_UNASSIGNED_SHARDS, getDelayedUnassignedShards());
builder.field(NUMBER_OF_PENDING_TASKS, getNumberOfPendingTasks());
builder.field(NUMBER_OF_IN_FLIGHT_FETCH, getNumberOfInFlightFetch());
builder.timeValueField(TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
builder.percentageField(ACTIVE_SHARDS_PERCENT_AS_NUMBER, ACTIVE_SHARDS_PERCENT, getActiveShardsPercent());
String level = params.param("level", "cluster");
boolean outputIndices = "indices".equals(level) || "shards".equals(level);
if (outputIndices) {
builder.startObject(Fields.INDICES);
builder.startObject(INDICES);
for (ClusterIndexHealth indexHealth : clusterStateHealth.getIndices().values()) {
builder.startObject(indexHealth.getIndex());
indexHealth.toXContent(builder, params);
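
The status() mapping above means a timed-out health wait surfaces as HTTP 408 rather than as an error. A sketch of the usual wait-and-check from the Java client (prepareHealth and setWaitForYellowStatus are the stock builders; the timeout value is arbitrary):

ClusterHealthResponse health = client.admin().cluster()
    .prepareHealth()
    .setWaitForYellowStatus()
    .setTimeout(TimeValue.timeValueSeconds(30))
    .get();
if (health.isTimedOut()) {
    // surfaced as RestStatus.REQUEST_TIMEOUT (408) in the REST layer
}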

View File

@ -54,7 +54,8 @@ public class TransportClusterHealthAction extends TransportMasterNodeReadAction<
public TransportClusterHealthAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, ClusterName clusterName, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver, GatewayAllocator gatewayAllocator) {
super(settings, ClusterHealthAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterHealthRequest::new);
super(settings, ClusterHealthAction.NAME, false, transportService, clusterService, threadPool, actionFilters,
indexNameExpressionResolver, ClusterHealthRequest::new);
this.clusterName = clusterName;
this.gatewayAllocator = gatewayAllocator;
}

View File

@ -19,12 +19,14 @@
package org.elasticsearch.action.admin.cluster.node.hotthreads;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import java.util.List;
/**
*/
@ -33,26 +35,18 @@ public class NodesHotThreadsResponse extends BaseNodesResponse<NodeHotThreads> {
NodesHotThreadsResponse() {
}
public NodesHotThreadsResponse(ClusterName clusterName, NodeHotThreads[] nodes) {
super(clusterName, nodes);
public NodesHotThreadsResponse(ClusterName clusterName, List<NodeHotThreads> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeHotThreads[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = NodeHotThreads.readNodeHotThreads(in);
}
protected List<NodeHotThreads> readNodesFrom(StreamInput in) throws IOException {
return in.readList(NodeHotThreads::readNodeHotThreads);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeHotThreads node : nodes) {
node.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeHotThreads> nodes) throws IOException {
out.writeStreamableList(nodes);
}
}
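
After this refactor a BaseNodesResponse subclass only describes how to read and write its node list; aggregation and failure bookkeeping live in the base class. A usage sketch, assuming getNodes() and failures() as the base-class accessors this change standardizes on:

NodesHotThreadsResponse response = client.admin().cluster().prepareNodesHotThreads().get();
for (NodeHotThreads node : response.getNodes()) {
    System.out.println(node.getHotThreads());
}
for (FailedNodeException failure : response.failures()) {
    // failed nodes are now reported alongside the successful ones
}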

View File

@ -20,6 +20,7 @@
package org.elasticsearch.action.admin.cluster.node.hotthreads;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
@ -35,33 +36,28 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportNodesHotThreadsAction extends TransportNodesAction<NodesHotThreadsRequest, NodesHotThreadsResponse, TransportNodesHotThreadsAction.NodeRequest, NodeHotThreads> {
public class TransportNodesHotThreadsAction extends TransportNodesAction<NodesHotThreadsRequest,
NodesHotThreadsResponse,
TransportNodesHotThreadsAction.NodeRequest,
NodeHotThreads> {
@Inject
public TransportNodesHotThreadsAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesHotThreadsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, NodesHotThreadsRequest::new, NodeRequest::new, ThreadPool.Names.GENERIC);
indexNameExpressionResolver, NodesHotThreadsRequest::new, NodeRequest::new, ThreadPool.Names.GENERIC, NodeHotThreads.class);
}
@Override
protected NodesHotThreadsResponse newResponse(NodesHotThreadsRequest request, AtomicReferenceArray responses) {
final List<NodeHotThreads> nodes = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeHotThreads) {
nodes.add((NodeHotThreads) resp);
}
}
return new NodesHotThreadsResponse(clusterName, nodes.toArray(new NodeHotThreads[nodes.size()]));
protected NodesHotThreadsResponse newResponse(NodesHotThreadsRequest request,
List<NodeHotThreads> responses, List<FailedNodeException> failures) {
return new NodesHotThreadsResponse(clusterName, responses, failures);
}
@Override

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.info;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.node.DiscoveryNode;
@ -30,6 +31,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.List;
import java.util.Map;
/**
@ -40,34 +42,24 @@ public class NodesInfoResponse extends BaseNodesResponse<NodeInfo> implements To
public NodesInfoResponse() {
}
public NodesInfoResponse(ClusterName clusterName, NodeInfo[] nodes) {
super(clusterName, nodes);
public NodesInfoResponse(ClusterName clusterName, List<NodeInfo> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeInfo[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = NodeInfo.readNodeInfo(in);
}
protected List<NodeInfo> readNodesFrom(StreamInput in) throws IOException {
return in.readList(NodeInfo::readNodeInfo);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeInfo node : nodes) {
node.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeInfo> nodes) throws IOException {
out.writeStreamableList(nodes);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("cluster_name", getClusterName().value());
builder.startObject("nodes");
for (NodeInfo nodeInfo : this) {
for (NodeInfo nodeInfo : getNodes()) {
builder.startObject(nodeInfo.getNode().getId());
builder.field("name", nodeInfo.getNode().getName());

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.info;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
@ -34,36 +35,32 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportNodesInfoAction extends TransportNodesAction<NodesInfoRequest, NodesInfoResponse, TransportNodesInfoAction.NodeInfoRequest, NodeInfo> {
public class TransportNodesInfoAction extends TransportNodesAction<NodesInfoRequest,
NodesInfoResponse,
TransportNodesInfoAction.NodeInfoRequest,
NodeInfo> {
private final NodeService nodeService;
@Inject
public TransportNodesInfoAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
NodeService nodeService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
NodeService nodeService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesInfoAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, NodesInfoRequest::new, NodeInfoRequest::new, ThreadPool.Names.MANAGEMENT);
indexNameExpressionResolver, NodesInfoRequest::new, NodeInfoRequest::new, ThreadPool.Names.MANAGEMENT, NodeInfo.class);
this.nodeService = nodeService;
}
@Override
protected NodesInfoResponse newResponse(NodesInfoRequest nodesInfoRequest, AtomicReferenceArray responses) {
final List<NodeInfo> nodesInfos = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeInfo) {
nodesInfos.add((NodeInfo) resp);
}
}
return new NodesInfoResponse(clusterName, nodesInfos.toArray(new NodeInfo[nodesInfos.size()]));
protected NodesInfoResponse newResponse(NodesInfoRequest nodesInfoRequest,
List<NodeInfo> responses, List<FailedNodeException> failures) {
return new NodesInfoResponse(clusterName, responses, failures);
}
@Override

View File

@ -38,7 +38,8 @@ public final class TransportLivenessAction implements TransportRequestHandler<Li
ClusterService clusterService, TransportService transportService) {
this.clusterService = clusterService;
this.clusterName = clusterName;
transportService.registerRequestHandler(NAME, LivenessRequest::new, ThreadPool.Names.SAME, this);
transportService.registerRequestHandler(NAME, LivenessRequest::new, ThreadPool.Names.SAME,
false, false /*can not trip circuit breaker*/, this);
}
@Override

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.io.stream.StreamInput;
@ -28,6 +29,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.List;
/**
*
@ -37,34 +39,24 @@ public class NodesStatsResponse extends BaseNodesResponse<NodeStats> implements
NodesStatsResponse() {
}
public NodesStatsResponse(ClusterName clusterName, NodeStats[] nodes) {
super(clusterName, nodes);
public NodesStatsResponse(ClusterName clusterName, List<NodeStats> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeStats[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = NodeStats.readNodeStats(in);
}
protected List<NodeStats> readNodesFrom(StreamInput in) throws IOException {
return in.readList(NodeStats::readNodeStats);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeStats node : nodes) {
node.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeStats> nodes) throws IOException {
out.writeStreamableList(nodes);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("cluster_name", getClusterName().value());
builder.startObject("nodes");
for (NodeStats nodeStats : this) {
for (NodeStats nodeStats : getNodes()) {
builder.startObject(nodeStats.getNode().getId());
builder.field("timestamp", nodeStats.getTimestamp());
nodeStats.toXContent(builder, params);

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
@ -34,36 +35,31 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportNodesStatsAction extends TransportNodesAction<NodesStatsRequest, NodesStatsResponse, TransportNodesStatsAction.NodeStatsRequest, NodeStats> {
public class TransportNodesStatsAction extends TransportNodesAction<NodesStatsRequest,
NodesStatsResponse,
TransportNodesStatsAction.NodeStatsRequest,
NodeStats> {
private final NodeService nodeService;
@Inject
public TransportNodesStatsAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
NodeService nodeService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesStatsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,
NodesStatsRequest::new, NodeStatsRequest::new, ThreadPool.Names.MANAGEMENT);
NodeService nodeService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesStatsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, NodesStatsRequest::new, NodeStatsRequest::new, ThreadPool.Names.MANAGEMENT, NodeStats.class);
this.nodeService = nodeService;
}
@Override
protected NodesStatsResponse newResponse(NodesStatsRequest nodesInfoRequest, AtomicReferenceArray responses) {
final List<NodeStats> nodeStats = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeStats) {
nodeStats.add((NodeStats) resp);
}
}
return new NodesStatsResponse(clusterName, nodeStats.toArray(new NodeStats[nodeStats.size()]));
protected NodesStatsResponse newResponse(NodesStatsRequest request, List<NodeStats> responses, List<FailedNodeException> failures) {
return new NodesStatsResponse(clusterName, responses, failures);
}
@Override

View File

@ -192,6 +192,7 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
builder.endObject();
builder.endObject();
}
builder.endObject();
} else if ("parents".equals(groupBy)) {
builder.startObject("tasks");
for (TaskGroup group : getTaskGroups()) {

View File

@ -19,28 +19,24 @@
package org.elasticsearch.action.admin.cluster.reroute;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommandRegistry;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;
import org.elasticsearch.common.ParseFieldMatcher;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.Objects;
/**
* Request to submit cluster reroute allocation commands
*/
public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteRequest> {
AllocationCommands commands = new AllocationCommands();
boolean dryRun;
boolean explain;
private AllocationCommands commands = new AllocationCommands();
private boolean dryRun;
private boolean explain;
private boolean retryFailed;
public ClusterRerouteRequest() {
}
@ -81,6 +77,15 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
return this;
}
/**
* Sets the retry failed flag (defaults to <tt>false</tt>). If true, the
* request will retry allocating shards that can't currently be allocated due to too many allocation failures.
*/
public ClusterRerouteRequest setRetryFailed(boolean retryFailed) {
this.retryFailed = retryFailed;
return this;
}
/**
* Returns the current explain flag
*/
@ -88,41 +93,27 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
return this.explain;
}
/**
* Returns the current retry failed flag
*/
public boolean isRetryFailed() {
return this.retryFailed;
}
/**
* Set the allocation commands to execute.
*/
public ClusterRerouteRequest commands(AllocationCommand... commands) {
this.commands = new AllocationCommands(commands);
public ClusterRerouteRequest commands(AllocationCommands commands) {
this.commands = commands;
return this;
}
/**
* Sets the source for the request.
* Returns the allocation commands to execute
*/
public ClusterRerouteRequest source(BytesReference source, AllocationCommandRegistry registry, ParseFieldMatcher parseFieldMatcher)
throws Exception {
try (XContentParser parser = XContentHelper.createParser(source)) {
XContentParser.Token token;
String currentFieldName = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
} else if (token == XContentParser.Token.START_ARRAY) {
if ("commands".equals(currentFieldName)) {
this.commands = AllocationCommands.fromXContent(parser, parseFieldMatcher, registry);
} else {
throw new ElasticsearchParseException("failed to parse reroute request, got start array with wrong field name [{}]", currentFieldName);
}
} else if (token.isValue()) {
if ("dry_run".equals(currentFieldName) || "dryRun".equals(currentFieldName)) {
dryRun = parser.booleanValue();
} else {
throw new ElasticsearchParseException("failed to parse reroute request, got value with wrong field name [{}]", currentFieldName);
}
}
}
}
return this;
public AllocationCommands getCommands() {
return commands;
}
@Override
@ -136,6 +127,7 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
commands = AllocationCommands.readFrom(in);
dryRun = in.readBoolean();
explain = in.readBoolean();
retryFailed = in.readBoolean();
readTimeout(in);
}
@ -145,6 +137,28 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
AllocationCommands.writeTo(commands, out);
out.writeBoolean(dryRun);
out.writeBoolean(explain);
out.writeBoolean(retryFailed);
writeTimeout(out);
}
@Override
public boolean equals(Object obj) {
if (obj == null || getClass() != obj.getClass()) {
return false;
}
ClusterRerouteRequest other = (ClusterRerouteRequest) obj;
// Override equals and hashCode for testing
return Objects.equals(commands, other.commands) &&
Objects.equals(dryRun, other.dryRun) &&
Objects.equals(explain, other.explain) &&
Objects.equals(timeout, other.timeout) &&
Objects.equals(retryFailed, other.retryFailed) &&
Objects.equals(masterNodeTimeout, other.masterNodeTimeout);
}
@Override
public int hashCode() {
// Override equals and hashCode for testing
return Objects.hash(commands, dryRun, explain, timeout, retryFailed, masterNodeTimeout);
}
}
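
As the inline comments note, equals and hashCode exist to support testing; the typical use is asserting that a request survives the wire unchanged. A minimal sketch, assuming StreamInput.wrap is available and that dryRun(true) chains like setRetryFailed:

ClusterRerouteRequest original = new ClusterRerouteRequest().dryRun(true).setRetryFailed(true);
BytesStreamOutput out = new BytesStreamOutput();
original.writeTo(out);

ClusterRerouteRequest copy = new ClusterRerouteRequest();
copy.readFrom(StreamInput.wrap(out.bytes()));
assert original.equals(copy) && original.hashCode() == copy.hashCode();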

View File

@ -22,13 +22,12 @@ package org.elasticsearch.action.admin.cluster.reroute;
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand;
import org.elasticsearch.common.bytes.BytesReference;
/**
* Builder for a cluster reroute request
*/
public class ClusterRerouteRequestBuilder extends AcknowledgedRequestBuilder<ClusterRerouteRequest, ClusterRerouteResponse, ClusterRerouteRequestBuilder> {
public class ClusterRerouteRequestBuilder
extends AcknowledgedRequestBuilder<ClusterRerouteRequest, ClusterRerouteResponse, ClusterRerouteRequestBuilder> {
public ClusterRerouteRequestBuilder(ElasticsearchClient client, ClusterRerouteAction action) {
super(client, action, new ClusterRerouteRequest());
}
@ -61,10 +60,11 @@ public class ClusterRerouteRequestBuilder extends AcknowledgedRequestBuilder<Clu
}
/**
* Sets the commands for the request to execute.
* Sets the retry failed flag (defaults to <tt>false</tt>). If true, the
* request will retry allocating shards that can't currently be allocated due to too many allocation failures.
*/
public ClusterRerouteRequestBuilder setCommands(AllocationCommand... commands) throws Exception {
request.commands(commands);
public ClusterRerouteRequestBuilder setRetryFailed(boolean retryFailed) {
request.setRetryFailed(retryFailed);
return this;
}
}
}
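
Usage from the Java client is then a one-liner; the sketch below retries shards that are blocked after too many allocation failures (prepareReroute is the standard client entry point):

ClusterRerouteResponse response = client.admin().cluster()
    .prepareReroute()
    .setRetryFailed(true)  // re-attempt shards blocked by repeated allocation failures
    .get();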

View File

@ -33,6 +33,7 @@ import org.elasticsearch.cluster.routing.allocation.RoutingExplanations;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -68,38 +69,55 @@ public class TransportClusterRerouteAction extends TransportMasterNodeAction<Clu
@Override
protected void masterOperation(final ClusterRerouteRequest request, final ClusterState state, final ActionListener<ClusterRerouteResponse> listener) {
clusterService.submitStateUpdateTask("cluster_reroute (api)", new AckedClusterStateUpdateTask<ClusterRerouteResponse>(Priority.IMMEDIATE, request, listener) {
private volatile ClusterState clusterStateToSend;
private volatile RoutingExplanations explanations;
@Override
protected ClusterRerouteResponse newResponse(boolean acknowledged) {
return new ClusterRerouteResponse(acknowledged, clusterStateToSend, explanations);
}
@Override
public void onAckTimeout() {
listener.onResponse(new ClusterRerouteResponse(false, clusterStateToSend, new RoutingExplanations()));
}
@Override
public void onFailure(String source, Throwable t) {
logger.debug("failed to perform [{}]", t, source);
super.onFailure(source, t);
}
@Override
public ClusterState execute(ClusterState currentState) {
RoutingAllocation.Result routingResult = allocationService.reroute(currentState, request.commands, request.explain());
ClusterState newState = ClusterState.builder(currentState).routingResult(routingResult).build();
clusterStateToSend = newState;
explanations = routingResult.explanations();
if (request.dryRun) {
return currentState;
}
return newState;
}
});
clusterService.submitStateUpdateTask("cluster_reroute (api)", new ClusterRerouteResponseAckedClusterStateUpdateTask(logger,
allocationService, request, listener));
}
}
static class ClusterRerouteResponseAckedClusterStateUpdateTask extends AckedClusterStateUpdateTask<ClusterRerouteResponse> {
private final ClusterRerouteRequest request;
private final ActionListener<ClusterRerouteResponse> listener;
private final ESLogger logger;
private final AllocationService allocationService;
private volatile ClusterState clusterStateToSend;
private volatile RoutingExplanations explanations;
ClusterRerouteResponseAckedClusterStateUpdateTask(ESLogger logger, AllocationService allocationService, ClusterRerouteRequest request,
ActionListener<ClusterRerouteResponse> listener) {
super(Priority.IMMEDIATE, request, listener);
this.request = request;
this.listener = listener;
this.logger = logger;
this.allocationService = allocationService;
}
@Override
protected ClusterRerouteResponse newResponse(boolean acknowledged) {
return new ClusterRerouteResponse(acknowledged, clusterStateToSend, explanations);
}
@Override
public void onAckTimeout() {
listener.onResponse(new ClusterRerouteResponse(false, clusterStateToSend, new RoutingExplanations()));
}
@Override
public void onFailure(String source, Throwable t) {
logger.debug("failed to perform [{}]", t, source);
super.onFailure(source, t);
}
@Override
public ClusterState execute(ClusterState currentState) {
RoutingAllocation.Result routingResult = allocationService.reroute(currentState, request.getCommands(), request.explain(),
request.isRetryFailed());
ClusterState newState = ClusterState.builder(currentState).routingResult(routingResult).build();
clusterStateToSend = newState;
explanations = routingResult.explanations();
if (request.dryRun()) {
return currentState;
}
return newState;
}
}
}
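
Extracting the anonymous update task into the static ClusterRerouteResponseAckedClusterStateUpdateTask makes the reroute logic exercisable without a transport action. A test-style sketch, where logger, allocationService, listener, and currentState are assumed to be provided by the test fixture:

ClusterRerouteRequest request = new ClusterRerouteRequest().dryRun(true);
ClusterRerouteResponseAckedClusterStateUpdateTask task =
        new ClusterRerouteResponseAckedClusterStateUpdateTask(logger, allocationService, request, listener);
ClusterState result = task.execute(currentState);
// with dryRun set, execute() computes the reroute but hands back currentState unchanged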

View File

@ -26,6 +26,7 @@ import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
@ -33,16 +34,14 @@ import java.io.IOException;
*/
public class ClusterSearchShardsGroup implements Streamable, ToXContent {
private Index index;
private int shardId;
private ShardId shardId;
ShardRouting[] shards;
ClusterSearchShardsGroup() {
}
public ClusterSearchShardsGroup(Index index, int shardId, ShardRouting[] shards) {
this.index = index;
public ClusterSearchShardsGroup(ShardId shardId, ShardRouting[] shards) {
this.shardId = shardId;
this.shards = shards;
}
@ -54,11 +53,11 @@ public class ClusterSearchShardsGroup implements Streamable, ToXContent {
}
public String getIndex() {
return index.getName();
return shardId.getIndexName();
}
public int getShardId() {
return shardId;
return shardId.id();
}
public ShardRouting[] getShards() {
@ -67,18 +66,16 @@ public class ClusterSearchShardsGroup implements Streamable, ToXContent {
@Override
public void readFrom(StreamInput in) throws IOException {
index = new Index(in);
shardId = in.readVInt();
shardId = ShardId.readShardId(in);
shards = new ShardRouting[in.readVInt()];
for (int i = 0; i < shards.length; i++) {
shards[i] = ShardRouting.readShardRoutingEntry(in, index, shardId);
shards[i] = new ShardRouting(shardId, in);
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
index.writeTo(out);
out.writeVInt(shardId);
shardId.writeTo(out);
out.writeVInt(shards.length);
for (ShardRouting shardRouting : shards) {
shardRouting.writeToThin(out);

View File

@ -34,6 +34,7 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -78,8 +79,7 @@ public class TransportClusterSearchShardsAction extends TransportMasterNodeReadA
ClusterSearchShardsGroup[] groupResponses = new ClusterSearchShardsGroup[groupShardsIterator.size()];
int currentGroup = 0;
for (ShardIterator shardIt : groupShardsIterator) {
Index index = shardIt.shardId().getIndex();
int shardId = shardIt.shardId().getId();
ShardId shardId = shardIt.shardId();
ShardRouting[] shardRoutings = new ShardRouting[shardIt.size()];
int currentShard = 0;
shardIt.reset();
@ -87,7 +87,7 @@ public class TransportClusterSearchShardsAction extends TransportMasterNodeReadA
shardRoutings[currentShard++] = shard;
nodeIds.add(shard.currentNodeId());
}
groupResponses[currentGroup++] = new ClusterSearchShardsGroup(index, shardId, shardRoutings);
groupResponses[currentGroup++] = new ClusterSearchShardsGroup(shardId, shardRoutings);
}
DiscoveryNode[] nodes = new DiscoveryNode[nodeIds.size()];
int currentNode = 0;

View File

@ -57,13 +57,13 @@ public class CreateSnapshotResponse extends ActionResponse implements ToXContent
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
snapshotInfo = SnapshotInfo.readOptionalSnapshotInfo(in);
snapshotInfo = in.readOptionalWriteable(SnapshotInfo::new);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeOptionalStreamable(snapshotInfo);
out.writeOptionalWriteable(snapshotInfo);
}
/**
@ -81,18 +81,13 @@ public class CreateSnapshotResponse extends ActionResponse implements ToXContent
return snapshotInfo.status();
}
static final class Fields {
static final String SNAPSHOT = "snapshot";
static final String ACCEPTED = "accepted";
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
if (snapshotInfo != null) {
builder.field(Fields.SNAPSHOT);
builder.field("snapshot");
snapshotInfo.toXContent(builder, params);
} else {
builder.field(Fields.ACCEPTED, true);
builder.field("accepted", true);
}
return builder;
}
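
This is part of the wider Streamable-to-Writeable migration: instead of a no-arg constructor plus readFrom, the object is reconstructed through a constructor taking a StreamInput, which also lets fields be final. A minimal sketch of the pattern with a hypothetical class:

public class MyInfo implements Writeable {
    private final String name; // final works because reading happens in a constructor

    public MyInfo(String name) {
        this.name = name;
    }

    public MyInfo(StreamInput in) throws IOException {
        this.name = in.readString();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(name);
    }
}

Optional values then pair up as out.writeOptionalWriteable(myInfo) on the write side and in.readOptionalWriteable(MyInfo::new) on the read side, exactly as in the hunk above.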

View File

@ -60,7 +60,7 @@ public class GetSnapshotsResponse extends ActionResponse implements ToXContent {
int size = in.readVInt();
List<SnapshotInfo> builder = new ArrayList<>();
for (int i = 0; i < size; i++) {
builder.add(SnapshotInfo.readSnapshotInfo(in));
builder.add(new SnapshotInfo(in));
}
snapshots = Collections.unmodifiableList(builder);
}
@ -74,13 +74,9 @@ public class GetSnapshotsResponse extends ActionResponse implements ToXContent {
}
}
static final class Fields {
static final String SNAPSHOTS = "snapshots";
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
builder.startArray("snapshots");
for (SnapshotInfo snapshotInfo : snapshots) {
snapshotInfo.toXContent(builder, params);
}

View File

@ -31,7 +31,6 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
@ -77,18 +76,12 @@ public class TransportGetSnapshotsAction extends TransportMasterNodeAction<GetSn
try {
List<SnapshotInfo> snapshotInfoBuilder = new ArrayList<>();
if (isAllSnapshots(request.snapshots())) {
List<Snapshot> snapshots = snapshotsService.snapshots(request.repository(), request.ignoreUnavailable());
for (Snapshot snapshot : snapshots) {
snapshotInfoBuilder.add(new SnapshotInfo(snapshot));
}
snapshotInfoBuilder.addAll(snapshotsService.snapshots(request.repository(), request.ignoreUnavailable()));
} else if (isCurrentSnapshots(request.snapshots())) {
List<Snapshot> snapshots = snapshotsService.currentSnapshots(request.repository());
for (Snapshot snapshot : snapshots) {
snapshotInfoBuilder.add(new SnapshotInfo(snapshot));
}
snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(request.repository()));
} else {
Set<String> snapshotsToGet = new LinkedHashSet<>(); // to keep insertion order
List<Snapshot> snapshots = null;
List<SnapshotInfo> snapshots = null;
for (String snapshotOrPattern : request.snapshots()) {
if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) {
snapshotsToGet.add(snapshotOrPattern);
@ -96,7 +89,7 @@ public class TransportGetSnapshotsAction extends TransportMasterNodeAction<GetSn
if (snapshots == null) { // lazily load snapshots
snapshots = snapshotsService.snapshots(request.repository(), request.ignoreUnavailable());
}
for (Snapshot snapshot : snapshots) {
for (SnapshotInfo snapshot : snapshots) {
if (Regex.simpleMatch(snapshotOrPattern, snapshot.name())) {
snapshotsToGet.add(snapshot.name());
}
@ -105,7 +98,7 @@ public class TransportGetSnapshotsAction extends TransportMasterNodeAction<GetSn
}
for (String snapshot : snapshotsToGet) {
SnapshotId snapshotId = new SnapshotId(request.repository(), snapshot);
snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));
snapshotInfoBuilder.add(snapshotsService.snapshot(snapshotId));
}
}
listener.onResponse(new GetSnapshotsResponse(Collections.unmodifiableList(snapshotInfoBuilder)));

View File

@ -73,18 +73,13 @@ public class RestoreSnapshotResponse extends ActionResponse implements ToXConten
return restoreInfo.status();
}
static final class Fields {
static final String SNAPSHOT = "snapshot";
static final String ACCEPTED = "accepted";
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
if (restoreInfo != null) {
builder.field(Fields.SNAPSHOT);
builder.field("snapshot");
restoreInfo.toXContent(builder, params);
} else {
builder.field(Fields.ACCEPTED, true);
builder.field("accepted", true);
}
return builder;
}

View File

@ -73,13 +73,9 @@ public class SnapshotsStatusResponse extends ActionResponse implements ToXConten
}
}
static final class Fields {
static final String SNAPSHOTS = "snapshots";
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
builder.startArray("snapshots");
for (SnapshotStatus snapshot : snapshots) {
snapshot.toXContent(builder, params);
}

View File

@ -43,18 +43,20 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReferenceArray;
import static java.util.Collections.unmodifiableMap;
/**
* Transport client that collects snapshot shard statuses from data nodes
*/
public class TransportNodesSnapshotsStatus extends TransportNodesAction<TransportNodesSnapshotsStatus.Request, TransportNodesSnapshotsStatus.NodesSnapshotStatus, TransportNodesSnapshotsStatus.NodeRequest, TransportNodesSnapshotsStatus.NodeSnapshotStatus> {
public class TransportNodesSnapshotsStatus extends TransportNodesAction<TransportNodesSnapshotsStatus.Request,
TransportNodesSnapshotsStatus.NodesSnapshotStatus,
TransportNodesSnapshotsStatus.NodeRequest,
TransportNodesSnapshotsStatus.NodeSnapshotStatus> {
public static final String ACTION_NAME = SnapshotsStatusAction.NAME + "[nodes]";
@ -66,7 +68,7 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
SnapshotShardsService snapshotShardsService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ACTION_NAME, clusterName, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,
Request::new, NodeRequest::new, ThreadPool.Names.GENERIC);
Request::new, NodeRequest::new, ThreadPool.Names.GENERIC, NodeSnapshotStatus.class);
this.snapshotShardsService = snapshotShardsService;
}
@ -86,21 +88,8 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
}
@Override
protected NodesSnapshotStatus newResponse(Request request, AtomicReferenceArray responses) {
final List<NodeSnapshotStatus> nodesList = new ArrayList<>();
final List<FailedNodeException> failures = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeSnapshotStatus) { // will also filter out null response for unallocated ones
nodesList.add((NodeSnapshotStatus) resp);
} else if (resp instanceof FailedNodeException) {
failures.add((FailedNodeException) resp);
} else {
logger.warn("unknown response type [{}], expected NodeSnapshotStatus or FailedNodeException", resp);
}
}
return new NodesSnapshotStatus(clusterName, nodesList.toArray(new NodeSnapshotStatus[nodesList.size()]),
failures.toArray(new FailedNodeException[failures.size()]));
protected NodesSnapshotStatus newResponse(Request request, List<NodeSnapshotStatus> responses, List<FailedNodeException> failures) {
return new NodesSnapshotStatus(clusterName, responses, failures);
}
@Override
@ -169,75 +158,47 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
public static class NodesSnapshotStatus extends BaseNodesResponse<NodeSnapshotStatus> {
private FailedNodeException[] failures;
NodesSnapshotStatus() {
}
public NodesSnapshotStatus(ClusterName clusterName, NodeSnapshotStatus[] nodes, FailedNodeException[] failures) {
super(clusterName, nodes);
this.failures = failures;
public NodesSnapshotStatus(ClusterName clusterName, List<NodeSnapshotStatus> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public FailedNodeException[] failures() {
return failures;
protected List<NodeSnapshotStatus> readNodesFrom(StreamInput in) throws IOException {
return in.readStreamableList(NodeSnapshotStatus::new);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeSnapshotStatus[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = new NodeSnapshotStatus();
nodes[i].readFrom(in);
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeSnapshotStatus response : nodes) {
response.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeSnapshotStatus> nodes) throws IOException {
out.writeStreamableList(nodes);
}
}
public static class NodeRequest extends BaseNodeRequest {
private SnapshotId[] snapshotIds;
private List<SnapshotId> snapshotIds;
public NodeRequest() {
}
NodeRequest(String nodeId, TransportNodesSnapshotsStatus.Request request) {
super(nodeId);
snapshotIds = request.snapshotIds;
snapshotIds = Arrays.asList(request.snapshotIds);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
int n = in.readVInt();
snapshotIds = new SnapshotId[n];
for (int i = 0; i < n; i++) {
snapshotIds[i] = SnapshotId.readSnapshotId(in);
}
snapshotIds = in.readList(SnapshotId::readSnapshotId);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
if (snapshotIds != null) {
out.writeVInt(snapshotIds.length);
for (int i = 0; i < snapshotIds.length; i++) {
snapshotIds[i].writeTo(out);
}
} else {
out.writeVInt(0);
}
out.writeStreamableList(snapshotIds);
}
}
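
The two sides must stay symmetric: writeStreamableList writes a vInt length followed by each element, and readList replays them through the per-element reader in the same order. Note that the old null guard is gone, so snapshotIds is now assumed non-null; the constructor guarantees that via Arrays.asList. The pairing, in short:

out.writeStreamableList(snapshotIds);                  // vInt length, then each element
snapshotIds = in.readList(SnapshotId::readSnapshotId); // must mirror the write order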

View File

@ -36,7 +36,7 @@ import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -202,7 +202,7 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
// This is a snapshot that is currently running - skipping
continue;
}
Snapshot snapshot = snapshotsService.snapshot(snapshotId);
SnapshotInfo snapshot = snapshotsService.snapshot(snapshotId);
List<SnapshotIndexShardStatus> shardStatusBuilder = new ArrayList<>();
if (snapshot.state().completed()) {
Map<ShardId, IndexShardSnapshotStatus> shardStatues = snapshotsService.snapshotShards(snapshotId);

View File

@ -47,7 +47,7 @@ public class TransportClusterStateAction extends TransportMasterNodeReadAction<C
@Inject
public TransportClusterStateAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
ClusterName clusterName, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ClusterStateAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterStateRequest::new);
super(settings, ClusterStateAction.NAME, false, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterStateRequest::new);
this.clusterName = clusterName;
}

View File

@ -22,22 +22,19 @@ package org.elasticsearch.action.admin.cluster.stats;
import com.carrotsearch.hppc.ObjectObjectHashMap;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.action.admin.indices.stats.CommonStats;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.cache.query.QueryCacheStats;
import org.elasticsearch.index.engine.SegmentsStats;
import org.elasticsearch.index.fielddata.FieldDataStats;
import org.elasticsearch.index.percolator.PercolatorQueryCacheStats;
import org.elasticsearch.index.shard.DocsStats;
import org.elasticsearch.index.store.StoreStats;
import org.elasticsearch.search.suggest.completion.CompletionStats;
import java.io.IOException;
import java.util.List;
public class ClusterStatsIndices implements ToXContent, Streamable {
public class ClusterStatsIndices implements ToXContent {
private int indexCount;
private ShardStats shards;
@ -47,12 +44,8 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
private QueryCacheStats queryCache;
private CompletionStats completion;
private SegmentsStats segments;
private PercolatorQueryCacheStats percolatorCache;
private ClusterStatsIndices() {
}
public ClusterStatsIndices(ClusterStatsNodeResponse[] nodeResponses) {
public ClusterStatsIndices(List<ClusterStatsNodeResponse> nodeResponses) {
ObjectObjectHashMap<String, ShardStats> countsPerIndex = new ObjectObjectHashMap<>();
this.docs = new DocsStats();
@ -61,7 +54,6 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
this.queryCache = new QueryCacheStats();
this.completion = new CompletionStats();
this.segments = new SegmentsStats();
this.percolatorCache = new PercolatorQueryCacheStats();
for (ClusterStatsNodeResponse r : nodeResponses) {
for (org.elasticsearch.action.admin.indices.stats.ShardStats shardStats : r.shardsStats()) {
@ -84,7 +76,6 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
queryCache.add(shardCommonStats.queryCache);
completion.add(shardCommonStats.completion);
segments.add(shardCommonStats.segments);
percolatorCache.add(shardCommonStats.percolatorCache);
}
}
@ -127,42 +118,6 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
return segments;
}
public PercolatorQueryCacheStats getPercolatorCache() {
return percolatorCache;
}
@Override
public void readFrom(StreamInput in) throws IOException {
indexCount = in.readVInt();
shards = ShardStats.readShardStats(in);
docs = DocsStats.readDocStats(in);
store = StoreStats.readStoreStats(in);
fieldData = FieldDataStats.readFieldDataStats(in);
queryCache = QueryCacheStats.readQueryCacheStats(in);
completion = CompletionStats.readCompletionStats(in);
segments = SegmentsStats.readSegmentsStats(in);
percolatorCache = PercolatorQueryCacheStats.readPercolateStats(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(indexCount);
shards.writeTo(out);
docs.writeTo(out);
store.writeTo(out);
fieldData.writeTo(out);
queryCache.writeTo(out);
completion.writeTo(out);
segments.writeTo(out);
percolatorCache.writeTo(out);
}
public static ClusterStatsIndices readIndicesStats(StreamInput in) throws IOException {
ClusterStatsIndices indicesStats = new ClusterStatsIndices();
indicesStats.readFrom(in);
return indicesStats;
}
static final class Fields {
static final String COUNT = "count";
}
@ -177,11 +132,10 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
queryCache.toXContent(builder, params);
completion.toXContent(builder, params);
segments.toXContent(builder, params);
percolatorCache.toXContent(builder, params);
return builder;
}
public static class ShardStats implements ToXContent, Streamable {
public static class ShardStats implements ToXContent {
int indices;
int total;
@ -326,40 +280,6 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
}
}
public static ShardStats readShardStats(StreamInput in) throws IOException {
ShardStats c = new ShardStats();
c.readFrom(in);
return c;
}
@Override
public void readFrom(StreamInput in) throws IOException {
indices = in.readVInt();
total = in.readVInt();
primaries = in.readVInt();
minIndexShards = in.readVInt();
maxIndexShards = in.readVInt();
minIndexPrimaryShards = in.readVInt();
maxIndexPrimaryShards = in.readVInt();
minIndexReplication = in.readDouble();
totalIndexReplication = in.readDouble();
maxIndexReplication = in.readDouble();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(indices);
out.writeVInt(total);
out.writeVInt(primaries);
out.writeVInt(minIndexShards);
out.writeVInt(maxIndexShards);
out.writeVInt(minIndexPrimaryShards);
out.writeVInt(maxIndexPrimaryShards);
out.writeDouble(minIndexReplication);
out.writeDouble(totalIndexReplication);
out.writeDouble(maxIndexReplication);
}
static final class Fields {
static final String SHARDS = "shards";
static final String TOTAL = "total";

View File

@ -26,9 +26,6 @@ import org.elasticsearch.Version;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.ByteSizeValue;
@ -48,7 +45,7 @@ import java.util.List;
import java.util.Map;
import java.util.Set;
public class ClusterStatsNodes implements ToXContent, Writeable {
public class ClusterStatsNodes implements ToXContent {
private final Counts counts;
private final Set<Version> versions;
@ -58,33 +55,12 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
private final FsInfo.Path fs;
private final Set<PluginInfo> plugins;
ClusterStatsNodes(StreamInput in) throws IOException {
this.counts = new Counts(in);
int size = in.readVInt();
this.versions = new HashSet<>(size);
for (int i = 0; i < size; i++) {
this.versions.add(Version.readVersion(in));
}
this.os = new OsStats(in);
this.process = new ProcessStats(in);
this.jvm = new JvmStats(in);
this.fs = new FsInfo.Path(in);
size = in.readVInt();
this.plugins = new HashSet<>(size);
for (int i = 0; i < size; i++) {
this.plugins.add(PluginInfo.readFromStream(in));
}
}
ClusterStatsNodes(ClusterStatsNodeResponse[] nodeResponses) {
ClusterStatsNodes(List<ClusterStatsNodeResponse> nodeResponses) {
this.versions = new HashSet<>();
this.fs = new FsInfo.Path();
this.plugins = new HashSet<>();
Set<InetAddress> seenAddresses = new HashSet<>(nodeResponses.length);
Set<InetAddress> seenAddresses = new HashSet<>(nodeResponses.size());
List<NodeInfo> nodeInfos = new ArrayList<>();
List<NodeStats> nodeStats = new ArrayList<>();
for (ClusterStatsNodeResponse nodeResponse : nodeResponses) {
@ -140,21 +116,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
return plugins;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
counts.writeTo(out);
out.writeVInt(versions.size());
for (Version v : versions) Version.writeVersion(v, out);
os.writeTo(out);
process.writeTo(out);
jvm.writeTo(out);
fs.writeTo(out);
out.writeVInt(plugins.size());
for (PluginInfo p : plugins) {
p.writeTo(out);
}
}
static final class Fields {
static final String COUNT = "count";
static final String VERSIONS = "versions";
@ -200,18 +161,12 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
return builder;
}
public static class Counts implements Writeable, ToXContent {
public static class Counts implements ToXContent {
static final String COORDINATING_ONLY = "coordinating_only";
private final int total;
private final Map<String, Integer> roles;
@SuppressWarnings("unchecked")
private Counts(StreamInput in) throws IOException {
this.total = in.readVInt();
this.roles = (Map<String, Integer>)in.readGenericValue();
}
private Counts(List<NodeInfo> nodeInfos) {
this.roles = new HashMap<>();
for (DiscoveryNode.Role role : DiscoveryNode.Role.values()) {
@ -243,12 +198,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
return roles;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(total);
out.writeGenericValue(roles);
}
static final class Fields {
static final String TOTAL = "total";
}
@ -263,7 +212,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
}
}
public static class OsStats implements ToXContent, Writeable {
public static class OsStats implements ToXContent {
final int availableProcessors;
final int allocatedProcessors;
final ObjectIntHashMap<String> names;
@ -287,30 +236,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
this.allocatedProcessors = allocatedProcessors;
}
/**
* Read from a stream.
*/
private OsStats(StreamInput in) throws IOException {
this.availableProcessors = in.readVInt();
this.allocatedProcessors = in.readVInt();
int size = in.readVInt();
this.names = new ObjectIntHashMap<>();
for (int i = 0; i < size; i++) {
names.addTo(in.readString(), in.readVInt());
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(availableProcessors);
out.writeVInt(allocatedProcessors);
out.writeVInt(names.size());
for (ObjectIntCursor<String> name : names) {
out.writeString(name.key);
out.writeVInt(name.value);
}
}
public int getAvailableProcessors() {
return availableProcessors;
}
@ -343,7 +268,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
}
}
public static class ProcessStats implements ToXContent, Writeable {
public static class ProcessStats implements ToXContent {
final int count;
final int cpuPercent;
@ -384,27 +309,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
this.maxOpenFileDescriptors = maxOpenFileDescriptors;
}
/**
* Read from a stream.
*/
private ProcessStats(StreamInput in) throws IOException {
this.count = in.readVInt();
this.cpuPercent = in.readVInt();
this.totalOpenFileDescriptors = in.readVLong();
this.minOpenFileDescriptors = in.readLong();
this.maxOpenFileDescriptors = in.readLong();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(count);
out.writeVInt(cpuPercent);
out.writeVLong(totalOpenFileDescriptors);
out.writeLong(minOpenFileDescriptors);
out.writeLong(maxOpenFileDescriptors);
}
/**
* Cpu usage in percentages - 100 is 1 core.
*/
@ -456,7 +360,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
}
}
public static class JvmStats implements Writeable, ToXContent {
public static class JvmStats implements ToXContent {
private final ObjectIntHashMap<JvmVersion> versions;
private final long threads;
@ -497,34 +401,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
this.heapMax = heapMax;
}
/**
* Read from a stream.
*/
private JvmStats(StreamInput in) throws IOException {
int size = in.readVInt();
this.versions = new ObjectIntHashMap<>(size);
for (int i = 0; i < size; i++) {
this.versions.addTo(new JvmVersion(in), in.readVInt());
}
this.threads = in.readVLong();
this.maxUptime = in.readVLong();
this.heapUsed = in.readVLong();
this.heapMax = in.readVLong();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(versions.size());
for (ObjectIntCursor<JvmVersion> v : versions) {
v.key.writeTo(out);
out.writeVInt(v.value);
}
out.writeVLong(threads);
out.writeVLong(maxUptime);
out.writeVLong(heapUsed);
out.writeVLong(heapMax);
}
public ObjectIntHashMap<JvmVersion> getVersions() {
return versions;
}
@ -598,7 +474,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
}
}
public static class JvmVersion implements Writeable {
public static class JvmVersion {
String version;
String vmName;
String vmVersion;
@ -611,27 +487,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable {
vmVendor = jvmInfo.getVmVendor();
}
/**
* Read from a stream.
*/
JvmVersion(StreamInput in) throws IOException {
version = in.readString();
vmName = in.readString();
vmVersion = in.readString();
vmVendor = in.readString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(version);
out.writeString(vmName);
out.writeString(vmVersion);
out.writeString(vmVendor);
}
JvmVersion() {
}
@Override
public boolean equals(Object o) {
if (this == o) {

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
@ -29,9 +30,8 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import java.util.Locale;
import java.util.Map;
/**
*
@ -48,8 +48,9 @@ public class ClusterStatsResponse extends BaseNodesResponse<ClusterStatsNodeResp
ClusterStatsResponse() {
}
public ClusterStatsResponse(long timestamp, ClusterName clusterName, String clusterUUID, ClusterStatsNodeResponse[] nodes) {
super(clusterName, null);
public ClusterStatsResponse(long timestamp, ClusterName clusterName, String clusterUUID,
List<ClusterStatsNodeResponse> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
this.timestamp = timestamp;
this.clusterUUID = clusterUUID;
nodesStats = new ClusterStatsNodes(nodes);
@ -79,77 +80,53 @@ public class ClusterStatsResponse extends BaseNodesResponse<ClusterStatsNodeResp
return indicesStats;
}
@Override
public ClusterStatsNodeResponse[] getNodes() {
throw new UnsupportedOperationException();
}
@Override
public Map<String, ClusterStatsNodeResponse> getNodesMap() {
throw new UnsupportedOperationException();
}
@Override
public ClusterStatsNodeResponse getAt(int position) {
throw new UnsupportedOperationException();
}
@Override
public Iterator<ClusterStatsNodeResponse> iterator() {
throw new UnsupportedOperationException();
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
timestamp = in.readVLong();
status = null;
if (in.readBoolean()) {
// it may be that the master switched on us while doing the operation. In this case the status may be null.
status = ClusterHealthStatus.fromValue(in.readByte());
}
clusterUUID = in.readString();
nodesStats = new ClusterStatsNodes(in);
indicesStats = ClusterStatsIndices.readIndicesStats(in);
// it may be that the master switched on us while doing the operation. In this case the status may be null.
status = in.readOptionalWriteable(ClusterHealthStatus::readFrom);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVLong(timestamp);
if (status == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
out.writeByte(status.value());
}
out.writeString(clusterUUID);
nodesStats.writeTo(out);
indicesStats.writeTo(out);
out.writeOptionalWriteable(status);
}
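The nullable cluster health status no longer needs a hand-written presence flag; the optional-writeable helpers encode it. A minimal before/after sketch of that pattern:

// before: explicit flag plus raw byte
if (status == null) {
    out.writeBoolean(false);
} else {
    out.writeBoolean(true);
    out.writeByte(status.value());
}

// after: the helpers write and read the presence flag themselves
out.writeOptionalWriteable(status);                               // null-safe write
status = in.readOptionalWriteable(ClusterHealthStatus::readFrom); // null when absent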
static final class Fields {
static final String NODES = "nodes";
static final String INDICES = "indices";
static final String UUID = "uuid";
static final String CLUSTER_NAME = "cluster_name";
static final String STATUS = "status";
@Override
protected List<ClusterStatsNodeResponse> readNodesFrom(StreamInput in) throws IOException {
List<ClusterStatsNodeResponse> nodes = in.readList(ClusterStatsNodeResponse::readNodeResponse);
// built from nodes rather than from the stream directly
nodesStats = new ClusterStatsNodes(nodes);
indicesStats = new ClusterStatsIndices(nodes);
return nodes;
}
@Override
protected void writeNodesTo(StreamOutput out, List<ClusterStatsNodeResponse> nodes) throws IOException {
// nodeStats and indicesStats are rebuilt from nodes
out.writeStreamableList(nodes);
}
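A note on the design: nodesStats and indicesStats are now derived state and never hit the wire; only the raw node responses are serialized and both aggregates are recomputed on read, so reader and writer cannot disagree about an aggregate encoding. Sketched generically (Derived and NodeResp are hypothetical names):

private Derived derived;   // never serialized

@Override
protected List<NodeResp> readNodesFrom(StreamInput in) throws IOException {
    List<NodeResp> nodes = in.readList(NodeResp::readNodeResponse);
    derived = new Derived(nodes);       // recomputed on every deserialization
    return nodes;
}

@Override
protected void writeNodesTo(StreamOutput out, List<NodeResp> nodes) throws IOException {
    out.writeStreamableList(nodes);     // only the raw responses travel
}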
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("timestamp", getTimestamp());
builder.field(Fields.CLUSTER_NAME, getClusterName().value());
if (params.paramAsBoolean("output_uuid", false)) {
builder.field(Fields.UUID, clusterUUID);
builder.field("uuid", clusterUUID);
}
if (status != null) {
builder.field(Fields.STATUS, status.name().toLowerCase(Locale.ROOT));
builder.field("status", status.name().toLowerCase(Locale.ROOT));
}
builder.startObject(Fields.INDICES);
builder.startObject("indices");
indicesStats.toXContent(builder, params);
builder.endObject();
builder.startObject(Fields.NODES);
builder.startObject("nodes");
nodesStats.toXContent(builder, params);
builder.endObject();
return builder;

View File

@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;
import org.elasticsearch.action.admin.indices.stats.CommonStats;
@ -46,7 +47,6 @@ import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
@ -55,8 +55,7 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
TransportClusterStatsAction.ClusterStatsNodeRequest, ClusterStatsNodeResponse> {
private static final CommonStatsFlags SHARD_STATS_FLAGS = new CommonStatsFlags(CommonStatsFlags.Flag.Docs, CommonStatsFlags.Flag.Store,
CommonStatsFlags.Flag.FieldData, CommonStatsFlags.Flag.QueryCache, CommonStatsFlags.Flag.Completion, CommonStatsFlags.Flag.Segments,
CommonStatsFlags.Flag.PercolatorCache);
CommonStatsFlags.Flag.FieldData, CommonStatsFlags.Flag.QueryCache, CommonStatsFlags.Flag.Completion, CommonStatsFlags.Flag.Segments);
private final NodeService nodeService;
private final IndicesService indicesService;
@ -68,22 +67,17 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
NodeService nodeService, IndicesService indicesService,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ClusterStatsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ClusterStatsRequest::new, ClusterStatsNodeRequest::new, ThreadPool.Names.MANAGEMENT);
indexNameExpressionResolver, ClusterStatsRequest::new, ClusterStatsNodeRequest::new, ThreadPool.Names.MANAGEMENT,
ClusterStatsNodeResponse.class);
this.nodeService = nodeService;
this.indicesService = indicesService;
}
@Override
protected ClusterStatsResponse newResponse(ClusterStatsRequest clusterStatsRequest, AtomicReferenceArray responses) {
final List<ClusterStatsNodeResponse> nodeStats = new ArrayList<>(responses.length());
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof ClusterStatsNodeResponse) {
nodeStats.add((ClusterStatsNodeResponse) resp);
}
}
return new ClusterStatsResponse(System.currentTimeMillis(), clusterName,
clusterService.state().metaData().clusterUUID(), nodeStats.toArray(new ClusterStatsNodeResponse[nodeStats.size()]));
protected ClusterStatsResponse newResponse(ClusterStatsRequest request,
List<ClusterStatsNodeResponse> responses, List<FailedNodeException> failures) {
return new ClusterStatsResponse(System.currentTimeMillis(), clusterName, clusterService.state().metaData().clusterUUID(),
responses, failures);
}
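The newResponse refactor removes the untyped AtomicReferenceArray: the TransportNodesAction base class now hands subclasses pre-separated, typed lists, so the instanceof filtering disappears. A minimal sketch of the new override shape, with hypothetical request/response names:

@Override
protected MyResponse newResponse(MyRequest request,
                                 List<MyNodeResponse> responses, List<FailedNodeException> failures) {
    // responses contains only successful node answers; failures carries the rest
    return new MyResponse(clusterName, responses, failures);
}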
@Override
@ -105,7 +99,7 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
for (IndexShard indexShard : indexService) {
if (indexShard.routingEntry() != null && indexShard.routingEntry().active()) {
// only report on fully started shards
shardsStats.add(new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesService.getIndicesQueryCache(), indexService.cache().getPercolatorQueryCache(), indexShard, SHARD_STATS_FLAGS), indexShard.commitStats()));
shardsStats.add(new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesService.getIndicesQueryCache(), indexShard, SHARD_STATS_FLAGS), indexShard.commitStats()));
}
}
}

View File

@ -54,7 +54,7 @@ public class TransportClearIndicesCacheAction extends TransportBroadcastByNodeAc
TransportService transportService, IndicesService indicesService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ClearIndicesCacheAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,
ClearIndicesCacheRequest::new, ThreadPool.Names.MANAGEMENT);
ClearIndicesCacheRequest::new, ThreadPool.Names.MANAGEMENT, false);
this.indicesService = indicesService;
}

View File

@ -24,6 +24,7 @@ import org.elasticsearch.cluster.ack.ClusterStateUpdateRequest;
import org.elasticsearch.cluster.block.ClusterBlock;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.Index;
import org.elasticsearch.transport.TransportMessage;
import java.util.HashMap;
@ -40,6 +41,7 @@ public class CreateIndexClusterStateUpdateRequest extends ClusterStateUpdateRequ
private final String cause;
private final String index;
private final boolean updateAllTypes;
private Index shrinkFrom;
private IndexMetaData.State state = IndexMetaData.State.OPEN;
@ -54,7 +56,7 @@ public class CreateIndexClusterStateUpdateRequest extends ClusterStateUpdateRequ
private final Set<ClusterBlock> blocks = new HashSet<>();
CreateIndexClusterStateUpdateRequest(TransportMessage originalMessage, String cause, String index, boolean updateAllTypes) {
public CreateIndexClusterStateUpdateRequest(TransportMessage originalMessage, String cause, String index, boolean updateAllTypes) {
this.originalMessage = originalMessage;
this.cause = cause;
this.index = index;
@ -91,6 +93,11 @@ public class CreateIndexClusterStateUpdateRequest extends ClusterStateUpdateRequ
return this;
}
public CreateIndexClusterStateUpdateRequest shrinkFrom(Index shrinkFrom) {
this.shrinkFrom = shrinkFrom;
return this;
}
public TransportMessage originalMessage() {
return originalMessage;
}
@ -127,6 +134,10 @@ public class CreateIndexClusterStateUpdateRequest extends ClusterStateUpdateRequ
return blocks;
}
public Index shrinkFrom() {
return shrinkFrom;
}
/** True if all fields that span multiple types should be updated, false otherwise */
public boolean updateAllTypes() {
return updateAllTypes;

View File

@ -30,10 +30,10 @@ import java.io.IOException;
*/
public class CreateIndexResponse extends AcknowledgedResponse {
CreateIndexResponse() {
protected CreateIndexResponse() {
}
CreateIndexResponse(boolean acknowledged) {
protected CreateIndexResponse(boolean acknowledged) {
super(acknowledged);
}

View File

@ -0,0 +1,31 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.indices.delete;
import org.elasticsearch.cluster.ack.IndicesClusterStateUpdateRequest;
/**
* Cluster state update request that allows to delete one or more indices
*/
public class DeleteIndexClusterStateUpdateRequest extends IndicesClusterStateUpdateRequest<DeleteIndexClusterStateUpdateRequest> {
DeleteIndexClusterStateUpdateRequest() {
}
}

View File

@ -23,26 +23,22 @@ import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.action.support.master.MasterNodeRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.CollectionUtils;
import java.io.IOException;
import static org.elasticsearch.action.ValidateActions.addValidationError;
import static org.elasticsearch.common.unit.TimeValue.readTimeValue;
/**
* A request to delete an index. Best created with {@link org.elasticsearch.client.Requests#deleteIndexRequest(String)}.
*/
public class DeleteIndexRequest extends MasterNodeRequest<DeleteIndexRequest> implements IndicesRequest.Replaceable {
public class DeleteIndexRequest extends AcknowledgedRequest<DeleteIndexRequest> implements IndicesRequest.Replaceable {
private String[] indices;
// Delete index should work by default on both open and closed indices.
private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, true, true, true);
private TimeValue timeout = AcknowledgedRequest.DEFAULT_ACK_TIMEOUT;
public DeleteIndexRequest() {
}
@ -98,37 +94,11 @@ public class DeleteIndexRequest extends MasterNodeRequest<DeleteIndexRequest> im
return indices;
}
/**
* Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults
* to <tt>10s</tt>.
*/
public TimeValue timeout() {
return timeout;
}
/**
* Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults
* to <tt>10s</tt>.
*/
public DeleteIndexRequest timeout(TimeValue timeout) {
this.timeout = timeout;
return this;
}
/**
* Timeout to wait for the index deletion to be acknowledged by current cluster nodes. Defaults
* to <tt>10s</tt>.
*/
public DeleteIndexRequest timeout(String timeout) {
return timeout(TimeValue.parseTimeValue(timeout, null, getClass().getSimpleName() + ".timeout"));
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
indices = in.readStringArray();
indicesOptions = IndicesOptions.readIndicesOptions(in);
timeout = readTimeValue(in);
}
@Override
@ -136,6 +106,5 @@ public class DeleteIndexRequest extends MasterNodeRequest<DeleteIndexRequest> im
super.writeTo(out);
out.writeStringArray(indices);
indicesOptions.writeIndicesOptions(out);
timeout.writeTo(out);
}
}
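With the move to AcknowledgedRequest, the three deleted timeout accessors become inherited behaviour. A hedged usage sketch; the DeleteIndexRequest(String) constructor is assumed from the elided part of the class:

DeleteIndexRequest request = new DeleteIndexRequest("my-index"); // constructor assumed, not shown in this hunk
request.timeout("30s");   // inherited from AcknowledgedRequest; defaults to 10s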

View File

@ -24,6 +24,7 @@ import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.DestructiveOperations;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
@ -85,15 +86,21 @@ public class TransportDeleteIndexAction extends TransportMasterNodeAction<Delete
listener.onResponse(new DeleteIndexResponse(true));
return;
}
deleteIndexService.deleteIndices(new MetaDataDeleteIndexService.Request(concreteIndices).timeout(request.timeout()).masterTimeout(request.masterNodeTimeout()), new MetaDataDeleteIndexService.Listener() {
DeleteIndexClusterStateUpdateRequest deleteRequest = new DeleteIndexClusterStateUpdateRequest()
.ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())
.indices(concreteIndices.toArray(new Index[concreteIndices.size()]));
deleteIndexService.deleteIndices(deleteRequest, new ActionListener<ClusterStateUpdateResponse>() {
@Override
public void onResponse(MetaDataDeleteIndexService.Response response) {
listener.onResponse(new DeleteIndexResponse(response.acknowledged()));
public void onResponse(ClusterStateUpdateResponse response) {
listener.onResponse(new DeleteIndexResponse(response.isAcknowledged()));
}
@Override
public void onFailure(Throwable t) {
logger.debug("failed to delete indices [{}]", t, concreteIndices);
listener.onFailure(t);
}
});

View File

@ -31,8 +31,6 @@ import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import static org.elasticsearch.cluster.routing.ShardRouting.readShardRoutingEntry;
public class ShardSegments implements Streamable, Iterable<Segment> {
private ShardRouting shardRouting;
@ -88,7 +86,7 @@ public class ShardSegments implements Streamable, Iterable<Segment> {
@Override
public void readFrom(StreamInput in) throws IOException {
shardRouting = readShardRoutingEntry(in);
shardRouting = new ShardRouting(in);
int size = in.readVInt();
if (size == 0) {
segments = Collections.emptyList();
@ -108,4 +106,4 @@ public class ShardSegments implements Streamable, Iterable<Segment> {
segment.writeTo(out);
}
}
}
}

View File

@ -53,6 +53,7 @@ import org.elasticsearch.transport.TransportService;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentLinkedQueue;
@ -152,7 +153,7 @@ public class TransportIndicesShardStoresAction extends TransportMasterNodeReadAc
}
@Override
protected synchronized void processAsyncFetch(ShardId shardId, NodeGatewayStartedShards[] responses, FailedNodeException[] failures) {
protected synchronized void processAsyncFetch(ShardId shardId, List<NodeGatewayStartedShards> responses, List<FailedNodeException> failures) {
fetchResponses.add(new Response(shardId, responses, failures));
if (expectedOps.countDown()) {
finish();
@ -220,10 +221,10 @@ public class TransportIndicesShardStoresAction extends TransportMasterNodeReadAc
public class Response {
private final ShardId shardId;
private final NodeGatewayStartedShards[] responses;
private final FailedNodeException[] failures;
private final List<NodeGatewayStartedShards> responses;
private final List<FailedNodeException> failures;
public Response(ShardId shardId, NodeGatewayStartedShards[] responses, FailedNodeException[] failures) {
public Response(ShardId shardId, List<NodeGatewayStartedShards> responses, List<FailedNodeException> failures) {
this.shardId = shardId;
this.responses = responses;
this.failures = failures;

View File

@ -0,0 +1,45 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.indices.shrink;
import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;
/**
*/
public class ShrinkAction extends Action<ShrinkRequest, ShrinkResponse, ShrinkRequestBuilder> {
public static final ShrinkAction INSTANCE = new ShrinkAction();
public static final String NAME = "indices:admin/shrink";
private ShrinkAction() {
super(NAME);
}
@Override
public ShrinkResponse newResponse() {
return new ShrinkResponse();
}
@Override
public ShrinkRequestBuilder newRequestBuilder(ElasticsearchClient client) {
return new ShrinkRequestBuilder(client, this);
}
}

View File

@ -0,0 +1,107 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.indices.shrink;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import java.util.Objects;
import static org.elasticsearch.action.ValidateActions.addValidationError;
/**
* Request class to shrink an index into a single shard
*/
public class ShrinkRequest extends AcknowledgedRequest<ShrinkRequest> implements IndicesRequest {
private CreateIndexRequest shrinkIndexRequest;
private String sourceIndex;
ShrinkRequest() {}
public ShrinkRequest(String targetIndex, String sourceIndex) {
this.shrinkIndexRequest = new CreateIndexRequest(targetIndex);
this.sourceIndex = sourceIndex;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = shrinkIndexRequest == null ? null : shrinkIndexRequest.validate();
if (sourceIndex == null) {
validationException = addValidationError("source index is missing", validationException);
}
if (shrinkIndexRequest == null) {
validationException = addValidationError("shrink index request is missing", validationException);
}
return validationException;
}
public void setSourceIndex(String index) {
this.sourceIndex = index;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
shrinkIndexRequest = new CreateIndexRequest();
shrinkIndexRequest.readFrom(in);
sourceIndex = in.readString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
shrinkIndexRequest.writeTo(out);
out.writeString(sourceIndex);
}
@Override
public String[] indices() {
return new String[] {sourceIndex};
}
@Override
public IndicesOptions indicesOptions() {
return IndicesOptions.lenientExpandOpen();
}
public void setShrinkIndex(CreateIndexRequest shrinkIndexRequest) {
this.shrinkIndexRequest = Objects.requireNonNull(shrinkIndexRequest, "shrink index request must not be null");
}
/**
* Returns the {@link CreateIndexRequest} for the shrink index
*/
public CreateIndexRequest getShrinkIndexRequest() {
return shrinkIndexRequest;
}
/**
* Returns the source index name
*/
public String getSourceIndex() {
return sourceIndex;
}
}

View File

@ -0,0 +1,47 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.indices.shrink;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.common.settings.Settings;
public class ShrinkRequestBuilder extends AcknowledgedRequestBuilder<ShrinkRequest, ShrinkResponse,
ShrinkRequestBuilder> {
public ShrinkRequestBuilder(ElasticsearchClient client, ShrinkAction action) {
super(client, action, new ShrinkRequest());
}
public ShrinkRequestBuilder setTargetIndex(CreateIndexRequest request) {
this.request.setShrinkIndex(request);
return this;
}
public ShrinkRequestBuilder setSourceIndex(String index) {
this.request.setSourceIndex(index);
return this;
}
public ShrinkRequestBuilder setSettings(Settings settings) {
this.request.getShrinkIndexRequest().settings(settings);
return this;
}
}
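Putting the new shrink classes together, a hedged client-side sketch; the index names and codec setting are illustrative, and client stands for any ElasticsearchClient:

ShrinkResponse response = new ShrinkRequestBuilder(client, ShrinkAction.INSTANCE)
        .setSourceIndex("logs")                                  // index to shrink
        .setTargetIndex(new CreateIndexRequest("logs-shrunk"))   // single-shard target
        .setSettings(Settings.builder().put("index.codec", "best_compression").build())
        .get();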

View File

@ -0,0 +1,31 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.indices.shrink;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
public final class ShrinkResponse extends CreateIndexResponse {
ShrinkResponse() {
}
ShrinkResponse(boolean acknowledged) {
super(acknowledged);
}
}

View File

@ -0,0 +1,162 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.indices.shrink;
import org.apache.lucene.index.IndexWriter;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.create.CreateIndexClusterStateUpdateRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;
import org.elasticsearch.cluster.routing.IndexRoutingTable;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.shard.DocsStats;
import org.elasticsearch.indices.IndexAlreadyExistsException;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Predicate;
/**
* Main class to initiate shrinking an index into a new index with a single shard
*/
public class TransportShrinkAction extends TransportMasterNodeAction<ShrinkRequest, ShrinkResponse> {
private final MetaDataCreateIndexService createIndexService;
private final Client client;
@Inject
public TransportShrinkAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, MetaDataCreateIndexService createIndexService,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, Client client) {
super(settings, ShrinkAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
ShrinkRequest::new);
this.createIndexService = createIndexService;
this.client = client;
}
@Override
protected String executor() {
// we go async right away
return ThreadPool.Names.SAME;
}
@Override
protected ShrinkResponse newResponse() {
return new ShrinkResponse();
}
@Override
protected ClusterBlockException checkBlock(ShrinkRequest request, ClusterState state) {
return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.getShrinkIndexRequest().index());
}
@Override
protected void masterOperation(final ShrinkRequest shrinkRequest, final ClusterState state,
final ActionListener<ShrinkResponse> listener) {
final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(shrinkRequest.getSourceIndex());
client.admin().indices().prepareStats(sourceIndex).clear().setDocs(true).execute(new ActionListener<IndicesStatsResponse>() {
@Override
public void onResponse(IndicesStatsResponse indicesStatsResponse) {
CreateIndexClusterStateUpdateRequest updateRequest = prepareCreateIndexRequest(shrinkRequest, state,
indicesStatsResponse.getTotal().getDocs(), indexNameExpressionResolver);
createIndexService.createIndex(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {
@Override
public void onResponse(ClusterStateUpdateResponse response) {
listener.onResponse(new ShrinkResponse(response.isAcknowledged()));
}
@Override
public void onFailure(Throwable t) {
if (t instanceof IndexAlreadyExistsException) {
logger.trace("[{}] failed to create shrink index", t, updateRequest.index());
} else {
logger.debug("[{}] failed to create shrink index", t, updateRequest.index());
}
listener.onFailure(t);
}
});
}
@Override
public void onFailure(Throwable e) {
listener.onFailure(e);
}
});
}
// static for unit-testing this method
static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final ShrinkRequest shrinkRequest, final ClusterState state,
final DocsStats docsStats, IndexNameExpressionResolver indexNameExpressionResolver) {
final String sourceIndex = indexNameExpressionResolver.resolveDateMathExpression(shrinkRequest.getSourceIndex());
final CreateIndexRequest targetIndex = shrinkRequest.getShrinkIndexRequest();
final String targetIndexName = indexNameExpressionResolver.resolveDateMathExpression(targetIndex.index());
final IndexMetaData metaData = state.metaData().index(sourceIndex);
final Settings targetIndexSettings = Settings.builder().put(targetIndex.settings())
.normalizePrefix(IndexMetaData.INDEX_SETTING_PREFIX).build();
long count = docsStats.getCount();
if (count >= IndexWriter.MAX_DOCS) {
throw new IllegalStateException("Can't merge index with more than [" + IndexWriter.MAX_DOCS
+ "] docs - too many documents");
}
targetIndex.cause("shrink_index");
targetIndex.settings(Settings.builder()
.put(targetIndexSettings)
// we can only shrink to 1 index so far!
.put("index.number_of_shards", 1)
);
return new CreateIndexClusterStateUpdateRequest(targetIndex,
"shrink_index", targetIndexName, true)
// mappings are updated on the node when merging in the shards; this prevents race conditions since all mappings must be
// applied once we took the snapshot. If somebody switches the index back to read/write and adds docs in the meantime, we
// would miss those mappings and everything would be corrupted and hard to debug
.ackTimeout(targetIndex.timeout())
.masterNodeTimeout(targetIndex.masterNodeTimeout())
.settings(targetIndex.settings())
.aliases(targetIndex.aliases())
.customs(targetIndex.customs())
.shrinkFrom(metaData.getIndex());
}
}

View File

@ -32,10 +32,8 @@ import org.elasticsearch.index.engine.SegmentsStats;
import org.elasticsearch.index.fielddata.FieldDataStats;
import org.elasticsearch.index.flush.FlushStats;
import org.elasticsearch.index.get.GetStats;
import org.elasticsearch.index.percolator.PercolatorQueryCache;
import org.elasticsearch.index.shard.IndexingStats;
import org.elasticsearch.index.merge.MergeStats;
import org.elasticsearch.index.percolator.PercolatorQueryCacheStats;
import org.elasticsearch.index.recovery.RecoveryStats;
import org.elasticsearch.index.refresh.RefreshStats;
import org.elasticsearch.index.search.stats.SearchStats;
@ -101,9 +99,6 @@ public class CommonStats implements Streamable, ToXContent {
case Segments:
segments = new SegmentsStats();
break;
case PercolatorCache:
percolatorCache = new PercolatorQueryCacheStats();
break;
case Translog:
translog = new TranslogStats();
break;
@ -123,8 +118,7 @@ public class CommonStats implements Streamable, ToXContent {
}
public CommonStats(IndicesQueryCache indicesQueryCache, PercolatorQueryCache percolatorQueryCache,
IndexShard indexShard, CommonStatsFlags flags) {
public CommonStats(IndicesQueryCache indicesQueryCache, IndexShard indexShard, CommonStatsFlags flags) {
CommonStatsFlags.Flag[] setFlags = flags.getFlags();
@ -169,9 +163,6 @@ public class CommonStats implements Streamable, ToXContent {
case Segments:
segments = indexShard.segmentStats(flags.includeSegmentFileSizes());
break;
case PercolatorCache:
percolatorCache = percolatorQueryCache.getStats(indexShard.shardId());
break;
case Translog:
translog = indexShard.translogStats();
break;
@ -223,9 +214,6 @@ public class CommonStats implements Streamable, ToXContent {
@Nullable
public FieldDataStats fieldData;
@Nullable
public PercolatorQueryCacheStats percolatorCache;
@Nullable
public CompletionStats completion;
@ -331,14 +319,6 @@ public class CommonStats implements Streamable, ToXContent {
} else {
fieldData.add(stats.getFieldData());
}
if (percolatorCache == null) {
if (stats.getPercolatorCache() != null) {
percolatorCache = new PercolatorQueryCacheStats();
percolatorCache.add(stats.getPercolatorCache());
}
} else {
percolatorCache.add(stats.getPercolatorCache());
}
if (completion == null) {
if (stats.getCompletion() != null) {
completion = new CompletionStats();
@ -436,11 +416,6 @@ public class CommonStats implements Streamable, ToXContent {
return this.fieldData;
}
@Nullable
public PercolatorQueryCacheStats getPercolatorCache() {
return percolatorCache;
}
@Nullable
public CompletionStats getCompletion() {
return completion;
@ -528,9 +503,6 @@ public class CommonStats implements Streamable, ToXContent {
if (in.readBoolean()) {
fieldData = FieldDataStats.readFieldDataStats(in);
}
if (in.readBoolean()) {
percolatorCache = PercolatorQueryCacheStats.readPercolateStats(in);
}
if (in.readBoolean()) {
completion = CompletionStats.readCompletionStats(in);
}
@ -610,12 +582,6 @@ public class CommonStats implements Streamable, ToXContent {
out.writeBoolean(true);
fieldData.writeTo(out);
}
if (percolatorCache == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
percolatorCache.writeTo(out);
}
if (completion == null) {
out.writeBoolean(false);
} else {
@ -669,9 +635,6 @@ public class CommonStats implements Streamable, ToXContent {
if (fieldData != null) {
fieldData.toXContent(builder, params);
}
if (percolatorCache != null) {
percolatorCache.toXContent(builder, params);
}
if (completion != null) {
completion.toXContent(builder, params);
}

View File

@ -240,7 +240,6 @@ public class CommonStatsFlags implements Streamable, Cloneable {
FieldData("fielddata"),
Docs("docs"),
Warmer("warmer"),
PercolatorCache("percolator_cache"),
Completion("completion"),
Segments("segments"),
Translog("translog"),

View File

@ -184,15 +184,6 @@ public class IndicesStatsRequest extends BroadcastRequest<IndicesStatsRequest> {
return flags.isSet(Flag.FieldData);
}
public IndicesStatsRequest percolate(boolean percolate) {
flags.set(Flag.PercolatorCache, percolate);
return this;
}
public boolean percolate() {
return flags.isSet(Flag.PercolatorCache);
}
public IndicesStatsRequest segments(boolean segments) {
flags.set(Flag.Segments, segments);
return this;

View File

@ -127,11 +127,6 @@ public class IndicesStatsRequestBuilder extends BroadcastOperationRequestBuilder
return this;
}
public IndicesStatsRequestBuilder setPercolate(boolean percolate) {
request.percolate(percolate);
return this;
}
public IndicesStatsRequestBuilder setSegments(boolean segments) {
request.segments(segments);
return this;

View File

@ -31,8 +31,6 @@ import org.elasticsearch.index.shard.ShardPath;
import java.io.IOException;
import static org.elasticsearch.cluster.routing.ShardRouting.readShardRoutingEntry;
/**
*/
public class ShardStats implements Streamable, ToXContent {
@ -91,7 +89,7 @@ public class ShardStats implements Streamable, ToXContent {
@Override
public void readFrom(StreamInput in) throws IOException {
shardRouting = readShardRoutingEntry(in);
shardRouting = new ShardRouting(in);
commonStats = CommonStats.readCommonStats(in);
commitStats = CommitStats.readOptionalCommitStatsFrom(in);
statePath = in.readString();

View File

@ -139,9 +139,6 @@ public class TransportIndicesStatsAction extends TransportBroadcastByNodeAction<
flags.set(CommonStatsFlags.Flag.FieldData);
flags.fieldDataFields(request.fieldDataFields());
}
if (request.percolate()) {
flags.set(CommonStatsFlags.Flag.PercolatorCache);
}
if (request.segments()) {
flags.set(CommonStatsFlags.Flag.Segments);
flags.includeSegmentFileSizes(request.includeSegmentFileSizes());
@ -163,6 +160,6 @@ public class TransportIndicesStatsAction extends TransportBroadcastByNodeAction<
flags.set(CommonStatsFlags.Flag.Recovery);
}
return new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesService.getIndicesQueryCache(), indexService.cache().getPercolatorQueryCache(), indexShard, flags), indexShard.commitStats());
return new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesService.getIndicesQueryCache(), indexShard, flags), indexShard.commitStats());
}
}

View File

@ -26,8 +26,6 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import static org.elasticsearch.cluster.routing.ShardRouting.readShardRoutingEntry;
public class ShardUpgradeStatus extends BroadcastShardResponse {
private ShardRouting shardRouting;
@ -75,7 +73,7 @@ public class ShardUpgradeStatus extends BroadcastShardResponse {
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
shardRouting = readShardRoutingEntry(in);
shardRouting = new ShardRouting(in);
totalBytes = in.readLong();
toUpgradeBytes = in.readLong();
toUpgradeBytesAncient = in.readLong();
@ -89,4 +87,4 @@ public class ShardUpgradeStatus extends BroadcastShardResponse {
out.writeLong(toUpgradeBytes);
out.writeLong(toUpgradeBytesAncient);
}
}
}

View File

@ -34,7 +34,7 @@ import java.io.IOException;
*/
public class ShardValidateQueryRequest extends BroadcastShardRequest {
private QueryBuilder<?> query;
private QueryBuilder query;
private String[] types = Strings.EMPTY_ARRAY;
private boolean explain;
private boolean rewrite;
@ -57,7 +57,7 @@ public class ShardValidateQueryRequest extends BroadcastShardRequest {
this.nowInMillis = request.nowInMillis;
}
public QueryBuilder<?> query() {
public QueryBuilder query() {
return query;
}

View File

@ -28,7 +28,6 @@ import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.DefaultShardOperationFailedException;
import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;
import org.elasticsearch.action.support.broadcast.TransportBroadcastAction;
import org.elasticsearch.cache.recycler.PageCacheRecycler;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
@ -43,7 +42,6 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.query.QueryShardContext;
import org.elasticsearch.index.query.QueryShardException;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.indices.IndicesService;
@ -73,8 +71,6 @@ public class TransportValidateQueryAction extends TransportBroadcastAction<Valid
private final ScriptService scriptService;
private final PageCacheRecycler pageCacheRecycler;
private final BigArrays bigArrays;
private final FetchPhase fetchPhase;
@ -82,13 +78,12 @@ public class TransportValidateQueryAction extends TransportBroadcastAction<Valid
@Inject
public TransportValidateQueryAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,
TransportService transportService, IndicesService indicesService, ScriptService scriptService,
PageCacheRecycler pageCacheRecycler, BigArrays bigArrays, ActionFilters actionFilters,
BigArrays bigArrays, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver, FetchPhase fetchPhase) {
super(settings, ValidateQueryAction.NAME, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ValidateQueryRequest::new, ShardValidateQueryRequest::new, ThreadPool.Names.SEARCH);
this.indicesService = indicesService;
this.scriptService = scriptService;
this.pageCacheRecycler = pageCacheRecycler;
this.bigArrays = bigArrays;
this.fetchPhase = fetchPhase;
}
@ -176,7 +171,7 @@ public class TransportValidateQueryAction extends TransportBroadcastAction<Valid
DefaultSearchContext searchContext = new DefaultSearchContext(0,
new ShardSearchLocalRequest(request.types(), request.nowInMillis(), request.filteringAliases()), null, searcher,
indexService, indexShard, scriptService, pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter(),
indexService, indexShard, scriptService, bigArrays, threadPool.estimatedTimeInMillisCounter(),
parseFieldMatcher, SearchService.NO_TIMEOUT, fetchPhase);
SearchContext.setCurrent(searchContext);
try {

View File

@ -39,7 +39,7 @@ import java.util.Arrays;
*/
public class ValidateQueryRequest extends BroadcastRequest<ValidateQueryRequest> {
private QueryBuilder<?> query = new MatchAllQueryBuilder();
private QueryBuilder query = new MatchAllQueryBuilder();
private boolean explain;
private boolean rewrite;
@ -73,11 +73,11 @@ public class ValidateQueryRequest extends BroadcastRequest<ValidateQueryRequest>
/**
* The query to validate.
*/
public QueryBuilder<?> query() {
public QueryBuilder query() {
return query;
}
public ValidateQueryRequest query(QueryBuilder<?> query) {
public ValidateQueryRequest query(QueryBuilder query) {
this.query = query;
return this;
}

View File

@ -43,6 +43,7 @@ import org.elasticsearch.index.VersionType;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import static org.elasticsearch.action.ValidateActions.addValidationError;
@ -125,6 +126,7 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite
}
BulkRequest internalAdd(IndexRequest request, @Nullable Object payload) {
Objects.requireNonNull(request, "'request' must not be null");
requests.add(request);
addPayload(payload);
// lack of source is validated in validate() method
@ -144,6 +146,7 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite
}
BulkRequest internalAdd(UpdateRequest request, @Nullable Object payload) {
Objects.requireNonNull(request, "'request' must not be null");
requests.add(request);
addPayload(payload);
if (request.doc() != null) {
@ -166,6 +169,7 @@ public class BulkRequest extends ActionRequest<BulkRequest> implements Composite
}
public BulkRequest add(DeleteRequest request, @Nullable Object payload) {
Objects.requireNonNull(request, "'request' must not be null");
requests.add(request);
addPayload(payload);
sizeInBytes += REQUEST_OVERHEAD;
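The requireNonNull guards make a null sub-request fail at add time instead of surfacing later during serialization. A hedged sketch (the single-argument add overloads are assumed from the elided parts of the class):

BulkRequest bulk = new BulkRequest();
bulk.add(new IndexRequest("idx", "doc", "1").source("{\"field\":1}")); // fine
bulk.add((DeleteRequest) null); // now throws NullPointerException: 'request' must not be null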

View File

@ -39,7 +39,7 @@ public class ExplainRequest extends SingleShardRequest<ExplainRequest> {
private String id;
private String routing;
private String preference;
private QueryBuilder<?> query;
private QueryBuilder query;
private String[] fields;
private FetchSourceContext fetchSourceContext;
@ -100,11 +100,11 @@ public class ExplainRequest extends SingleShardRequest<ExplainRequest> {
return this;
}
public QueryBuilder<?> query() {
public QueryBuilder query() {
return query;
}
public ExplainRequest query(QueryBuilder<?> query) {
public ExplainRequest query(QueryBuilder query) {
this.query = query;
return this;
}

View File

@ -26,7 +26,6 @@ import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.RoutingMissingException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.single.shard.TransportSingleShardAction;
import org.elasticsearch.cache.recycler.PageCacheRecycler;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.routing.ShardIterator;
@ -65,7 +64,6 @@ public class TransportExplainAction extends TransportSingleShardAction<ExplainRe
private final ScriptService scriptService;
private final PageCacheRecycler pageCacheRecycler;
private final BigArrays bigArrays;
@ -74,13 +72,12 @@ public class TransportExplainAction extends TransportSingleShardAction<ExplainRe
@Inject
public TransportExplainAction(Settings settings, ThreadPool threadPool, ClusterService clusterService,
TransportService transportService, IndicesService indicesService, ScriptService scriptService,
PageCacheRecycler pageCacheRecycler, BigArrays bigArrays, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver, FetchPhase fetchPhase) {
BigArrays bigArrays, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
FetchPhase fetchPhase) {
super(settings, ExplainAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,
ExplainRequest::new, ThreadPool.Names.GET);
this.indicesService = indicesService;
this.scriptService = scriptService;
this.pageCacheRecycler = pageCacheRecycler;
this.bigArrays = bigArrays;
this.fetchPhase = fetchPhase;
}
@ -117,7 +114,7 @@ public class TransportExplainAction extends TransportSingleShardAction<ExplainRe
SearchContext context = new DefaultSearchContext(0,
new ShardSearchLocalRequest(new String[] { request.type() }, request.nowInMillis, request.filteringAlias()), null,
result.searcher(), indexService, indexShard, scriptService, pageCacheRecycler, bigArrays,
result.searcher(), indexService, indexShard, scriptService, bigArrays,
threadPool.estimatedTimeInMillisCounter(), parseFieldMatcher, SearchService.NO_TIMEOUT, fetchPhase);
SearchContext.setCurrent(context);

View File

@ -117,11 +117,13 @@ public class TransportFieldStatsTransportAction extends
if (existing != null) {
if (existing.getType() != entry.getValue().getType()) {
if (conflicts.containsKey(entry.getKey()) == false) {
FieldStats[] fields = new FieldStats[] {entry.getValue(), existing};
Arrays.sort(fields, (o1, o2) -> Byte.compare(o1.getType(), o2.getType()));
conflicts.put(entry.getKey(),
"Field [" + entry.getKey() + "] of type [" +
FieldStats.typeName(entry.getValue().getType()) +
FieldStats.typeName(fields[0].getType()) +
"] conflicts with existing field of type [" +
FieldStats.typeName(existing.getType()) +
FieldStats.typeName(fields[1].getType()) +
"] in other index.");
}
} else {
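
The sort in this hunk makes the conflict message deterministic: the two FieldStats are ordered by their type byte before the message is built, so the wording no longer depends on which index happened to be processed first. A runnable sketch under stand-in types:

<pre>
import java.util.Arrays;

// Determinism sketch; the record stands in for the real FieldStats, and the
// byte values and type names are made up for illustration.
class ConflictMessageSketch {
    record FieldStats(byte type, String typeName) {}

    static String conflictMessage(String field, FieldStats a, FieldStats b) {
        FieldStats[] fields = new FieldStats[] { a, b };
        Arrays.sort(fields, (o1, o2) -> Byte.compare(o1.type(), o2.type()));
        return "Field [" + field + "] of type [" + fields[0].typeName()
            + "] conflicts with existing field of type [" + fields[1].typeName()
            + "] in other index.";
    }

    public static void main(String[] args) {
        FieldStats text = new FieldStats((byte) 0, "text");
        FieldStats date = new FieldStats((byte) 3, "date");
        // Same message regardless of argument order:
        System.out.println(conflictMessage("created", text, date));
        System.out.println(conflictMessage("created", date, text));
    }
}
</pre>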

View File

@ -21,6 +21,7 @@ package org.elasticsearch.action.get;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
@ -168,4 +169,9 @@ public class GetResponse extends ActionResponse implements Iterable<GetField>, T
super.writeTo(out);
getResult.writeTo(out);
}
@Override
public String toString() {
return Strings.toString(this, true);
}
}
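
GetResponse gains a toString() that delegates to Strings.toString(this, true), which, going by its use elsewhere in the codebase, renders the response through its XContent representation with pretty-printing. A rough stand-in showing why the override is handy when logging or debugging; the JSON rendering below is faked:

<pre>
// Fake rendering; the real class serializes its GetResult as pretty JSON.
class GetResponseSketch {
    private final String id;
    private final String sourceJson;

    GetResponseSketch(String id, String sourceJson) {
        this.id = id;
        this.sourceJson = sourceJson;
    }

    @Override
    public String toString() {   // was Object#toString: "GetResponseSketch@1b6d3586"
        return "{\n  \"_id\" : \"" + id + "\",\n  \"_source\" : " + sourceJson + "\n}";
    }

    public static void main(String[] args) {
        System.out.println(new GetResponseSketch("1", "{ \"name\" : \"Shay Banon\" }"));
    }
}
</pre>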

View File

@ -169,7 +169,7 @@ public class TransportIndexAction extends TransportReplicationAction<IndexReques
*/
public static Engine.Index executeIndexRequestOnReplica(IndexRequest request, IndexShard indexShard) {
final ShardId shardId = indexShard.shardId();
SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.REPLICA, request.source()).index(shardId.getIndexName()).type(request.type()).id(request.id())
SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.REPLICA, shardId.getIndexName(), request.type(), request.id(), request.source())
.routing(request.routing()).parent(request.parent()).timestamp(request.timestamp()).ttl(request.ttl());
final Engine.Index operation = indexShard.prepareIndexOnReplica(sourceToParse, request.version(), request.versionType());
@ -183,7 +183,7 @@ public class TransportIndexAction extends TransportReplicationAction<IndexReques
/** Utility method to prepare an index operation on primary shards */
public static Engine.Index prepareIndexOperationOnPrimary(IndexRequest request, IndexShard indexShard) {
SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.PRIMARY, request.source()).index(request.index()).type(request.type()).id(request.id())
SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.PRIMARY, request.index(), request.type(), request.id(), request.source())
.routing(request.routing()).parent(request.parent()).timestamp(request.timestamp()).ttl(request.ttl());
return indexShard.prepareIndexOnPrimary(sourceToParse, request.version(), request.versionType());
}
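
Both hunks switch SourceToParse.source(...) from a chained-setter style (passing only the source, then .index().type().id()) to a factory that takes index, type, and id up front, so a SourceToParse can no longer exist without its coordinates. A simplified sketch of that factory shape; the types are stand-ins:

<pre>
// Sketch of the new factory signature; optional fields would still chain.
class SourceToParseSketch {
    enum Origin { PRIMARY, REPLICA }

    final Origin origin;
    final String index, type, id, source;

    private SourceToParseSketch(Origin origin, String index, String type, String id, String source) {
        this.origin = origin;
        this.index = index;
        this.type = type;
        this.id = id;
        this.source = source;
    }

    // After the change: the required coordinates are factory arguments,
    // not setters that a caller might forget.
    static SourceToParseSketch source(Origin origin, String index, String type, String id, String source) {
        return new SourceToParseSketch(origin, index, type, id, source);
    }

    public static void main(String[] args) {
        SourceToParseSketch s = source(Origin.PRIMARY, "twitter", "tweet", "1", "{}");
        System.out.println(s.index + "/" + s.type + "/" + s.id);
    }
}
</pre>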

View File

@ -80,7 +80,7 @@ public class PutPipelineTransportAction extends TransportMasterNodeAction<PutPip
public void onResponse(NodesInfoResponse nodeInfos) {
try {
Map<DiscoveryNode, IngestInfo> ingestInfos = new HashMap<>();
for (NodeInfo nodeInfo : nodeInfos) {
for (NodeInfo nodeInfo : nodeInfos.getNodes()) {
ingestInfos.put(nodeInfo.getNode(), nodeInfo.getIngest());
}
pipelineStore.put(clusterService, ingestInfos, request, listener);
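
The loop now iterates nodeInfos.getNodes() instead of the response object itself, which suggests the nodes-level response stopped being directly Iterable. A small sketch of the call-site change under stand-in types:

<pre>
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-ins for NodesInfoResponse/NodeInfo; illustration only.
class NodesResponseSketch {
    record NodeInfoStub(String nodeName) {}

    private final List<NodeInfoStub> nodes =
        List.of(new NodeInfoStub("node-1"), new NodeInfoStub("node-2"));

    List<NodeInfoStub> getNodes() {
        return nodes;
    }

    public static void main(String[] args) {
        NodesResponseSketch nodeInfos = new NodesResponseSketch();
        Map<String, String> ingestInfos = new HashMap<>();
        for (NodeInfoStub nodeInfo : nodeInfos.getNodes()) {  // was: for (... : nodeInfos)
            ingestInfos.put(nodeInfo.nodeName(), "ingest-info");
        }
        System.out.println(ingestInfos);
    }
}
</pre>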

View File

@ -132,6 +132,9 @@ public class SimulatePipelineRequest extends ActionRequest<SimulatePipelineReque
throw new IllegalArgumentException("param [pipeline] is null");
}
Pipeline pipeline = pipelineStore.get(pipelineId);
if (pipeline == null) {
throw new IllegalArgumentException("pipeline [" + pipelineId + "] does not exist");
}
List<IngestDocument> ingestDocumentList = parseDocs(config);
return new Parsed(pipeline, ingestDocumentList, verbose);
}
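
The added guard turns a simulate request against a missing pipeline id into a descriptive IllegalArgumentException rather than a NullPointerException somewhere downstream. A runnable sketch of the lookup-then-guard pattern, with a plain Map standing in for the pipeline store:

<pre>
import java.util.Map;

// Guard sketch; the Map stands in for PipelineStore#get.
class PipelineLookupSketch {
    private static final Map<String, String> STORE = Map.of("my-pipeline", "pipeline-definition");

    static String get(String pipelineId) {
        String pipeline = STORE.get(pipelineId);
        if (pipeline == null) {
            throw new IllegalArgumentException("pipeline [" + pipelineId + "] does not exist");
        }
        return pipeline;
    }

    public static void main(String[] args) {
        System.out.println(get("my-pipeline"));     // found
        try {
            get("missing");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());     // pipeline [missing] does not exist
        }
    }
}
</pre>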

View File

@ -19,8 +19,6 @@
package org.elasticsearch.action.search;
import java.util.Map;
/**
*
*/
@ -36,13 +34,10 @@ class ParsedScrollId {
private final ScrollIdForNode[] context;
private final Map<String, String> attributes;
public ParsedScrollId(String source, String type, ScrollIdForNode[] context, Map<String, String> attributes) {
public ParsedScrollId(String source, String type, ScrollIdForNode[] context) {
this.source = source;
this.type = type;
this.context = context;
this.attributes = attributes;
}
public String getSource() {
@ -56,8 +51,4 @@ class ParsedScrollId {
public ScrollIdForNode[] getContext() {
return context;
}
public Map<String, String> getAttributes() {
return this.attributes;
}
}
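
ParsedScrollId drops its attributes map along with getAttributes(), and the four async-action hunks that follow are the matching call-site halves: buildScrollId loses the trailing argument that every call site shown here filled with null. A compact sketch of the slimmed class, with ScrollIdForNode as a stand-in:

<pre>
class ScrollIdForNode {}   // stand-in

class ParsedScrollIdSketch {
    private final String source;
    private final String type;
    private final ScrollIdForNode[] context;

    // Before: a fourth Map<String, String> attributes parameter; it is gone,
    // along with its getter and the nulls callers passed for it.
    ParsedScrollIdSketch(String source, String type, ScrollIdForNode[] context) {
        this.source = source;
        this.type = type;
        this.context = context;
    }

    String getSource()             { return source; }
    String getType()               { return type; }
    ScrollIdForNode[] getContext() { return context; }
}
</pre>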

View File

@ -123,7 +123,7 @@ class SearchDfsQueryAndFetchAsyncAction extends AbstractSearchAsyncAction<DfsSea
queryFetchResults);
String scrollId = null;
if (request.scroll() != null) {
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults);
}
listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(),
buildTookInMillis(), buildShardFailures()));

View File

@ -200,7 +200,7 @@ class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<DfsSe
fetchResults);
String scrollId = null;
if (request.scroll() != null) {
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults);
}
listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(),
buildTookInMillis(), buildShardFailures()));

View File

@ -66,7 +66,7 @@ class SearchQueryAndFetchAsyncAction extends AbstractSearchAsyncAction<QueryFetc
firstResults);
String scrollId = null;
if (request.scroll() != null) {
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults);
}
listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(),
buildTookInMillis(), buildShardFailures()));

View File

@ -133,7 +133,7 @@ class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction<QuerySea
fetchResults);
String scrollId = null;
if (request.scroll() != null) {
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);
scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults);
}
listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps,
successfulOps.get(), buildTookInMillis(), buildShardFailures()));

Some files were not shown because too many files have changed in this diff.