Merge branch 'master' into rjernst-placeholder

* master: (911 commits)
  [TEST] wait for yellow after setup doc tests (#18726)
  Fix recovery throttling to properly handle relocating non-primary shards (#18701)
  Fix merge stats rendering in RestIndicesAction (#18720)
  [TEST] mute RandomAllocationDeciderTests.testRandomDecisions
  Reworked docs for index-shrink API (#18705)
  Improve painless compile-time exceptions
  Adds UUIDs to snapshots
  Add rethrottle test case for delete-by-query
  Do not start scheduled pings until transport start
  Addressing review comments
  Only filter initial recovery (post API) when shrinking an index (#18661)
  Add tests to check that toQuery() doesn't return null
  Removing handling of null lucene query where we catch this at parse time
  Handle empty query bodies at parse time and remove EmptyQueryBuilder
  Mute failing assertions in IndexWithShadowReplicasIT until fix
  Remove allow running as root
  Add upgrade-not-supported warning to alpha release notes
  remove unrecognized javadoc tag from matrix aggregation module
  set ValuesSourceConfig fields as private
  Adding MultiValuesSource support classes and documentation to matrix stats agg module
  ...
Jason Tedor 2016-06-03 13:24:44 -04:00
commit bbd5f26d45
2806 changed files with 130204 additions and 74066 deletions

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,13 @@
<!--
Thank you for your interest in contributing to Elasticsearch! There
are a few simple things to check before submitting your pull request
that can help with the review process. Delete these items from your
submission once you have reviewed them; they are here to bring them to
your attention.
-->
- Have you signed the [contributor license agreement](https://www.elastic.co/contributor-agreement)?
- Have you followed the [contributor guidelines](https://github.com/elastic/elasticsearch/blob/master/.github/CONTRIBUTING.md)?
- If submitting code, have you built your changes locally prior to submission with `gradle check`?
- If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
- If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?


@@ -1,10 +0,0 @@
language: java
jdk:
- openjdk7
env:
- ES_TEST_LOCAL=true
- ES_TEST_LOCAL=false
notifications:
email: false


@@ -71,12 +71,47 @@ Once your changes and tests are ready to submit for review:
Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch.
Please adhere to the general guideline that you should never force push
to a publicly shared branch. Once you have opened your pull request, you
should consider your branch publicly shared. Instead of force pushing
you can just add incremental commits; this is generally easier on your
reviewers. If you need to pick up changes from master, you can merge
master into your branch. A reviewer might ask you to rebase a
long-running pull request, in which case force pushing is okay for that
request. Note that squashing at the end of the review process should
also not be done; that can be done when the pull request is [integrated
via GitHub](https://github.com/blog/2141-squash-your-commits).
Contributing to the Elasticsearch codebase
------------------------------------------
**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)
Make sure you have [Gradle](http://gradle.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE: `gradle eclipse` then `File: Import: Existing Projects into Workspace`. Select the option `Search for nested projects`. Additionally you will want to ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini` accordingly to avoid GC overhead errors.
Make sure you have [Gradle](http://gradle.org) installed, as
Elasticsearch uses it as its build system.
Eclipse users can automatically configure their IDE: `gradle eclipse`
then `File: Import: Existing Projects into Workspace`. Select the
option `Search for nested projects`. Additionally you will want to
ensure that Eclipse is using 2048m of heap by modifying `eclipse.ini`
accordingly to avoid GC overhead errors.
IntelliJ users can automatically configure their IDE: `gradle idea`
then `File->New Project From Existing Sources`. Point to the root of
the source directory, select
`Import project from external model->Gradle`, enable
`Use auto-import`.
The Elasticsearch codebase makes heavy use of Java `assert`s and the
test runner requires that assertions be enabled within the JVM. This
can be accomplished by passing the flag `-ea` to the JVM on startup.
For IntelliJ, go to
`Run->Edit Configurations...->Defaults->JUnit->VM options` and input
`-ea`.
For Eclipse, go to `Preferences->Java->Installed JREs` and add `-ea` to
`VM Arguments`.
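As an illustration (a hypothetical `build.gradle` fragment, not part of this change), a Gradle build can force assertions on for every forked test JVM:
```
tasks.withType(Test) {
    // '-ea' turns Java assertions on in the forked test JVM; Gradle also
    // exposes this as the Test.enableAssertions property
    jvmArgs '-ea'
}
```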
Please follow these formatting guidelines:
@@ -95,7 +130,7 @@ cd elasticsearch/
gradle assemble
```
You will find the newly built packages under: `./distribution/build/distributions/`.
You will find the newly built packages under: `./distribution/(deb|rpm|tar|zip)/build/distributions/`.
Before submitting your changes, run the test suite to make sure that nothing is broken, with:


@@ -1,5 +1,5 @@
Elasticsearch
Copyright 2009-2015 Elasticsearch
Copyright 2009-2016 Elasticsearch
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).


@@ -9,7 +9,7 @@ Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:
* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations performed on either one of the replica shard.
** Read / Search operations performed on any of the replica shards.
* Multi Tenant with Multi Types.
** Support for more than one index.
** Support for more than one type per index.
@@ -50,19 +50,19 @@ h3. Indexing
Let's try to index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
@@ -101,7 +101,7 @@ Just for kicks, let's get all the documents stored (we should see the user as well):
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@@ -113,7 +113,7 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"range" : {
"postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
"post_date" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
}
}
}'
@@ -130,19 +130,19 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:
<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"post_date": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"post_date": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
@@ -152,11 +152,11 @@ The above will index information into the @kimchy@ index, with two types, @info@
Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
curl -XPUT http://localhost:9200/another_user?pretty -d '
{
"index" : {
"numberOfShards" : 1,
"numberOfReplicas" : 1
"number_of_shards" : 1,
"number_of_replicas" : 1
}
}'
</pre>
@@ -168,7 +168,7 @@ index (twitter user), for example:
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@@ -179,7 +179,7 @@ Or on all the indices:
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
"match_all" : {}
}
}'
</pre>
@@ -222,7 +222,7 @@ h1. License
<pre>
This software is licensed under the Apache License, version 2 ("ALv2"), quoted below.
Copyright 2009-2015 Elasticsearch <https://www.elastic.co>
Copyright 2009-2016 Elasticsearch <https://www.elastic.co>
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of


@@ -18,24 +18,18 @@ gradle assemble
== Other test options
To disable and enable network transport, set the `Des.node.mode`.
To disable and enable network transport, set the `tests.es.node.mode` system property.
Use network transport:
------------------------------------
-Des.node.mode=network
-Dtests.es.node.mode=network
------------------------------------
Use local transport (default since 1.3):
-------------------------------------
-Des.node.mode=local
-------------------------------------
Alternatively, you can set the `ES_TEST_LOCAL` environment variable:
-------------------------------------
export ES_TEST_LOCAL=true && gradle test
-Dtests.es.node.mode=local
-------------------------------------
=== Running Elasticsearch from a checkout
@@ -201,7 +195,7 @@ gradle test -Dtests.timeoutSuite=5000! ...
Change the logging level of ES (not gradle)
--------------------------------
gradle test -Des.logger.level=DEBUG
gradle test -Dtests.es.logger.level=DEBUG
--------------------------------
Print all the logging output from the test runs to the commandline


@@ -18,12 +18,40 @@
*/
import com.bmuschko.gradle.nexus.NexusPlugin
import org.eclipse.jgit.lib.Repository
import org.eclipse.jgit.lib.RepositoryBuilder
import org.gradle.plugins.ide.eclipse.model.SourceFolder
import org.apache.tools.ant.taskdefs.condition.Os
// common maven publishing configuration
subprojects {
group = 'org.elasticsearch'
version = org.elasticsearch.gradle.VersionProperties.elasticsearch
description = "Elasticsearch subproject ${project.path}"
// we only use maven publish to add tasks for pom generation
plugins.withType(MavenPublishPlugin).whenPluginAdded {
publishing {
publications {
// add license information to generated poms
all {
pom.withXml { XmlProvider xml ->
Node node = xml.asNode()
node.appendNode('inceptionYear', '2009')
Node license = node.appendNode('licenses').appendNode('license')
license.appendNode('name', 'The Apache Software License, Version 2.0')
license.appendNode('url', 'http://www.apache.org/licenses/LICENSE-2.0.txt')
license.appendNode('distribution', 'repo')
Node developer = node.appendNode('developers').appendNode('developer')
developer.appendNode('name', 'Elastic')
developer.appendNode('url', 'http://www.elastic.co')
}
}
}
}
}
plugins.withType(NexusPlugin).whenPluginAdded {
modifyPom {
@@ -50,21 +78,37 @@ subprojects {
javadoc = true
tests = false
}
nexus {
String buildSnapshot = System.getProperty('build.snapshot', 'true')
if (buildSnapshot == 'false') {
Repository repo = new RepositoryBuilder().findGitDir(project.rootDir).build()
String shortHash = repo.resolve('HEAD')?.name?.substring(0,7)
repositoryUrl = project.hasProperty('build.repository') ? project.property('build.repository') : "file://${System.getenv('HOME')}/elasticsearch-releases/${version}-${shortHash}/"
}
}
// we have our own username/password prompts so that they only happen once
// TODO: add gpg signing prompts
// TODO: add gpg signing prompts, which is tricky, as the buildDeb/buildRpm tasks are executed before this code block
project.gradle.taskGraph.whenReady { taskGraph ->
if (taskGraph.allTasks.any { it.name == 'uploadArchives' }) {
Console console = System.console()
if (project.hasProperty('nexusUsername') == false) {
String nexusUsername = console.readLine('\nNexus username: ')
// no need for username/password on local deploy
if (project.nexus.repositoryUrl.startsWith('file://')) {
project.rootProject.allprojects.each {
it.ext.nexusUsername = nexusUsername
it.ext.nexusUsername = 'foo'
it.ext.nexusPassword = 'bar'
}
}
if (project.hasProperty('nexusPassword') == false) {
String nexusPassword = new String(console.readPassword('\nNexus password: '))
project.rootProject.allprojects.each {
it.ext.nexusPassword = nexusPassword
} else {
if (project.hasProperty('nexusUsername') == false) {
String nexusUsername = console.readLine('\nNexus username: ')
project.rootProject.allprojects.each {
it.ext.nexusUsername = nexusUsername
}
}
if (project.hasProperty('nexusPassword') == false) {
String nexusPassword = new String(console.readPassword('\nNexus password: '))
project.rootProject.allprojects.each {
it.ext.nexusPassword = nexusPassword
}
}
}
}
@@ -100,6 +144,14 @@ subprojects {
// see https://discuss.gradle.org/t/add-custom-javadoc-option-that-does-not-take-an-argument/5959
javadoc.options.encoding='UTF8'
javadoc.options.addStringOption('Xdoclint:all,-missing', '-quiet')
/*
TODO: building javadocs with java 9 b118 is currently broken with weird errors, so
for now this is commented out...try again with the next ea build...
javadoc.executable = new File(project.javaHome, 'bin/javadoc')
if (project.javaVersion == JavaVersion.VERSION_1_9) {
// TODO: remove this hack! gradle should be passing this...
javadoc.options.addStringOption('source', '8')
}*/
}
}
@@ -181,6 +233,10 @@ allprojects {
outputDir = file('build-idea/classes/main')
testOutputDir = file('build-idea/classes/test')
// also ignore other possible build dirs
excludeDirs += file('build')
excludeDirs += file('build-eclipse')
iml {
// fix so that Gradle idea plugin properly generates support for resource folders
// see also https://issues.gradle.org/browse/GRADLE-2975
@@ -201,7 +257,6 @@ allprojects {
idea {
project {
languageLevel = org.elasticsearch.gradle.BuildPlugin.minimumJava.toString()
vcs = 'Git'
}
}
@@ -213,13 +268,6 @@ tasks.idea.doLast {
if (System.getProperty('idea.active') != null && ideaMarker.exists() == false) {
throw new GradleException('You must run gradle idea from the root of elasticsearch before importing into IntelliJ')
}
// add buildSrc itself as a groovy project
task buildSrcIdea(type: GradleBuild) {
buildFile = 'buildSrc/build.gradle'
tasks = ['cleanIdea', 'ideaModule']
}
tasks.idea.dependsOn(buildSrcIdea)
// eclipse configuration
allprojects {
@@ -227,6 +275,9 @@ allprojects {
// Name all the non-root projects after their path so that paths get grouped together when imported into eclipse.
if (path != ':') {
eclipse.project.name = path
if (Os.isFamily(Os.FAMILY_WINDOWS)) {
eclipse.project.name = eclipse.project.name.replace(':', '_')
}
}
plugins.withType(JavaBasePlugin) {
@@ -252,20 +303,14 @@ allprojects {
into '.settings'
}
// otherwise .settings is not nuked entirely
tasks.cleanEclipse {
task wipeEclipseSettings(type: Delete) {
delete '.settings'
}
tasks.cleanEclipse.dependsOn(wipeEclipseSettings)
// otherwise the eclipse merging is *super confusing*
tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)
}
// add buildSrc itself as a groovy project
task buildSrcEclipse(type: GradleBuild) {
buildFile = 'buildSrc/build.gradle'
tasks = ['cleanEclipse', 'eclipse']
}
tasks.eclipse.dependsOn(buildSrcEclipse)
// we need to add the same --debug-jvm option as
// the real RunTask has, so we can pass it through
class Run extends DefaultTask {

buildSrc/.gitignore (new file)

@@ -0,0 +1 @@
build-bootstrap/


@@ -1,5 +1,3 @@
import java.nio.file.Files
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
@@ -19,25 +17,21 @@ import java.nio.file.Files
* under the License.
*/
// we must use buildscript + apply so that an external plugin
// can apply this file, since the plugins directive is not
// supported through file includes
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.bmuschko:gradle-nexus-plugin:2.3.1'
}
}
import java.nio.file.Files
apply plugin: 'groovy'
apply plugin: 'com.bmuschko.nexus'
// TODO: move common IDE configuration to a common file to include
apply plugin: 'idea'
apply plugin: 'eclipse'
group = 'org.elasticsearch.gradle'
archivesBaseName = 'build-tools'
if (project == rootProject) {
// change the build dir used during build init, so that doing a clean
// won't wipe out the buildscript jar
buildDir = 'build-bootstrap'
}
/*****************************************************************************
* Propagating version.properties to the rest of the build *
*****************************************************************************/
Properties props = new Properties()
props.load(project.file('version.properties').newDataInputStream())
@@ -51,35 +45,10 @@ if (snapshot) {
props.put("elasticsearch", version);
}
repositories {
mavenCentral()
maven {
name 'sonatype-snapshots'
url "https://oss.sonatype.org/content/repositories/snapshots/"
}
jcenter()
}
dependencies {
compile gradleApi()
compile localGroovy()
compile "com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}"
compile("junit:junit:${props.getProperty('junit')}") {
transitive = false
}
compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.0'
compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
compile 'org.apache.rat:apache-rat:0.11'
}
File tempPropertiesFile = new File(project.buildDir, "version.properties")
task writeVersionProperties {
inputs.properties(props)
outputs.file(tempPropertiesFile)
doLast {
OutputStream stream = Files.newOutputStream(tempPropertiesFile.toPath());
try {
@@ -95,31 +64,77 @@ processResources {
from tempPropertiesFile
}
extraArchive {
javadoc = false
tests = false
/*****************************************************************************
* Dependencies used by the entire build *
*****************************************************************************/
repositories {
jcenter()
}
idea {
module {
inheritOutputDirs = false
outputDir = file('build-idea/classes/main')
testOutputDir = file('build-idea/classes/test')
dependencies {
compile gradleApi()
compile localGroovy()
compile "com.carrotsearch.randomizedtesting:junit4-ant:${props.getProperty('randomizedrunner')}"
compile("junit:junit:${props.getProperty('junit')}") {
transitive = false
}
compile 'com.netflix.nebula:gradle-extra-configurations-plugin:3.0.3'
compile 'com.netflix.nebula:nebula-publishing-plugin:4.4.4'
compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
compile 'de.thetaphi:forbiddenapis:2.1'
compile 'com.bmuschko:gradle-nexus-plugin:2.3.1'
compile 'org.apache.rat:apache-rat:0.11'
}
/*****************************************************************************
* Bootstrap repositories *
*****************************************************************************/
// this will only happen when buildSrc is built on its own during build init
if (project == rootProject) {
repositories {
mavenCentral()
maven {
name 'sonatype-snapshots'
url "https://oss.sonatype.org/content/repositories/snapshots/"
}
}
}
eclipse {
classpath {
defaultOutputDir = file('build-eclipse')
/*****************************************************************************
* Normal project checks *
*****************************************************************************/
// this happens when included as a normal project in the build, which we do
// to enforce precommit checks like forbidden apis, as well as setup publishing
if (project != rootProject) {
apply plugin: 'elasticsearch.build'
apply plugin: 'nebula.maven-base-publish'
apply plugin: 'nebula.maven-scm'
// groovydoc succeeds, but has some weird internal exception...
groovydoc.enabled = false
// build-tools is not ready for primetime with these...
dependencyLicenses.enabled = false
forbiddenApisMain.enabled = false
jarHell.enabled = false
loggerUsageCheck.enabled = false
thirdPartyAudit.enabled = false
// test for elasticsearch.build tries to run with ES...
test.enabled = false
// TODO: re-enable once randomizedtesting gradle code is published and removed from here
licenseHeaders.enabled = false
forbiddenPatterns {
exclude '**/*.wav'
// the file that actually defines nocommit
exclude '**/ForbiddenPatternsTask.groovy'
}
}
task copyEclipseSettings(type: Copy) {
from project.file('src/main/resources/eclipse.settings')
into '.settings'
}
// otherwise .settings is not nuked entirely
tasks.cleanEclipse {
delete '.settings'
}
tasks.eclipse.dependsOn(cleanEclipse, copyEclipseSettings)


@@ -19,6 +19,7 @@
package org.elasticsearch.gradle
import nebula.plugin.extraconfigurations.ProvidedBasePlugin
import nebula.plugin.publishing.maven.MavenBasePublishPlugin
import org.elasticsearch.gradle.precommit.PrecommitTasks
import org.gradle.api.GradleException
import org.gradle.api.JavaVersion
@@ -33,6 +34,8 @@ import org.gradle.api.artifacts.ProjectDependency
import org.gradle.api.artifacts.ResolvedArtifact
import org.gradle.api.artifacts.dsl.RepositoryHandler
import org.gradle.api.artifacts.maven.MavenPom
import org.gradle.api.publish.maven.MavenPublication
import org.gradle.api.publish.maven.tasks.GenerateMavenPom
import org.gradle.api.tasks.bundling.Jar
import org.gradle.api.tasks.compile.JavaCompile
import org.gradle.internal.jvm.Jvm
@@ -54,7 +57,7 @@ class BuildPlugin implements Plugin<Project> {
project.pluginManager.apply('java')
project.pluginManager.apply('carrotsearch.randomized-testing')
// these plugins add lots of info to our jars
configureJarManifest(project) // jar config must be added before info broker
configureJars(project) // jar config must be added before info broker
project.pluginManager.apply('nebula.info-broker')
project.pluginManager.apply('nebula.info-basic')
project.pluginManager.apply('nebula.info-java')
@@ -68,6 +71,7 @@ class BuildPlugin implements Plugin<Project> {
configureConfigurations(project)
project.ext.versions = VersionProperties.versions
configureCompile(project)
configurePomGeneration(project)
configureTest(project)
configurePrecommit(project)
@@ -109,7 +113,7 @@ }
}
// enforce gradle version
GradleVersion minGradle = GradleVersion.version('2.8')
GradleVersion minGradle = GradleVersion.version('2.13')
if (GradleVersion.current() < minGradle) {
throw new GradleException("${minGradle} or above is required to build elasticsearch")
}
@@ -139,7 +143,7 @@ }
}
project.rootProject.ext.javaHome = javaHome
project.rootProject.ext.javaVersion = javaVersion
project.rootProject.ext.javaVersion = javaVersionEnum
project.rootProject.ext.buildChecksDone = true
}
project.targetCompatibility = minimumJava
@@ -228,7 +232,7 @@ */
*/
static void configureConfigurations(Project project) {
// we are not shipping these jars, we act like dumb consumers of these things
if (project.path.startsWith(':test:fixtures')) {
if (project.path.startsWith(':test:fixtures') || project.path == ':build-tools') {
return
}
// fail on any conflicting dependency versions
@@ -266,44 +270,7 @@
// add exclusions to the pom directly, for each of the transitive deps of this project's deps
project.modifyPom { MavenPom pom ->
pom.withXml { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
if (depsNodes.isEmpty()) {
return
}
// check each dependency for any transitive deps
for (Node depNode : depsNodes.get(0).children()) {
String groupId = depNode.get('groupId').get(0).text()
String artifactId = depNode.get('artifactId').get(0).text()
String version = depNode.get('version').get(0).text()
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
if (configuration == null) {
continue // we did not make this dep non-transitive
}
Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts
if (artifacts.size() <= 1) {
// this dep has no transitive deps (or the only artifact is itself)
continue
}
// we now know we have something to exclude, so add the exclusion elements
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact transitiveArtifact : artifacts) {
ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id
if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {
continue; // don't exclude the dependency itself!
}
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', transitiveDep.group)
exclusion.appendNode('artifactId', transitiveDep.name)
}
}
}
pom.withXml(removeTransitiveDependencies(project))
}
}
@@ -332,6 +299,70 @@ }
}
}
/** Returns a closure which can be used with a MavenPom for removing transitive dependencies. */
private static Closure removeTransitiveDependencies(Project project) {
// TODO: remove this when enforcing gradle 2.13+, it now properly handles exclusions
return { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
if (depsNodes.isEmpty()) {
return
}
// check each dependency for any transitive deps
for (Node depNode : depsNodes.get(0).children()) {
String groupId = depNode.get('groupId').get(0).text()
String artifactId = depNode.get('artifactId').get(0).text()
String version = depNode.get('version').get(0).text()
// collect the transitive deps now that we know what this dependency is
String depConfig = transitiveDepConfigName(groupId, artifactId, version)
Configuration configuration = project.configurations.findByName(depConfig)
if (configuration == null) {
continue // we did not make this dep non-transitive
}
Set<ResolvedArtifact> artifacts = configuration.resolvedConfiguration.resolvedArtifacts
if (artifacts.size() <= 1) {
// this dep has no transitive deps (or the only artifact is itself)
continue
}
// we now know we have something to exclude, so add the exclusion elements
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact transitiveArtifact : artifacts) {
ModuleVersionIdentifier transitiveDep = transitiveArtifact.moduleVersion.id
if (transitiveDep.group == groupId && transitiveDep.name == artifactId) {
continue; // don't exclude the dependency itself!
}
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', transitiveDep.group)
exclusion.appendNode('artifactId', transitiveDep.name)
}
}
}
}
/** Configures generation of maven poms. */
public static void configurePomGeneration(Project project) {
project.plugins.withType(MavenBasePublishPlugin.class).whenPluginAdded {
project.publishing {
publications {
all { MavenPublication publication -> // we only deal with maven
// add exclusions to the pom directly, for each of the transitive deps of this project's deps
publication.pom.withXml(removeTransitiveDependencies(project))
}
}
}
project.tasks.withType(GenerateMavenPom.class) { GenerateMavenPom t ->
// place the pom next to the jar it is for
t.destination = new File(project.buildDir, "distributions/${project.archivesBaseName}-${project.version}.pom")
// build poms with assemble
project.assemble.dependsOn(t)
}
}
}
/** Adds compiler settings to the project */
static void configureCompile(Project project) {
project.ext.compactProfile = 'compact3'
@@ -341,27 +372,40 @@
options.fork = true
options.forkOptions.executable = new File(project.javaHome, 'bin/javac')
options.forkOptions.memoryMaximumSize = "1g"
if (project.targetCompatibility >= JavaVersion.VERSION_1_8) {
// compile with compact 3 profile by default
// NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
if (project.compactProfile != 'full') {
options.compilerArgs << '-profile' << project.compactProfile
}
}
/*
* -path because gradle will send in paths that don't always exist.
* -missing because we have tons of missing @returns and @param.
* -serial because we don't use java serialization.
*/
// don't even think about passing args with -J-xxx, oracle will ask you to submit a bug report :)
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial' << '-Xdoclint:all' << '-Xdoclint:-missing'
// compile with compact 3 profile by default
// NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
if (project.compactProfile != 'full') {
options.compilerArgs << '-profile' << project.compactProfile
}
options.compilerArgs << '-Werror' << '-Xlint:all,-path,-serial,-options,-deprecation' << '-Xdoclint:all' << '-Xdoclint:-missing'
options.encoding = 'UTF-8'
//options.incremental = true
if (project.javaVersion == JavaVersion.VERSION_1_9) {
// hack until gradle supports java 9's new "-release" arg
assert minimumJava == JavaVersion.VERSION_1_8
options.compilerArgs << '-release' << '8'
project.sourceCompatibility = null
project.targetCompatibility = null
}
}
}
}
/** Adds additional manifest info to jars */
static void configureJarManifest(Project project) {
/** Adds additional manifest info to jars, and adds source and javadoc jars */
static void configureJars(Project project) {
project.tasks.withType(Jar) { Jar jarTask ->
// we put all our distributable files under distributions
jarTask.destinationDir = new File(project.buildDir, 'distributions')
// fixup the jar manifest
jarTask.doFirst {
boolean isSnapshot = VersionProperties.elasticsearch.endsWith("-SNAPSHOT");
String version = VersionProperties.elasticsearch;
@@ -417,7 +461,7 @@
// default test sysprop values
systemProperty 'tests.ifNoTests', 'fail'
// TODO: remove setting logging level via system property
systemProperty 'es.logger.level', 'WARN'
systemProperty 'tests.logger.level', 'WARN'
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.getKey().startsWith('tests.') ||
property.getKey().startsWith('es.')) {


@@ -26,14 +26,17 @@ import org.gradle.api.tasks.Exec
* A wrapper around gradle's Exec task to capture output and log on error.
*/
class LoggedExec extends Exec {
protected ByteArrayOutputStream output = new ByteArrayOutputStream()
LoggedExec() {
if (logger.isInfoEnabled() == false) {
standardOutput = new ByteArrayOutputStream()
errorOutput = standardOutput
standardOutput = output
errorOutput = output
ignoreExitValue = true
doLast {
if (execResult.exitValue != 0) {
standardOutput.toString('UTF-8').eachLine { line -> logger.error(line) }
output.toString('UTF-8').eachLine { line -> logger.error(line) }
throw new GradleException("Process '${executable} ${args.join(' ')}' finished with non-zero exit value ${execResult.exitValue}")
}
}
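A minimal usage sketch (hypothetical task name, not part of this diff): because both streams are buffered into `output`, a successful run prints nothing at the default log level, while a failing run replays every captured line through `logger.error` before throwing.

// hypothetical build.gradle fragment using the wrapper above
task rsyncVersion(type: org.elasticsearch.gradle.LoggedExec) {
    executable = 'rsync'
    args '--version'
}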


@@ -0,0 +1,65 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.elasticsearch.gradle.test.RestTestPlugin
import org.gradle.api.Project
import org.gradle.api.Task
/**
* Sets up tests for documentation.
*/
public class DocsTestPlugin extends RestTestPlugin {
@Override
public void apply(Project project) {
super.apply(project)
Task listSnippets = project.tasks.create('listSnippets', SnippetsTask)
listSnippets.group 'Docs'
listSnippets.description 'List each snippet'
listSnippets.perSnippet { println(it.toString()) }
Task listConsoleCandidates = project.tasks.create(
'listConsoleCandidates', SnippetsTask)
listConsoleCandidates.group 'Docs'
listConsoleCandidates.description =
    'List snippets that probably should be marked // CONSOLE'
listConsoleCandidates.perSnippet {
if (
it.console // Already marked, nothing to do
|| it.testResponse // It is a response
) {
return
}
List<String> languages = [
// These languages should almost always be marked console
'js', 'json',
// These are often curl commands that should be converted but
// are probably false positives
'sh', 'shell',
]
if (false == languages.contains(it.language)) {
return
}
println(it.toString())
}
project.tasks.create('buildRestTests', RestTestsFromSnippetsTask)
}
}
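For context, a hedged sketch of how a docs build script might consume this plugin (hypothetical file and directory names; the task names come from apply() above):

// hypothetical docs/build.gradle
apply plugin: org.elasticsearch.gradle.doc.DocsTestPlugin

buildRestTests {
    // RestTestsFromSnippetsTask registers its output directory lazily in
    // afterEvaluate, so testRoot can still be customized here
    testRoot = project.file('build/generated-rest-tests')
}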


@@ -0,0 +1,226 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.elasticsearch.gradle.doc.SnippetsTask.Snippet
import org.gradle.api.InvalidUserDataException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.OutputDirectory
import java.nio.file.Files
import java.nio.file.Path
import java.util.regex.Matcher
/**
* Generates REST tests for each snippet marked // TEST.
*/
public class RestTestsFromSnippetsTask extends SnippetsTask {
@Input
Map<String, String> setups = new HashMap()
/**
* Root directory of the tests being generated. To make rest tests happy
* we generate them in a testRoot() which is contained in this directory.
*/
@OutputDirectory
File testRoot = project.file('build/rest')
public RestTestsFromSnippetsTask() {
project.afterEvaluate {
// Wait to set this so testRoot can be customized
project.sourceSets.test.output.dir(testRoot, builtBy: this)
}
TestBuilder builder = new TestBuilder()
doFirst { outputRoot().delete() }
perSnippet builder.&handleSnippet
doLast builder.&finishLastTest
}
/**
* Root directory containing all the files generated by this task. It is
* contained within testRoot.
*/
File outputRoot() {
return new File(testRoot, '/rest-api-spec/test')
}
private class TestBuilder {
private static final String SYNTAX = {
String method = /(?<method>GET|PUT|POST|HEAD|OPTIONS|DELETE)/
String pathAndQuery = /(?<pathAndQuery>[^\n]+)/
String badBody = /GET|PUT|POST|HEAD|OPTIONS|DELETE|#/
String body = /(?<body>(?:\n(?!$badBody)[^\n]+)+)/
String nonComment = /$method\s+$pathAndQuery$body?/
String comment = /(?<comment>#.+)/
/(?:$comment|$nonComment)\n+/
}()
/**
* The file in which we saw the last snippet that made a test.
*/
Path lastDocsPath
/**
* The file we're building.
*/
PrintWriter current
/**
* Called each time a snippet is encountered. Tracks the snippets and
* calls buildTest to actually build the test.
*/
void handleSnippet(Snippet snippet) {
if (snippet.language == 'json') {
throw new InvalidUserDataException(
"$snippet: Use `js` instead of `json`.")
}
if (snippet.testSetup) {
setup(snippet)
return
}
if (snippet.testResponse) {
response(snippet)
return
}
if (snippet.test || snippet.console) {
test(snippet)
return
}
// Must be an unmarked snippet....
}
private void test(Snippet test) {
setupCurrent(test)
if (false == test.continued) {
current.println('---')
current.println("\"$test.start\":")
}
if (test.skipTest) {
current.println(" - skip:")
current.println(" features: always_skip")
current.println(" reason: $test.skipTest")
}
if (test.setup != null) {
String setup = setups[test.setup]
if (setup == null) {
throw new InvalidUserDataException("Couldn't find setup "
+ "for $test")
}
current.println(setup)
}
body(test)
}
private void response(Snippet response) {
current.println(" - response_body: |")
response.contents.eachLine { current.println(" $it") }
}
void emitDo(String method, String pathAndQuery,
String body, String catchPart) {
def (String path, String query) = pathAndQuery.tokenize('?')
current.println(" - do:")
if (catchPart != null) {
current.println(" catch: $catchPart")
}
current.println(" raw:")
current.println(" method: $method")
current.println(" path: \"$path\"")
if (query != null) {
for (String param: query.tokenize('&')) {
def (String name, String value) = param.tokenize('=')
if (value == null) {
value = ''
}
current.println(" $name: \"$value\"")
}
}
if (body != null) {
// Throw out the leading newline we get from parsing the body
body = body.substring(1)
current.println(" body: |")
body.eachLine { current.println(" $it") }
}
}
private void setup(Snippet setup) {
if (lastDocsPath == setup.path) {
throw new InvalidUserDataException("$setup: wasn't first")
}
setupCurrent(setup)
current.println('---')
current.println("setup:")
body(setup)
// always wait for yellow before anything is executed
current.println(
" - do:\n" +
" raw:\n" +
" method: GET\n" +
" path: \"_cluster/health\"\n" +
" wait_for_status: \"yellow\"")
}
private void body(Snippet snippet) {
parse("$snippet", snippet.contents, SYNTAX) { matcher, last ->
if (matcher.group("comment") != null) {
// Comment
return
}
String method = matcher.group("method")
String pathAndQuery = matcher.group("pathAndQuery")
String body = matcher.group("body")
String catchPart = last ? snippet.catchPart : null
if (pathAndQuery.startsWith('/')) {
// Leading '/'s break the generated paths
pathAndQuery = pathAndQuery.substring(1)
}
emitDo(method, pathAndQuery, body, catchPart)
}
}
private PrintWriter setupCurrent(Snippet test) {
if (lastDocsPath == test.path) {
return
}
finishLastTest()
lastDocsPath = test.path
// Make the destination file:
// Shift the path into the destination directory tree
Path dest = outputRoot().toPath().resolve(test.path)
// Replace the extension
String fileName = dest.getName(dest.nameCount - 1)
dest = dest.parent.resolve(fileName.replace('.asciidoc', '.yaml'))
// Now setup the writer
Files.createDirectories(dest.parent)
current = dest.newPrintWriter('UTF-8')
}
void finishLastTest() {
if (current != null) {
current.close()
current = null
}
}
}
}
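To make the snippet grammar concrete, here is a standalone, illustrative-only Groovy script (all names are local to the demo, not part of this diff) that applies the same regex shape as SYNTAX to a typical console snippet and prints what body() would hand to emitDo():

import java.util.regex.Matcher

String badBody = /GET|PUT|POST|HEAD|OPTIONS|DELETE|#/
String syntax = /(?:(?<comment>#.+)|(?<method>GET|PUT|POST|HEAD|OPTIONS|DELETE)\s+(?<pathAndQuery>[^\n]+)(?<body>(?:\n(?!$badBody)[^\n]+)+)?)\n+/

String snippet = 'GET /twitter/_search?pretty\n{ "query": { "match_all": {} } }\n'
Matcher m = snippet =~ syntax
while (m.find()) {
    if (m.group('comment') != null) continue    // comments produce no test body
    println "method=${m.group('method')} pathAndQuery=${m.group('pathAndQuery')}"
    // like emitDo()'s caller, drop the leading newline captured with the body
    println "body=${m.group('body')?.substring(1)}"
}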


@@ -0,0 +1,308 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.gradle.doc
import org.gradle.api.DefaultTask
import org.gradle.api.InvalidUserDataException
import org.gradle.api.file.ConfigurableFileTree
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.TaskAction
import java.nio.file.Path
import java.util.regex.Matcher
/**
* A task which will run a closure on each snippet in the documentation.
*/
public class SnippetsTask extends DefaultTask {
private static final String SCHAR = /(?:\\\/|[^\/])/
private static final String SUBSTITUTION = /s\/($SCHAR+)\/($SCHAR*)\//
private static final String CATCH = /catch:\s*((?:\/[^\/]+\/)|[^ \]]+)/
private static final String SKIP = /skip:([^\]]+)/
private static final String SETUP = /setup:([^ \]]+)/
private static final String TEST_SYNTAX =
/(?:$CATCH|$SUBSTITUTION|$SKIP|(continued)|$SETUP) ?/
/**
* Action to take on each snippet. Called with a single parameter, an
* instance of Snippet.
*/
Closure perSnippet
/**
The docs to scan. Defaults to every file in the directory except the
* build.gradle file because that is appropriate for Elasticsearch's docs
* directory.
*/
@InputFiles
ConfigurableFileTree docs = project.fileTree(project.projectDir) {
// No snippets in the build file
exclude 'build.gradle'
// That is where the snippets go, not where they come from!
exclude 'build'
}
@TaskAction
public void executeTask() {
/*
* Walks each line of each file, building snippets as it encounters
* the lines that make up the snippet.
*/
for (File file: docs) {
String lastLanguage
int lastLanguageLine
Snippet snippet = null
StringBuilder contents = null
List substitutions = null
Closure emit = {
snippet.contents = contents.toString()
contents = null
if (substitutions != null) {
substitutions.each { String pattern, String subst ->
/*
* $body is really common but it looks like a
* backreference so we just escape it here to make the
* tests cleaner.
*/
subst = subst.replace('$body', '\\$body')
// \n is a new line....
subst = subst.replace('\\n', '\n')
snippet.contents = snippet.contents.replaceAll(
pattern, subst)
}
substitutions = null
}
perSnippet(snippet)
snippet = null
}
file.eachLine('UTF-8') { String line, int lineNumber ->
Matcher matcher
if (line ==~ /-{4,}\s*/) { // Four dashes look like a snippet
if (snippet == null) {
Path path = docs.dir.toPath().relativize(file.toPath())
snippet = new Snippet(path: path, start: lineNumber)
if (lastLanguageLine == lineNumber - 1) {
snippet.language = lastLanguage
}
} else {
snippet.end = lineNumber
}
return
}
matcher = line =~ /\[source,(\w+)]\s*/
if (matcher.matches()) {
lastLanguage = matcher.group(1)
lastLanguageLine = lineNumber
return
}
if (line ==~ /\/\/\s*AUTOSENSE\s*/) {
throw new InvalidUserDataException("AUTOSENSE has been " +
"replaced by CONSOLE. Use that instead at " +
"$file:$lineNumber")
}
if (line ==~ /\/\/\s*CONSOLE\s*/) {
if (snippet == null) {
throw new InvalidUserDataException("CONSOLE not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.console = true
return
}
matcher = line =~ /\/\/\s*TEST(\[(.+)\])?\s*/
if (matcher.matches()) {
if (snippet == null) {
throw new InvalidUserDataException("TEST not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.test = true
if (matcher.group(2) != null) {
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), TEST_SYNTAX) {
if (it.group(1) != null) {
snippet.catchPart = it.group(1)
return
}
if (it.group(2) != null) {
if (substitutions == null) {
substitutions = []
}
substitutions.add([it.group(2), it.group(3)])
return
}
if (it.group(4) != null) {
snippet.skipTest = it.group(4)
return
}
if (it.group(5) != null) {
snippet.continued = true
return
}
if (it.group(6) != null) {
snippet.setup = it.group(6)
return
}
throw new InvalidUserDataException(
"Invalid test marker: $line")
}
}
return
}
matcher = line =~ /\/\/\s*TESTRESPONSE(\[(.+)\])?\s*/
if (matcher.matches()) {
if (snippet == null) {
throw new InvalidUserDataException("TESTRESPONSE not " +
"paired with a snippet at $file:$lineNumber")
}
snippet.testResponse = true
if (matcher.group(2) != null) {
if (substitutions == null) {
substitutions = []
}
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), /$SUBSTITUTION ?/) {
substitutions.add([it.group(1), it.group(2)])
}
}
return
}
if (line ==~ /\/\/\s*TESTSETUP\s*/) {
snippet.testSetup = true
return
}
if (snippet == null) {
// Outside
return
}
if (snippet.end == Snippet.NOT_FINISHED) {
// Inside
if (contents == null) {
contents = new StringBuilder()
}
// We don't need the annotations
line = line.replaceAll(/<\d+>/, '')
// Nor any trailing spaces
line = line.replaceAll(/\s+$/, '')
contents.append(line).append('\n')
return
}
// Just finished
emit()
}
if (snippet != null) emit()
}
}
static class Snippet {
static final int NOT_FINISHED = -1
/**
* Path to the file containing this snippet. Relative to docs.dir of the
* SnippetsTask that created it.
*/
Path path
int start
int end = NOT_FINISHED
String contents
boolean console = false
boolean test = false
boolean testResponse = false
boolean testSetup = false
String skipTest = null
boolean continued = false
String language = null
String catchPart = null
String setup = null
@Override
public String toString() {
String result = "$path[$start:$end]"
if (language != null) {
result += "($language)"
}
if (console) {
result += '// CONSOLE'
}
if (test) {
result += '// TEST'
if (catchPart) {
result += "[catch: $catchPart]"
}
if (skipTest) {
result += "[skip=$skipTest]"
}
if (continued) {
result += '[continued]'
}
if (setup) {
result += "[setup:$setup]"
}
}
if (testResponse) {
result += '// TESTRESPONSE'
}
if (testSetup) {
result += '// TESTSETUP'
}
return result
}
}
/**
* Repeatedly match the pattern to the string, calling the closure with the
* matchers each time there is a match. If there are characters that don't
* match then blow up. If the closure takes two parameters then the second
* one is "is this the last match?".
*/
protected parse(String location, String s, String pattern, Closure c) {
if (s == null) {
return // Silly null, only real stuff gets to match!
}
Matcher m = s =~ pattern
int offset = 0
Closure extraContent = { message ->
StringBuilder cutOut = new StringBuilder()
cutOut.append(s[offset - 6..offset - 1])
cutOut.append('*')
cutOut.append(s[offset..Math.min(offset + 5, s.length() - 1)])
String cutOutNoNl = cutOut.toString().replace('\n', '\\n')
throw new InvalidUserDataException("$location: Extra content "
+ "$message ('$cutOutNoNl') matching [$pattern]: $s")
}
while (m.find()) {
if (m.start() != offset) {
extraContent("between [$offset] and [${m.start()}]")
}
offset = m.end()
if (c.maximumNumberOfParameters == 1) {
c(m)
} else {
c(m, offset == s.length())
}
}
if (offset == 0) {
throw new InvalidUserDataException("$location: Didn't match "
+ "$pattern: $s")
}
if (offset != s.length()) {
extraContent("after [$offset]")
}
}
}


@@ -18,11 +18,13 @@
*/
package org.elasticsearch.gradle.plugin
import nebula.plugin.publishing.maven.MavenBasePublishPlugin
import nebula.plugin.publishing.maven.MavenManifestPlugin
import nebula.plugin.publishing.maven.MavenScmPlugin
import org.elasticsearch.gradle.BuildPlugin
import org.elasticsearch.gradle.test.RestIntegTestTask
import org.elasticsearch.gradle.test.RunTask
import org.gradle.api.Project
import org.gradle.api.artifacts.Dependency
import org.gradle.api.tasks.SourceSet
import org.gradle.api.tasks.bundling.Zip
@@ -50,6 +52,7 @@ public class PluginBuildPlugin extends BuildPlugin {
} else {
project.integTest.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)
project.tasks.run.clusterConfig.plugin(name, project.bundlePlugin.outputs.files)
addPomGeneration(project)
}
project.namingConventions {
@@ -125,4 +128,32 @@
project.configurations.getByName('default').extendsFrom = []
project.artifacts.add('default', bundle)
}
/**
* Adds the plugin jar and zip as publications.
*/
protected static void addPomGeneration(Project project) {
project.plugins.apply(MavenBasePublishPlugin.class)
project.plugins.apply(MavenScmPlugin.class)
project.publishing {
publications {
nebula {
artifact project.bundlePlugin
pom.withXml {
// overwrite the name/description in the pom nebula set up
Node root = asNode()
for (Node node : root.children()) {
if (node.name() == 'name') {
node.setValue(project.pluginProperties.extension.name)
} else if (node.name() == 'description') {
node.setValue(project.pluginProperties.extension.description)
}
}
}
}
}
}
}
}


@@ -57,11 +57,13 @@ class PluginPropertiesTask extends Copy {
// configure property substitution
from(templateFile)
into(generatedResourcesDir)
expand(generateSubstitutions())
Map<String, String> properties = generateSubstitutions()
expand(properties)
inputs.properties(properties)
}
}
Map generateSubstitutions() {
Map<String, String> generateSubstitutions() {
def stringSnap = { version ->
if (version.endsWith("-SNAPSHOT")) {
return version.substring(0, version.length() - 9)
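A quick sanity check on the version.length() - 9 arithmetic above (illustrative only): '-SNAPSHOT' is nine characters, so the substring strips exactly that suffix.

assert '-SNAPSHOT'.length() == 9
assert '5.0.0-SNAPSHOT'.substring(0, '5.0.0-SNAPSHOT'.length() - 9) == '5.0.0'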


@@ -21,7 +21,6 @@ package org.elasticsearch.gradle.precommit
import org.elasticsearch.gradle.LoggedExec
import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.InputFile
import org.gradle.api.tasks.OutputFile
/**
@@ -35,14 +34,12 @@ public class JarHellTask extends LoggedExec {
* inputs (ie the jars/class files).
*/
@OutputFile
public File successMarker = new File(project.buildDir, 'markers/jarHell')
/** The classpath to run jarhell check on, defaults to the test runtime classpath */
@InputFile
public FileCollection classpath = project.sourceSets.test.runtimeClasspath
File successMarker = new File(project.buildDir, 'markers/jarHell')
public JarHellTask() {
project.afterEvaluate {
FileCollection classpath = project.sourceSets.test.runtimeClasspath
inputs.files(classpath)
dependsOn(classpath)
description = "Runs CheckJarHell on ${classpath}"
executable = new File(project.javaHome, 'bin/java')
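The success-marker idiom above generalizes to any check task with no real output; a minimal sketch under assumed names (hypothetical task, not part of this diff):

// hypothetical build.gradle fragment
task myCheck {
    File marker = new File(project.buildDir, 'markers/myCheck')
    inputs.files(project.sourceSets.test.runtimeClasspath)
    outputs.file(marker)    // the marker is the only output Gradle tracks
    doLast {
        // ... perform the actual check here ...
        marker.parentFile.mkdirs()
        marker.setText('done', 'UTF-8')
    }
}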


@@ -22,6 +22,8 @@ import org.apache.rat.anttasks.Report
import org.apache.rat.anttasks.SubstringLicenseMatcher
import org.apache.rat.license.SimpleLicenseFamily
import org.elasticsearch.gradle.AntTask
import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.OutputFile
import org.gradle.api.tasks.SourceSet
import java.nio.file.Files
@@ -33,8 +35,22 @@ import java.nio.file.Files
*/
public class LicenseHeadersTask extends AntTask {
@OutputFile
File reportFile = new File(project.buildDir, 'reports/licenseHeaders/rat.log')
/**
* The list of java files to check. protected so the afterEvaluate closure in the
* constructor can write to it.
*/
protected List<FileCollection> javaFiles
LicenseHeadersTask() {
description = "Checks sources for missing, incorrect, or unacceptable license headers"
// Delay resolving the dependencies until after evaluation so we pick up generated sources
project.afterEvaluate {
javaFiles = project.sourceSets.collect({it.allJava})
inputs.files(javaFiles)
}
}
@Override
@@ -43,17 +59,13 @@ public class LicenseHeadersTask extends AntTask {
ant.project.addDataTypeDefinition('substringMatcher', SubstringLicenseMatcher)
ant.project.addDataTypeDefinition('approvedLicense', SimpleLicenseFamily)
// create a file for the log to go to under reports/
File reportDir = new File(project.buildDir, "reports/licenseHeaders")
reportDir.mkdirs()
File reportFile = new File(reportDir, "rat.log")
Files.deleteIfExists(reportFile.toPath())
// run rat, going to the file
List<FileCollection> input = javaFiles
ant.ratReport(reportFile: reportFile.absolutePath, addDefaultLicenseMatchers: true) {
// checks all the java sources (allJava)
for (SourceSet set : project.sourceSets) {
for (File dir : set.allJava.srcDirs) {
for (FileCollection dirSet : input) {
for (File dir: dirSet.srcDirs) {
// sometimes these dirs don't exist, e.g. site-plugin has no actual java src/main...
if (dir.exists()) {
ant.fileset(dir: dir)
@@ -85,12 +97,12 @@
// parsers generated by antlr
pattern(substring: "ANTLR GENERATED CODE")
}
// approved categories
approvedLicense(familyName: "Apache")
approvedLicense(familyName: "Generated")
approvedLicense(familyName: "Generated")
}
// check the license file for any errors, this should be fast.
boolean zeroUnknownLicenses = false
boolean foundProblemsWithFiles = false
@@ -98,12 +110,12 @@
if (line.startsWith("0 Unknown Licenses")) {
zeroUnknownLicenses = true
}
if (line.startsWith(" !")) {
foundProblemsWithFiles = true
}
}
if (zeroUnknownLicenses == false || foundProblemsWithFiles) {
// print the unapproved license section, usually its all you need to fix problems.
int sectionNumber = 0


@@ -62,9 +62,8 @@ class PrecommitTasks {
private static Task configureForbiddenApis(Project project) {
project.pluginManager.apply(ForbiddenApisPlugin.class)
project.forbiddenApis {
internalRuntimeForbidden = true
failOnUnsupportedJava = false
bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-system-out']
bundledSignatures = ['jdk-unsafe', 'jdk-deprecated', 'jdk-non-portable', 'jdk-system-out']
signaturesURLs = [getClass().getResource('/forbidden/jdk-signatures.txt'),
getClass().getResource('/forbidden/es-all-signatures.txt')]
suppressAnnotations = ['**.SuppressForbidden']
@@ -87,16 +86,43 @@ }
}
private static Task configureCheckstyle(Project project) {
// Always copy the checkstyle configuration files to 'buildDir/checkstyle' since the resources could be located in a jar
// file. If the resources are located in a jar, Gradle will fail when it tries to turn the URL into a file
URL checkstyleConfUrl = PrecommitTasks.getResource("/checkstyle.xml")
URL checkstyleSuppressionsUrl = PrecommitTasks.getResource("/checkstyle_suppressions.xml")
File checkstyleDir = new File(project.buildDir, "checkstyle")
File checkstyleSuppressions = new File(checkstyleDir, "checkstyle_suppressions.xml")
File checkstyleConf = new File(checkstyleDir, "checkstyle.xml");
Task copyCheckstyleConf = project.tasks.create("copyCheckstyleConf")
// configure inputs and outputs so up to date works properly
copyCheckstyleConf.outputs.files(checkstyleSuppressions, checkstyleConf)
if ("jar".equals(checkstyleConfUrl.getProtocol())) {
JarURLConnection jarURLConnection = (JarURLConnection) checkstyleConfUrl.openConnection()
copyCheckstyleConf.inputs.file(jarURLConnection.getJarFileURL())
} else if ("file".equals(checkstyleConfUrl.getProtocol())) {
copyCheckstyleConf.inputs.files(checkstyleConfUrl.getFile(), checkstyleSuppressionsUrl.getFile())
}
copyCheckstyleConf.doLast {
checkstyleDir.mkdirs()
// withStream will close the output stream and IOGroovyMethods#getBytes reads the InputStream fully and closes it
new FileOutputStream(checkstyleConf).withStream {
it.write(checkstyleConfUrl.openStream().getBytes())
}
new FileOutputStream(checkstyleSuppressions).withStream {
it.write(checkstyleSuppressionsUrl.openStream().getBytes())
}
}
Task checkstyleTask = project.tasks.create('checkstyle')
// Apply the checkstyle plugin to create `checkstyleMain` and `checkstyleTest`. It only
// creates them if there is main or test code to check and it makes `check` depend
// on them. But we want `precommit` to depend on `checkstyle` which depends on them so
// we have to swap them.
project.pluginManager.apply('checkstyle')
URL checkstyleSuppressions = PrecommitTasks.getResource('/checkstyle_suppressions.xml')
project.checkstyle {
config = project.resources.text.fromFile(
PrecommitTasks.getResource('/checkstyle.xml'), 'UTF-8')
config = project.resources.text.fromFile(checkstyleConf, 'UTF-8')
configProperties = [
suppressions: checkstyleSuppressions
]
@ -106,6 +132,7 @@ class PrecommitTasks {
if (task != null) {
project.tasks['check'].dependsOn.remove(task)
checkstyleTask.dependsOn(task)
task.dependsOn(copyCheckstyleConf)
task.inputs.file(checkstyleSuppressions)
}
}
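The copy step above exists because the checkstyle config has to be a real file, and a classpath resource may live inside a jar. A minimal standalone sketch of the same trick, with illustrative resource name and output path:

// Extract a classpath resource to a plain file so tools that require a
// File rather than a jar: URL can read it. Paths here are illustrative.
URL confUrl = PrecommitTasks.getResource('/checkstyle.xml')
File outFile = new File('build/checkstyle/checkstyle.xml')
outFile.parentFile.mkdirs()
// withStream closes the output; getBytes() fully reads and closes the input
new FileOutputStream(outFile).withStream { out ->
    out.write(confUrl.openStream().getBytes())
}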

View File

@ -27,6 +27,9 @@ import org.apache.tools.ant.Project;
import org.elasticsearch.gradle.AntTask;
import org.gradle.api.artifacts.Configuration;
import org.gradle.api.file.FileCollection;
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.InputFiles
import org.gradle.api.tasks.OutputFile
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
@ -40,17 +43,62 @@ import java.util.regex.Pattern;
* Basic static checking to keep tabs on third party JARs
*/
public class ThirdPartyAuditTask extends AntTask {
// patterns for classes to exclude, because we understand their issues
private String[] excludes = new String[0];
private List<String> excludes = [];
/**
* Input for the task. See the javadoc for {@link #getJars} for more. Protected
* so the afterEvaluate closure in the constructor can write to it.
*/
protected FileCollection jars;
/**
* Classpath against which to run the third party audit. Protected so the
* afterEvaluate closure in the constructor can write it.
*/
protected FileCollection classpath;
/**
* We use a simple "marker" file that we touch when the task succeeds
* as the task output. This is compared against the modified time of the
* inputs (ie the jars/class files).
*/
@OutputFile
File successMarker = new File(project.buildDir, 'markers/thirdPartyAudit')
ThirdPartyAuditTask() {
// we depend on this because it's the only reliable configuration
// this probably makes the build slower: gradle you suck here when it comes to configurations, you pay the price.
dependsOn(project.configurations.testCompile);
description = "Checks third party JAR bytecode for missing classes, use of internal APIs, and other horrors'";
project.afterEvaluate {
Configuration configuration = project.configurations.findByName('runtime');
if (configuration == null) {
// some projects apparently do not have 'runtime'? what a nice inconsistency,
// basically only serves to waste time in build logic!
configuration = project.configurations.findByName('testCompile');
}
assert configuration != null;
classpath = configuration
// we only want third party dependencies.
jars = configuration.fileCollection({ dependency ->
dependency.group.startsWith("org.elasticsearch") == false
});
// we don't want provided dependencies, which we have already scanned. e.g. don't
// scan ES core's dependencies for every single plugin
Configuration provided = project.configurations.findByName('provided')
if (provided != null) {
jars -= provided
}
inputs.files(jars)
onlyIf { jars.isEmpty() == false }
}
}
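The marker-file pattern above is how an Ant-backed task gets incremental-build support: declare the jars as inputs and a single empty file as the output, and touch that file only on success, so Gradle's up-to-date check compares the marker's timestamp against the inputs. A minimal sketch in plain Gradle; the task name and marker path are hypothetical, and it assumes a 'runtime' configuration exists:

task thirdPartyAuditSketch {
    // hypothetical marker; Gradle compares its timestamp to the inputs
    File successMarker = new File(buildDir, 'markers/thirdPartyAuditSketch')
    inputs.files(configurations.runtime)
    outputs.file(successMarker)
    doLast {
        // ... run the real audit here; only touch the marker if it passed
        successMarker.parentFile.mkdirs()
        successMarker.setText('', 'UTF-8')
    }
}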
/**
* classes that should be excluded from the scan,
* e.g. because we know what sheisty stuff those particular classes are up to.
@ -61,21 +109,22 @@ public class ThirdPartyAuditTask extends AntTask {
throw new IllegalArgumentException("illegal third party audit exclusion: '" + s + "', wildcards are not permitted!");
}
}
excludes = classes;
excludes = classes.sort();
}
/**
* Returns current list of exclusions.
*/
public String[] getExcludes() {
@Input
public List<String> getExcludes() {
return excludes;
}
// yes, we parse Uwe Schindler's errors to find missing classes, and to keep a continuous audit. Just don't let him know!
static final Pattern MISSING_CLASS_PATTERN =
Pattern.compile(/WARNING: The referenced class '(.*)' cannot be loaded\. Please fix the classpath\!/);
static final Pattern VIOLATION_PATTERN =
Pattern.compile(/\s\sin ([a-zA-Z0-9\$\.]+) \(.*\)/);
// we log everything and capture errors and handle them with our whitelist
@ -124,32 +173,8 @@ public class ThirdPartyAuditTask extends AntTask {
@Override
protected void runAnt(AntBuilder ant) {
Configuration configuration = project.configurations.findByName('runtime');
if (configuration == null) {
// some projects apparently do not have 'runtime'? what a nice inconsistency,
// basically only serves to waste time in build logic!
configuration = project.configurations.findByName('testCompile');
}
assert configuration != null;
ant.project.addTaskDefinition('thirdPartyAudit', de.thetaphi.forbiddenapis.ant.AntTask);
// we only want third party dependencies.
FileCollection jars = configuration.fileCollection({ dependency ->
dependency.group.startsWith("org.elasticsearch") == false
});
// we don't want provided dependencies, which we have already scanned. e.g. don't
// scan ES core's dependencies for every single plugin
Configuration provided = project.configurations.findByName('provided');
if (provided != null) {
jars -= provided;
}
// no dependencies matched, we are done
if (jars.isEmpty()) {
return;
}
// print which jars we are going to scan, always
// this is not the time to try to be succinct! Forbidden will print plenty on its own!
Set<String> names = new TreeSet<>();
@ -171,26 +196,22 @@ public class ThirdPartyAuditTask extends AntTask {
}
// convert exclusion class names to binary file names
String[] excludedFiles = new String[excludes.length];
for (int i = 0; i < excludes.length; i++) {
excludedFiles[i] = excludes[i].replace('.', '/') + ".class";
}
Set<String> excludedSet = new TreeSet<>(Arrays.asList(excludedFiles));
List<String> excludedFiles = excludes.collect {it.replace('.', '/') + ".class"}
Set<String> excludedSet = new TreeSet<>(excludedFiles);
// jarHellReprise
Set<String> sheistySet = getSheistyClasses(tmpDir.toPath());
try {
ant.thirdPartyAudit(internalRuntimeForbidden: false,
failOnUnsupportedJava: false,
try {
ant.thirdPartyAudit(failOnUnsupportedJava: false,
failOnMissingClasses: false,
signaturesFile: new File(getClass().getResource('/forbidden/third-party-audit.txt').toURI()),
classpath: configuration.asPath) {
classpath: classpath.asPath) {
fileset(dir: tmpDir)
}
} catch (BuildException ignore) {}
EvilLogger evilLogger = null;
for (BuildListener listener : ant.project.getBuildListeners()) {
if (listener instanceof EvilLogger) {
evilLogger = (EvilLogger) listener;
@ -228,6 +249,8 @@ public class ThirdPartyAuditTask extends AntTask {
// clean up our mess (if we succeed)
ant.delete(dir: tmpDir.getAbsolutePath());
successMarker.setText("", 'UTF-8')
}
/**
@ -235,11 +258,11 @@ public class ThirdPartyAuditTask extends AntTask {
*/
private Set<String> getSheistyClasses(Path root) {
// system.parent = extensions loader.
// note: for jigsaw, this evilness will need modifications (e.g. use jrt filesystem!).
// but groovy/gradle needs to work at all first!
ClassLoader ext = ClassLoader.getSystemClassLoader().getParent();
assert ext != null;
Set<String> sheistySet = new TreeSet<>();
Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
@Override

View File

@ -418,8 +418,7 @@ class ClusterFormationTasks {
// argument are wrapped in an ExecArgWrapper that escapes commas
args execArgs.collect { a -> new EscapeCommaWrapper(arg: a) }
} else {
executable 'sh'
args execArgs
commandLine execArgs
}
}
}
@ -451,7 +450,7 @@ class ClusterFormationTasks {
// gradle task options are not processed until the end of the configuration phase
if (node.config.debug) {
println 'Running elasticsearch in debug mode, suspending until connected on port 8000'
node.env['JAVA_OPTS'] = '-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000'
node.env['ES_JAVA_OPTS'] = '-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000'
}
node.getCommandString().eachLine { line -> logger.info(line) }

View File

@ -19,7 +19,6 @@
package org.elasticsearch.gradle.test
import org.apache.tools.ant.taskdefs.condition.Os
import org.elasticsearch.gradle.VersionProperties
import org.gradle.api.InvalidUserDataException
import org.gradle.api.Project
import org.gradle.api.Task
@ -129,19 +128,19 @@ class NodeInfo {
args.add("${esScript}")
}
env = [
'JAVA_HOME' : project.javaHome,
'ES_GC_OPTS': config.jvmArgs // we pass these with the undocumented gc opts so the argline can set gc, etc
]
args.addAll("-E", "es.node.portsfile=true")
env.put('ES_JAVA_OPTS', config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" "))
env = [ 'JAVA_HOME' : project.javaHome ]
args.addAll("-E", "node.portsfile=true")
String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs
env.put('ES_JAVA_OPTS', esJavaOpts)
for (Map.Entry<String, String> property : System.properties.entrySet()) {
if (property.getKey().startsWith('es.')) {
if (property.key.startsWith('tests.es.')) {
args.add("-E")
args.add("${property.getKey()}=${property.getValue()}")
args.add("${property.key.substring('tests.es.'.size())}=${property.value}")
}
}
args.addAll("-E", "es.path.conf=${confDir}")
env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options'))
args.addAll("-E", "path.conf=${confDir}")
if (Os.isFamily(Os.FAMILY_WINDOWS)) {
args.add('"') // end the entire command, quoted
}
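The renaming above (system properties prefixed `tests.es.` become `-E` settings with the prefix stripped) can be seen in isolation in this sketch; the property names are invented:

// Forward tests.es.* system properties to elasticsearch as -E settings.
Map<String, String> props = ['tests.es.node.attr.rack': 'r1',
                             'tests.jvm.argline'      : '-Xmx512m']
List<String> args = []
props.each { key, value ->
    if (key.startsWith('tests.es.')) {
        args.add('-E')
        args.add(key.substring('tests.es.'.length()) + '=' + value)
    }
}
assert args == ['-E', 'node.attr.rack=r1']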

View File

@ -19,6 +19,7 @@
package org.elasticsearch.gradle.vagrant
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction
import org.gradle.logging.ProgressLoggerFactory
import org.gradle.process.internal.ExecAction
@ -30,41 +31,22 @@ import javax.inject.Inject
* Runs bats over vagrant. Pretty much like running it using Exec but with a
* nicer output formatter.
*/
class BatsOverVagrantTask extends DefaultTask {
String command
String boxName
ExecAction execAction
public class BatsOverVagrantTask extends VagrantCommandTask {
BatsOverVagrantTask() {
execAction = getExecActionFactory().newExecAction()
}
@Input
String command
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
BatsOverVagrantTask() {
project.afterEvaluate {
args 'ssh', boxName, '--command', command
}
}
@Inject
ExecActionFactory getExecActionFactory() {
throw new UnsupportedOperationException();
}
void boxName(String boxName) {
this.boxName = boxName
}
void command(String command) {
this.command = command
}
@TaskAction
void exec() {
// It'd be nice if --machine-readable were, well, nice
execAction.commandLine(['vagrant', 'ssh', boxName, '--command', command])
execAction.setStandardOutput(new TapLoggerOutputStream(
command: command,
factory: getProgressLoggerFactory(),
logger: logger))
execAction.execute();
}
@Override
protected OutputStream createLoggerOutputStream() {
return new TapLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
logger: logger)
}
}

View File

@ -19,9 +19,11 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
import groovy.transform.PackageScope
import org.gradle.api.GradleScriptException
import org.gradle.api.logging.Logger
import org.gradle.logging.ProgressLogger
import org.gradle.logging.ProgressLoggerFactory
import java.util.regex.Matcher
@ -35,73 +37,77 @@ import java.util.regex.Matcher
* There is a Tap4j project but we can't use it because it wants to parse the
* entire TAP stream at once and won't parse it stream-wise.
*/
class TapLoggerOutputStream extends LoggingOutputStream {
ProgressLogger progressLogger
Logger logger
int testsCompleted = 0
int testsFailed = 0
int testsSkipped = 0
Integer testCount
String countsFormat
public class TapLoggerOutputStream extends LoggingOutputStream {
private final ProgressLogger progressLogger
private boolean isStarted = false
private final Logger logger
private int testsCompleted = 0
private int testsFailed = 0
private int testsSkipped = 0
private Integer testCount
private String countsFormat
TapLoggerOutputStream(Map args) {
logger = args.logger
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("TAP output for $args.command")
progressLogger.started()
progressLogger.progress("Starting $args.command...")
}
void flush() {
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// System.out.print "===> $line\n"
if (testCount == null) {
try {
testCount = line.split('\\.').last().toInteger()
def length = (testCount as String).length()
countsFormat = "%0${length}d"
countsFormat = "[$countsFormat|$countsFormat|$countsFormat/$countsFormat]"
return
} catch (Exception e) {
throw new GradleScriptException(
'Error parsing first line of TAP stream!!', e)
}
}
Matcher m = line =~ /(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
if (!m.matches()) {
/* These might be failure report lines or comments or whatever. It's hard
to tell and it doesn't matter. */
logger.warn(line)
return
}
boolean skipped = m.group('skip') != null
boolean success = !skipped && m.group('status') == 'ok'
String skipReason = m.group('skipReason')
String suiteName = m.group('suite')
String testName = m.group('test')
String status
if (skipped) {
status = "SKIPPED"
testsSkipped++
} else if (success) {
status = " OK"
testsCompleted++
} else {
status = " FAILED"
testsFailed++
TapLoggerOutputStream(Map args) {
logger = args.logger
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("TAP output for `${args.command}`")
}
String counts = sprintf(countsFormat,
[testsCompleted, testsFailed, testsSkipped, testCount])
progressLogger.progress("Tests $counts, $status [$suiteName] $testName")
if (!success) {
logger.warn(line)
@Override
public void flush() {
if (isStarted == false) {
progressLogger.started()
isStarted = true
}
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// System.out.print "===> $line\n"
if (testCount == null) {
try {
testCount = line.split('\\.').last().toInteger()
def length = (testCount as String).length()
countsFormat = "%0${length}d"
countsFormat = "[$countsFormat|$countsFormat|$countsFormat/$countsFormat]"
return
} catch (Exception e) {
throw new GradleScriptException(
'Error parsing first line of TAP stream!!', e)
}
}
Matcher m = line =~ /(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
if (!m.matches()) {
/* These might be failure report lines or comments or whatever. It's hard
to tell and it doesn't matter. */
logger.warn(line)
return
}
boolean skipped = m.group('skip') != null
boolean success = !skipped && m.group('status') == 'ok'
String skipReason = m.group('skipReason')
String suiteName = m.group('suite')
String testName = m.group('test')
String status
if (skipped) {
status = "SKIPPED"
testsSkipped++
} else if (success) {
status = " OK"
testsCompleted++
} else {
status = " FAILED"
testsFailed++
}
String counts = sprintf(countsFormat,
[testsCompleted, testsFailed, testsSkipped, testCount])
progressLogger.progress("Tests $counts, $status [$suiteName] $testName")
if (!success) {
logger.warn(line)
}
}
}
}
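For reference, here are a couple of TAP lines the pattern above is written to match; the suite and test names are made up:

// Standalone check of the TAP line pattern used by the stream parser.
def pattern = ~/(?<status>ok|not ok) \d+(?<skip> # skip (?<skipReason>\(.+\))?)? \[(?<suite>.+)\] (?<test>.+)/
['ok 1 [packaging] install succeeds',
 'not ok 2 # skip (needs root) [packaging] removal cleans up'].each { line ->
    def m = line =~ pattern
    assert m.matches()
    println "${m.group('status')} -> [${m.group('suite')}] ${m.group('test')}"
}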

View File

@ -18,11 +18,10 @@
*/
package org.elasticsearch.gradle.vagrant
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction
import org.apache.commons.io.output.TeeOutputStream
import org.elasticsearch.gradle.LoggedExec
import org.gradle.api.tasks.Input
import org.gradle.logging.ProgressLoggerFactory
import org.gradle.process.internal.ExecAction
import org.gradle.process.internal.ExecActionFactory
import javax.inject.Inject
@ -30,43 +29,30 @@ import javax.inject.Inject
* Runs a vagrant command. Pretty much like Exec task but with a nicer output
* formatter and defaults to `vagrant` as first part of commandLine.
*/
class VagrantCommandTask extends DefaultTask {
List<Object> commandLine
String boxName
ExecAction execAction
public class VagrantCommandTask extends LoggedExec {
VagrantCommandTask() {
execAction = getExecActionFactory().newExecAction()
}
@Input
String boxName
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
public VagrantCommandTask() {
executable = 'vagrant'
project.afterEvaluate {
// It'd be nice if --machine-readable were, well, nice
standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
}
}
@Inject
ExecActionFactory getExecActionFactory() {
throw new UnsupportedOperationException();
}
protected OutputStream createLoggerOutputStream() {
return new VagrantLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
/* Vagrant tends to output a lot of stuff, but most of the important
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: ")
}
void boxName(String boxName) {
this.boxName = boxName
}
void commandLine(Object... commandLine) {
this.commandLine = commandLine
}
@TaskAction
void exec() {
// It'd be nice if --machine-readable were, well, nice
execAction.commandLine(['vagrant'] + commandLine)
execAction.setStandardOutput(new VagrantLoggerOutputStream(
command: commandLine.join(' '),
factory: getProgressLoggerFactory(),
/* Vagrant tends to output a lot of stuff, but most of the important
stuff starts with ==> $box */
squashedPrefix: "==> $boxName: "))
execAction.execute();
}
@Inject
ProgressLoggerFactory getProgressLoggerFactory() {
throw new UnsupportedOperationException();
}
}
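The interesting bit above is the TeeOutputStream: the subprocess output still flows to the task's normal standard output while a copy is fed to the progress-logging stream. A minimal standalone sketch, writing the copy to a temp file instead of a logger:

import org.apache.commons.io.output.TeeOutputStream

File log = File.createTempFile('vagrant-sketch', '.log')
// Everything written to `teed` reaches both underlying streams.
new FileOutputStream(log).withStream { copy ->
    OutputStream teed = new TeeOutputStream(System.out, copy)
    teed.write('==> box: provisioning...\n'.getBytes('UTF-8'))
    teed.flush()
}
assert log.text == '==> box: provisioning...\n'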

View File

@ -19,7 +19,9 @@
package org.elasticsearch.gradle.vagrant
import com.carrotsearch.gradle.junit4.LoggingOutputStream
import org.gradle.api.logging.Logger
import org.gradle.logging.ProgressLogger
import org.gradle.logging.ProgressLoggerFactory
/**
* Adapts an OutputStream being written to by vagrant into a ProgressLogger. It
@ -42,79 +44,60 @@ import org.gradle.logging.ProgressLogger
* to catch so it can render the output like
* "Heading text > stdout from the provisioner".
*/
class VagrantLoggerOutputStream extends LoggingOutputStream {
static final String HEADING_PREFIX = '==> '
public class VagrantLoggerOutputStream extends LoggingOutputStream {
private static final String HEADING_PREFIX = '==> '
ProgressLogger progressLogger
String squashedPrefix
String lastLine = ''
boolean inProgressReport = false
String heading = ''
private final ProgressLogger progressLogger
private boolean isStarted = false
private String squashedPrefix
private String lastLine = ''
private boolean inProgressReport = false
private String heading = ''
VagrantLoggerOutputStream(Map args) {
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("Vagrant $args.command")
progressLogger.started()
progressLogger.progress("Starting vagrant $args.command...")
squashedPrefix = args.squashedPrefix
}
void flush() {
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
void line(String line) {
// debugPrintLine(line) // Uncomment me to log every incoming line
if (line.startsWith('\r\u001b')) {
/* We don't want to try to be a full terminal emulator but we want to
keep the escape sequences from leaking and catch _some_ of the
meaning. */
line = line.substring(2)
if ('[K' == line) {
inProgressReport = true
}
return
VagrantLoggerOutputStream(Map args) {
progressLogger = args.factory.newOperation(VagrantLoggerOutputStream)
progressLogger.setDescription("Vagrant output for `$args.command`")
squashedPrefix = args.squashedPrefix
}
if (line.startsWith(squashedPrefix)) {
line = line.substring(squashedPrefix.length())
inProgressReport = false
lastLine = line
if (line.startsWith(HEADING_PREFIX)) {
line = line.substring(HEADING_PREFIX.length())
heading = line + ' > '
} else {
line = heading + line
}
} else if (inProgressReport) {
inProgressReport = false
line = lastLine + line
} else {
return
}
// debugLogLine(line) // Uncomment me to log every line we add to the logger
progressLogger.progress(line)
}
void debugPrintLine(line) {
System.out.print '----------> '
for (int i = start; i < end; i++) {
switch (buffer[i] as char) {
case ' '..'~':
System.out.print buffer[i] as char
break
default:
System.out.print '%'
System.out.print Integer.toHexString(buffer[i])
}
@Override
public void flush() {
if (isStarted == false) {
progressLogger.started()
isStarted = true
}
if (end == start) return
line(new String(buffer, start, end - start))
start = end
}
System.out.print '\n'
}
void debugLogLine(line) {
System.out.print '>>>>>>>>>>> '
System.out.print line
System.out.print '\n'
}
void line(String line) {
if (line.startsWith('\r\u001b')) {
/* We don't want to try to be a full terminal emulator but we want to
keep the escape sequences from leaking and catch _some_ of the
meaning. */
line = line.substring(2)
if ('[K' == line) {
inProgressReport = true
}
return
}
if (line.startsWith(squashedPrefix)) {
line = line.substring(squashedPrefix.length())
inProgressReport = false
lastLine = line
if (line.startsWith(HEADING_PREFIX)) {
line = line.substring(HEADING_PREFIX.length())
heading = line + ' > '
} else {
line = heading + line
}
} else if (inProgressReport) {
inProgressReport = false
line = lastLine + line
} else {
return
}
progressLogger.progress(line)
}
}
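To illustrate the squashing logic above with concrete lines (the box name and messages are invented): only lines carrying the `==> box: ` prefix survive, a second `==> ` marks a heading, and later lines are shown under that heading:

String squashedPrefix = '==> box: '
String heading = ''
['==> box: ==> Installing packages',
 '==> box: package one installed',
 'noise without the prefix'].each { raw ->
    if (raw.startsWith(squashedPrefix) == false) {
        return // dropped, just like the real stream drops it
    }
    String line = raw.substring(squashedPrefix.length())
    if (line.startsWith('==> ')) {
        line = line.substring('==> '.length())
        heading = line + ' > '
    } else {
        line = heading + line
    }
    println line // "Installing packages", then "Installing packages > package one installed"
}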

View File

@ -0,0 +1,20 @@
#
# Licensed to Elasticsearch under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
implementation-class=org.elasticsearch.gradle.doc.DocsTestPlugin

View File

@ -7,23 +7,20 @@
<!-- On Windows, Checkstyle matches files using \ path separator -->
<!-- These files are generated by ANTLR so its silly to hold them to our rules. -->
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]PainlessLexer\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]PainlessParser(|BaseVisitor|Visitor)\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]antlr[/\\]PainlessLexer\.java" checks="." />
<suppress files="org[/\\]elasticsearch[/\\]painless[/\\]antlr[/\\]PainlessParser(|BaseVisitor|Visitor)\.java" checks="." />
<!-- Hopefully temporary suppression of LineLength on files that don't pass it. We should remove these when we the
files start to pass. -->
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]queries[/\\]BlendedTermQuery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]queries[/\\]ExtendedCommonTermsQuery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]queryparser[/\\]classic[/\\]MapperQueryParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]search[/\\]postingshighlight[/\\]CustomPostingsHighlighter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]search[/\\]vectorhighlight[/\\]CustomFieldQuery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]Version.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]Action.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ActionModule.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ActionRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ReplicationResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]TransportClusterHealthAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]hotthreads[/\\]NodesHotThreadsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]hotthreads[/\\]TransportNodesHotThreadsAction.java" checks="LineLength" />
@ -31,8 +28,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]info[/\\]TransportNodesInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]stats[/\\]NodesStatsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]stats[/\\]TransportNodesStatsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]tasks[/\\]list[/\\]ListTasksResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]tasks[/\\]list[/\\]TransportListTasksAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]delete[/\\]DeleteRepositoryRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]delete[/\\]TransportDeleteRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]get[/\\]GetRepositoriesRequestBuilder.java" checks="LineLength" />
@ -42,8 +37,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]put[/\\]TransportPutRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]verify[/\\]TransportVerifyRepositoryAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]verify[/\\]VerifyRepositoryRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]ClusterRerouteRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]ClusterRerouteRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]TransportClusterRerouteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsRequestBuilder.java" checks="LineLength" />
@ -65,7 +58,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]snapshots[/\\]status[/\\]TransportSnapshotsStatusAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]ClusterStateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]TransportClusterStateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsIndices.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsNodeResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]ClusterStatsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]stats[/\\]TransportClusterStatsAction.java" checks="LineLength" />
@ -157,11 +149,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]delete[/\\]DeleteRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]delete[/\\]TransportDeleteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]explain[/\\]TransportExplainAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]fieldstats[/\\]FieldStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]fieldstats[/\\]FieldStatsRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]fieldstats[/\\]FieldStatsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]fieldstats[/\\]FieldStatsResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]fieldstats[/\\]TransportFieldStatsTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]GetRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]MultiGetRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]TransportGetAction.java" checks="LineLength" />
@ -187,21 +174,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestProxyActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]PutPipelineTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateExecutionService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolateRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]PercolateShardResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportMultiPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]TransportShardMultiPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]MultiSearchRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchPhaseExecutionException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]ShardSearchFailure.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]TransportClearScrollAction.java" checks="LineLength" />
@ -210,10 +187,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ActionFilter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]AutoCreateIndex.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]DelegatingActionListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]DestructiveOperations.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]HandledTransportAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]IndicesOptions.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ThreadedActionListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]ToXContentToBytes.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]BroadcastOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]BroadcastRequest.java" checks="LineLength" />
@ -225,23 +200,18 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]MasterNodeReadOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeReadAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]ClusterInfoRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]info[/\\]TransportClusterInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]nodes[/\\]NodesOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]nodes[/\\]TransportNodesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]ReplicationRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]ReplicationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]TransportBroadcastReplicationAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]TransportReplicationAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]instance[/\\]InstanceShardOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]instance[/\\]TransportInstanceSingleOperationAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]shard[/\\]SingleShardOperationRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]shard[/\\]SingleShardRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]shard[/\\]TransportSingleShardAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]tasks[/\\]TasksRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]tasks[/\\]TransportTasksAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]termvectors[/\\]MultiTermVectorsRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]termvectors[/\\]MultiTermVectorsRequestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]termvectors[/\\]TermVectorsRequest.java" checks="LineLength" />
@ -263,7 +233,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]JarHell.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]Seccomp.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]bootstrap[/\\]Security.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cache[/\\]recycler[/\\]PageCacheRecycler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]ElasticsearchClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]FilterClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]node[/\\]NodeClient.java" checks="LineLength" />
@ -279,7 +248,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]InternalClusterInfoService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]LocalNodeMasterListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]SnapshotsInProgress.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]MappingUpdatedAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]NodeIndexDeletedAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]index[/\\]NodeMappingRefreshAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]action[/\\]shard[/\\]ShardStateAction.java" checks="LineLength" />
@ -356,12 +324,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]NetworkService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]recycler[/\\]Recyclers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]rounding[/\\]Rounding.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]AbstractScopedSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]ClusterSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]IndexScopedSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]Setting.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]Settings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]loader[/\\]XContentSettingsLoader.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]unit[/\\]ByteSizeValue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]unit[/\\]TimeValue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]BigArrays.java" checks="LineLength" />
@ -393,10 +355,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PendingClusterStatesQueue.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]ESFileStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]Environment.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]NodeEnvironment.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]AsyncShardFetch.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]DanglingIndicesState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayMetaState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayService.java" checks="LineLength" />
@ -405,31 +364,21 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]PrimaryShardAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReplicaShardAllocator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]TransportNodesListGatewayMetaState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]TransportNodesListGatewayStartedShards.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]HttpTransportSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]netty[/\\]HttpRequestHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]netty[/\\]NettyHttpChannel.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]netty[/\\]NettyHttpServerTransport.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]AlreadyExpiredException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]CompositeIndexEventListener.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexSettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexingSlowLog.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergePolicyConfig.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergeSchedulerConfig.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]NodeServicesProvider.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]SearchSlowLog.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]AnalysisService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CommonGramsTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]CustomAnalyzerProvider.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]EdgeNGramTokenizerFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]NumericDoubleAnalyzer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]ShingleTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]StemmerOverrideTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]StopTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]compound[/\\]HyphenationCompoundWordTokenFilterFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]cache[/\\]bitset[/\\]BitsetFilterCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]cache[/\\]request[/\\]ShardRequestCache.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]codec[/\\]PerFieldMappingPostingFormatCodec.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]engine[/\\]ElasticsearchConcurrentMergeScheduler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]engine[/\\]Engine.java" checks="LineLength" />
@ -443,16 +392,13 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]fieldcomparator[/\\]FloatValuesComparatorSource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]fieldcomparator[/\\]LongValuesComparatorSource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]GlobalOrdinalsBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]GlobalOrdinalsIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]InternalGlobalOrdinalsIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]MultiOrdinals.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]OrdinalsBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ordinals[/\\]SinglePackedOrdinals.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractAtomicParentChildFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractIndexGeoPointFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]AbstractIndexOrdinalsFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]BinaryDVIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]GeoPointArrayIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]PagedBytesIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]ParentChildIndexFieldData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]plain[/\\]SortedNumericDVIndexFieldData.java" checks="LineLength" />
@ -473,12 +419,12 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]ParseContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]ParsedDocument.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]CompletionFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]DateFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]DoubleFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]FloatFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]NumberFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]LegacyDateFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]LegacyDoubleFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]LegacyFloatFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]LegacyNumberFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]StringFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]TokenCountFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]LegacyTokenCountFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]TypeParsers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]geo[/\\]BaseGeoPointFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]geo[/\\]GeoPointFieldMapper.java" checks="LineLength" />
@ -499,16 +445,10 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]ObjectMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]RootObjectMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]merge[/\\]MergeStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]ExtractQueryTermsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorFieldMapper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorQueriesRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]AbstractQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MatchQueryParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MoreLikeThisQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryBuilders.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryShardContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryValidationException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SimpleQueryParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]InnerHitsQueryParserHelper.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]QueryParsers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]search[/\\]MatchQuery.java" checks="LineLength" />
@@ -524,7 +464,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]ShardStateMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]StoreRecovery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]TranslogRecoveryPerformer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]similarity[/\\]SimilarityService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]snapshots[/\\]blobstore[/\\]BlobStoreIndexShardRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]snapshots[/\\]blobstore[/\\]BlobStoreIndexShardSnapshots.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]store[/\\]IndexStore.java" checks="LineLength" />
@@ -554,31 +493,22 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoveryFailedException.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySettings.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySource.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoverySourceHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]RecoveryState.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]recovery[/\\]StartRecoveryRequest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]TransportNodesListShardStoreMetaData.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]ttl[/\\]IndicesTTLService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]IngestMetadata.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineExecutionService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineStore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]CompoundProcessor.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]IngestDocument.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]Pipeline.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]ConvertProcessor.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]fs[/\\]FsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]DeadlockAnalyzer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]GcNames.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]HotThreads.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmGcMonitorService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]os[/\\]OsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]process[/\\]ProcessService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]Node.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]internal[/\\]InternalSettingsPreparer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorQuery.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]DummyPluginInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]RemovePluginCommand.java" checks="LineLength" />
@@ -593,13 +523,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]fs[/\\]FsRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]uri[/\\]URLIndexShardRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]uri[/\\]URLRepository.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BaseRestHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BytesRestResponse.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestController.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]RestClusterHealthAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]info[/\\]RestNodesInfoAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]stats[/\\]RestNodesStatsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]reroute[/\\]RestClusterRerouteAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]RestClusterGetSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]RestClusterUpdateSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]cluster[/\\]state[/\\]RestClusterStateAction.java" checks="LineLength" />
@@ -620,21 +548,15 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]bulk[/\\]RestBulkAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestCountAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestIndicesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodeAttrsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodesAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestPendingClusterTasksAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestShardsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestThreadPoolAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]fieldstats[/\\]RestFieldStatsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]get[/\\]RestMultiGetAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]index[/\\]RestIndexAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]main[/\\]RestMainAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]percolate[/\\]RestPercolateAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]script[/\\]RestDeleteIndexedScriptAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]script[/\\]RestPutIndexedScriptAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestClearScrollAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestMultiSearchAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestSearchAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]search[/\\]RestSearchScrollAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]suggest[/\\]RestSuggestAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]support[/\\]RestActions.java" checks="LineLength" />
@@ -651,7 +573,6 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]MultiValueMode.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]SearchService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]AggregatorFactories.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]AggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]InternalAggregation.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]InternalMultiBucketAggregation.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]ValuesSourceAggregationBuilder.java" checks="LineLength" />
@@ -664,10 +585,8 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]filters[/\\]FiltersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]filters[/\\]InternalFilters.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]geogrid[/\\]GeoHashGridAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]geogrid[/\\]GeoHashGridParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]global[/\\]GlobalAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]global[/\\]InternalGlobal.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]histogram[/\\]DateHistogramParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]histogram[/\\]HistogramAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]missing[/\\]InternalMissing.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]missing[/\\]MissingAggregator.java" checks="LineLength" />
@@ -676,10 +595,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]nested[/\\]ReverseNestedAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]InternalRange.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]RangeAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]date[/\\]DateRangeParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]date[/\\]InternalDateRange.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]geodistance[/\\]GeoDistanceParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]geodistance[/\\]InternalGeoDistance.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]range[/\\]ipv4[/\\]InternalIPv4Range.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]sampler[/\\]DiversifiedBytesHashSamplerAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]sampler[/\\]DiversifiedMapSamplerAggregator.java" checks="LineLength" />
@@ -690,15 +606,11 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]GlobalOrdinalsSignificantTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]InternalSignificantTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantLongTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantLongTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantStringTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantStringTermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]SignificantTermsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]UnmappedSignificantTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]GND.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]JLHScore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]NXYSignificanceHeuristic.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]PercentageScore.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]significant[/\\]heuristics[/\\]ScriptHeuristic.java" checks="LineLength" />
@@ -716,39 +628,23 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsParametersParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]TermsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]UnmappedTerms.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]terms[/\\]support[/\\]IncludeExclude.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]ValuesSourceMetricsAggregationBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]CardinalityAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]CardinalityAggregatorFactory.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]cardinality[/\\]HyperLogLogPlusPlus.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]geobounds[/\\]GeoBoundsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]geobounds[/\\]InternalGeoBounds.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]AbstractPercentilesParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]AbstractTDigestPercentilesAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]TDigestPercentileRanksAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]percentiles[/\\]tdigest[/\\]TDigestPercentilesAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]InternalScriptedMetric.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]ScriptedMetricAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]scripted[/\\]ScriptedMetricParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]StatsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]ExtendedStatsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]ExtendedStatsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]stats[/\\]extended[/\\]InternalExtendedStats.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]metrics[/\\]tophits[/\\]TopHitsAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]BucketHelpers.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketmetrics[/\\]BucketMetricsParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketmetrics[/\\]avg[/\\]AvgBucketPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]bucketscript[/\\]BucketScriptPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]cumulativesum[/\\]CumulativeSumPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]derivative[/\\]DerivativePipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]derivative[/\\]InternalDerivative.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]pipeline[/\\]having[/\\]BucketSelectorPipelineAggregator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]AggregationContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]AggregationPath.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]GeoPointParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]ValueType.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]ValuesSourceParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]format[/\\]ValueFormat.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]support[/\\]format[/\\]ValueParser.java" checks="LineLength" />
@@ -763,10 +659,7 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]FetchSubPhaseParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]explain[/\\]ExplainFetchSubPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]fielddata[/\\]FieldDataFieldsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsFetchSubPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]innerhits[/\\]InnerHitsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]script[/\\]ScriptFieldsParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]source[/\\]FetchSourceContext.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]highlight[/\\]FastVectorHighlighter.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]highlight[/\\]HighlightPhase.java" checks="LineLength" />
@@ -785,15 +678,12 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]lookup[/\\]FieldLookup.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]lookup[/\\]LeafDocLookup.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]lookup[/\\]LeafFieldsLookup.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]profile[/\\]ProfileResult.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]query[/\\]QueryPhase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]rescore[/\\]QueryRescorer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]rescore[/\\]RescoreParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]searchafter[/\\]SearchAfterBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]GeoDistanceSortParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]ScriptSortParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]SortParseElement.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestContextParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]SuggestUtils.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]completion[/\\]CompletionSuggestParser.java" checks="LineLength" />
@@ -807,28 +697,18 @@
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]phrase[/\\]WordScorer.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]suggest[/\\]term[/\\]TermSuggestParser.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]RestoreService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotShardFailure.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotShardsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]snapshots[/\\]SnapshotsService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]threadpool[/\\]ThreadPool.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]PlainTransportFuture.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]RequestHandlerRegistry.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]Transport.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]TransportChannelResponseHandler.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]TransportService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]netty[/\\]NettyTransport.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]queries[/\\]BlendedTermQueryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]apache[/\\]lucene[/\\]search[/\\]postingshighlight[/\\]CustomPostingsHighlighterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ESExceptionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]NamingConventionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]VersionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ListenerActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]RejectionActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]HotThreadsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]health[/\\]ClusterHealthResponsesTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]tasks[/\\]TasksIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]tasks[/\\]TransportTasksActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]repositories[/\\]RepositoryBlocksIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]SettingsUpdaterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]snapshots[/\\]SnapshotBlocksIT.java" checks="LineLength" />
@@ -842,20 +722,16 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreRequestIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]MetaDataIndexTemplateServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]upgrade[/\\]UpgradeIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkProcessorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]RetryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]fieldstats[/\\]FieldStatsRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]MultiGetShardRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]BulkRequestModifierTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]IngestProxyActionFilterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateDocumentSimpleResultTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulateExecutionServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineRequestParsingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]SimulatePipelineResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]ingest[/\\]WriteableIngestDocumentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]percolate[/\\]MultiPercolatorRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]MultiSearchRequestTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]search[/\\]SearchRequestBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]AutoCreateIndexTests.java" checks="LineLength" />
@@ -864,7 +740,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]broadcast[/\\]node[/\\]TransportBroadcastByNodeActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]master[/\\]TransportMasterNodeActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]BroadcastReplicationTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]replication[/\\]TransportReplicationActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]support[/\\]single[/\\]instance[/\\]TransportInstanceSingleOperationActionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]termvectors[/\\]AbstractTermVectorsTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]termvectors[/\\]GetTermVectorsCheckDocFreqIT.java" checks="LineLength" />
@@ -880,13 +755,9 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]bwcompat[/\\]RecoveryWithUnsupportedIndicesIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]bwcompat[/\\]RestoreBackwardsCompatIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]AbstractClientHeadersTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]FailAndRetryMockTransport.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]TransportClientIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]TransportClientRetryIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterHealthIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterInfoServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateDiffIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]DiskUsageTests.java" checks="LineLength" />
@@ -907,7 +778,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]allocation[/\\]SimpleAllocationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]ClusterIndexHealthTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]ClusterStateHealthTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]health[/\\]RoutingTableGenerator.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]AutoExpandReplicasTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]DateMathExpressionResolverTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]HumanReadableIndexSettingsTests.java" checks="LineLength" />
@@ -974,14 +844,10 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]geo[/\\]ShapeBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]hash[/\\]MessageDigestsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]inject[/\\]ModuleTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]io[/\\]stream[/\\]BytesStreamsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]index[/\\]FreqTermsEnumTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]uid[/\\]VersionsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]CidrsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]rounding[/\\]TimeZoneRoundingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]ScopedSettingsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]SettingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]transport[/\\]BoundTransportAddressTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]unit[/\\]DistanceUnitTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]unit[/\\]FuzzinessTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]BigArraysTests.java" checks="LineLength" />
@@ -993,17 +859,13 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]builder[/\\]XContentBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]cbor[/\\]JsonVsCborTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]smile[/\\]JsonVsSmileTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]filtering[/\\]AbstractFilteringJsonGeneratorTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]filtering[/\\]FilterPathGeneratorFilteringTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]consistencylevel[/\\]WriteConsistencyLevelIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]deps[/\\]joda[/\\]SimpleJodaTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]deps[/\\]lucene[/\\]VectorHighlighterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]BlockingClusterStatePublishResponseHandlerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryWithServiceDisruptionsIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenFaultDetectionTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenUnicastDiscoveryIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]NodeJoinControllerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscoveryIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscoveryUnitTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ping[/\\]unicast[/\\]UnicastZenPingIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateActionTests.java" checks="LineLength" />
@@ -1011,8 +873,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]EnvironmentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]NodeEnvironmentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]explain[/\\]ExplainActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]fieldstats[/\\]FieldStatsIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]fieldstats[/\\]FieldStatsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayTests.java" checks="LineLength" />
@@ -1028,8 +888,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReusePeerRecoverySharedTest.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]get[/\\]GetActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]netty[/\\]NettyHttpServerPipeliningTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]netty[/\\]NettyPipeliningDisabledIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]http[/\\]netty[/\\]NettyPipeliningEnabledIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexWithShadowReplicasIT.java" checks="LineLength" />
@@ -1057,11 +915,8 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]BinaryDVFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]DuelFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]FieldDataCacheTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]FilterFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]IndexFieldDataServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]PagedBytesStringFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]ParentChildFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]fielddata[/\\]SortedSetDVStringFieldDataTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]DocumentFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]DynamicMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]FieldTypeTestCase.java" checks="LineLength" />
@@ -1075,8 +930,8 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]BooleanFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]CompletionFieldTypeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]MultiFieldCopyToMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]TokenCountFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]date[/\\]SimpleDateMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]core[/\\]LegacyTokenCountFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]date[/\\]LegacyDateMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]dynamictemplate[/\\]genericstore[/\\]GenericStoreDynamicTemplateTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]dynamictemplate[/\\]pathmatch[/\\]PathMatchDynamicTemplateTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]dynamictemplate[/\\]simple[/\\]SimpleDynamicTemplatesTests.java" checks="LineLength" />
@@ -1092,12 +947,12 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]index[/\\]IndexTypeMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]internal[/\\]FieldNamesFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]internal[/\\]TypeFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]ip[/\\]SimpleIpMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]ip[/\\]LegacyIpMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]merge[/\\]TestMergeMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]multifield[/\\]MultiFieldTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]multifield[/\\]merge[/\\]JavaMultiFieldMergeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]nested[/\\]NestedMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]numeric[/\\]SimpleNumericTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]numeric[/\\]LegacyNumericTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]NullValueObjectMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]object[/\\]SimpleObjectMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]parent[/\\]ParentMappingTests.java" checks="LineLength" />
@@ -1112,8 +967,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]typelevels[/\\]ParseDocumentTypeLevelsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]update[/\\]UpdateMappingOnClusterIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]update[/\\]UpdateMappingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]percolator[/\\]PercolatorFieldMapperTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]AbstractQueryTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoolQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoostingQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]CommonTermsQueryBuilderTests.java" checks="LineLength" />
@@ -1123,10 +976,8 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]HasParentQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MoreLikeThisQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MultiMatchQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]QueryStringQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RandomQueryBuilder.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RangeQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]ScoreModeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanMultiTermQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanNotQueryBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]plugin[/\\]CustomQueryParserIT.java" checks="LineLength" />
@ -1157,8 +1008,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerSingleNodeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesOptionsIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analysis[/\\]PreBuiltAnalyzerIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]AnalyzeActionIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]HunspellServiceIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]exists[/\\]indices[/\\]IndicesExistsIT.java" checks="LineLength" />
@ -1188,12 +1037,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]template[/\\]SimpleIndexTemplateIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineExecutionServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]PipelineStoreTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]CompoundProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]IngestDocumentTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]PipelineFactoryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]core[/\\]ValueSourceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]AbstractStringProcessorTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]AppendProcessorTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]processor[/\\]DateFormatTests.java" checks="LineLength" />
@ -1207,9 +1050,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]os[/\\]OsProbeTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]nodesinfo[/\\]NodeInfoStreamingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]options[/\\]detailederrors[/\\]DetailedErrorsEnabledIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorQueryTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginInfoTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsServiceTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]recovery[/\\]FullRollingRestartIT.java" checks="LineLength" />
@ -1220,7 +1060,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]BytesRestResponseTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsRegexDefaultIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsRegexIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]NoOpClient.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestControllerTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]util[/\\]RestUtilsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]routing[/\\]AliasResolveRoutingIT.java" checks="LineLength" />
@ -1287,7 +1126,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]scroll[/\\]DuelScrollIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]scroll[/\\]SearchScrollIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]scroll[/\\]SearchScrollWithFailingNodesIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]searchafter[/\\]SearchAfterBuilderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]searchafter[/\\]SearchAfterIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]simple[/\\]SimpleSearchIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]SortParserTests.java" checks="LineLength" />
@ -1313,15 +1151,6 @@
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]threadpool[/\\]ThreadPoolSerializationTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]threadpool[/\\]UpdateThreadPoolSettingsTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]timestamp[/\\]SimpleTimestampIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]AbstractSimpleTransportTestCase.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]ActionNamesIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]ContextAndHeaderTransportIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]NettySizeHeaderFrameDecoderTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]local[/\\]SimpleLocalTransportTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]netty[/\\]NettyScheduledPingTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]netty[/\\]NettyTransportIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]netty[/\\]NettyTransportMultiPortIntegrationIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]netty[/\\]NettyTransportMultiPortTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]tribe[/\\]TribeIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ttl[/\\]SimpleTTLIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]update[/\\]UpdateIT.java" checks="LineLength" />
@ -1342,13 +1171,11 @@
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]BulkTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]DoubleTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]EquivalenceTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]GeoDistanceTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HDRPercentileRanksTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HDRPercentilesTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]HistogramTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IPv4RangeTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndexLookupTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndexedScriptTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]IndicesRequestTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]LongTermsTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]MinDocCountTests.java" checks="LineLength" />
@ -1365,12 +1192,21 @@
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovySecurityTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustachePlugin.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]RenderSearchTemplateTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]SuggestSearchTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TemplateQueryParserTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]TemplateQueryTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]messy[/\\]tests[/\\]package-info.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustacheScriptEngineTests.java" checks="LineLength" />
<suppress files="modules[/\\]lang-mustache[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]mustache[/\\]MustacheTests.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequest.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolateShardResponse.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportMultiPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]TransportShardMultiPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]RestPercolateAction.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorIT.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]PercolatorIT.java" checks="LineLength" />
<suppress files="modules[/\\]percolator[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolatorRequestTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuCollationTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuFoldingTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-icu[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]IcuNormalizerTokenFilterFactory.java" checks="LineLength" />
@ -1381,22 +1217,12 @@
<suppress files="plugins[/\\]analysis-phonetic[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PhoneticTokenFilterFactory.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-smartcn[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]SimpleSmartChineseAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]analysis-stempel[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]analysis[/\\]PolishAnalysisTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryRequest.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryRequestBuilder.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]DeleteByQueryResponse.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]TransportDeleteByQueryAction.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]IndexDeleteByQueryResponseTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]deletebyquery[/\\]TransportDeleteByQueryActionTests.java" checks="LineLength" />
<suppress files="plugins[/\\]delete-by-query[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugin[/\\]deletebyquery[/\\]DeleteByQueryTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]management[/\\]AzureComputeService.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]AbstractAzureTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureMinimumMasterNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureSimpleTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-azure[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]azure[/\\]AzureTwoStartedNodesTests.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]AwsEc2ServiceImpl.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]aws[/\\]AbstractAwsTestCase.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-ec2[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ec2[/\\]AmazonEC2Mock.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-gce[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]gce[/\\]GceUnicastHostsProvider.java" checks="LineLength" />
<suppress files="plugins[/\\]discovery-gce[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]gce[/\\]GceNetworkTests.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessor.java" checks="LineLength" />
<suppress files="plugins[/\\]ingest-geoip[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]ingest[/\\]geoip[/\\]GeoIpProcessorFactoryTests.java" checks="LineLength" />
@ -1428,7 +1254,6 @@
<suppress files="plugins[/\\]mapper-size[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]size[/\\]SizeMappingTests.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]blobstore[/\\]AzureBlobContainer.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]blobstore[/\\]AzureBlobStore.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageService.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageServiceImpl.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cloud[/\\]azure[/\\]storage[/\\]AzureStorageSettings.java" checks="LineLength" />
<suppress files="plugins[/\\]repository-azure[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]repositories[/\\]azure[/\\]AzureRepository.java" checks="LineLength" />
@ -1461,8 +1286,8 @@
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]TestShardRouting.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CliToolTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]MockBigArrays.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]MockSearchService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]aggregations[/\\]bucket[/\\]script[/\\]NativeSignificanceScoreScriptWithParams.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]AbstractQueryTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]BackgroundIndexer.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]CompositeTestCluster.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]CorruptionUtils.java" checks="LineLength" />
@ -1485,13 +1310,10 @@
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]disruption[/\\]SlowClusterStateProcessing.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]AssertingSearcher.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]MockEngineSupport.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]engine[/\\]MockInternalEngine.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]hamcrest[/\\]ElasticsearchAssertions.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]junit[/\\]listeners[/\\]LoggingListener.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]junit[/\\]rule[/\\]RepeatOnExceptionRule.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]ESRestTestCase.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]RestTestExecutionContext.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]client[/\\]RestClient.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]client[/\\]http[/\\]HttpRequestBuilder.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]json[/\\]JsonPath.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]parser[/\\]GreaterThanEqualToParser.java" checks="LineLength" />
@ -1511,24 +1333,15 @@
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]support[/\\]FileUtils.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]store[/\\]MockFSDirectoryService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]store[/\\]MockFSIndexStore.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]transport[/\\]AssertingLocalTransport.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]transport[/\\]CapturingTransport.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]transport[/\\]MockTransportService.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]FileUtilsTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]JsonPathTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]rest[/\\]test[/\\]RestTestParserTests.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]test[/\\]InternalTestClusterTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]node[/\\]tasks[/\\]list[/\\]TaskInfo.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]cli[/\\]CliTool.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]SettingsModule.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]admin[/\\]indices[/\\]settings[/\\]RestGetSettingsAction.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]tribe[/\\]TribeService.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]transport[/\\]TransportModuleTests.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]sort[/\\]GeoDistanceSortBuilderIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]CorsNotSetIT.java" checks="LineLength" />
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]settings[/\\]SettingsModuleTests.java" checks="LineLength" />
<suppress files="plugins[/\\]store-smb[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]store[/\\]SmbDirectoryWrapper.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]test[/\\]tasks[/\\]MockTaskManager.java" checks="LineLength" />
<suppress files="test[/\\]framework[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]inject[/\\]ModuleTestCase.java" checks="LineLength" />
</suppressions>

View File

@ -1,2 +1,2 @@
#!/bin/sh -e
#!/bin/bash -e
<% commands.each {command -> %><%= command %><% } %>

View File

@ -1,2 +1,2 @@
#!/bin/sh -e
#!/bin/bash -e
<% commands.each {command -> %><%= command %><% } %>

View File

@ -28,3 +28,8 @@ java.security.MessageDigest#clone() @ use org.elasticsearch.common.hash.MessageD
@defaultMessage this should not have been added to lucene in the first place
org.apache.lucene.index.IndexReader#getCombinedCoreAndDeletesKey()
@defaultMessage Soon to be removed
org.apache.lucene.document.FieldType#numericType()
org.apache.lucene.document.InetAddressPoint#newPrefixQuery(java.lang.String, java.net.InetAddress, int) @LUCENE-7232

View File

@ -21,5 +21,7 @@ com.carrotsearch.randomizedtesting.annotations.Repeat @ Don't commit hardcoded r
org.apache.lucene.codecs.Codec#setDefault(org.apache.lucene.codecs.Codec) @ Use the SuppressCodecs("*") annotation instead
org.apache.lucene.util.LuceneTestCase$Slow @ Don't write slow tests
org.junit.Ignore @ Use AwaitsFix instead
org.apache.lucene.util.LuceneTestCase$Nightly @ We don't run nightly tests at this point!
com.carrotsearch.randomizedtesting.annotations.Nightly @ We don't run nightly tests at this point!
org.junit.Test @defaultMessage Just name your test method testFooBar
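For illustration, a hedged sketch (the class below is hypothetical, not part of this change) of the kind of test that these signatures now reject at build time:

import com.carrotsearch.randomizedtesting.annotations.Nightly;
import org.apache.lucene.util.LuceneTestCase;

@Nightly // flagged by forbidden-apis: "We don't run nightly tests at this point!"
public class ExampleNightlyTests extends LuceneTestCase {
    public void testSomethingSlow() {
        // never runs in CI; the build fails during the forbidden-apis check
    }
}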

View File

@ -1,5 +1,5 @@
elasticsearch = 5.0.0-alpha1
lucene = 6.0.0-snapshot-f0aa4fc
elasticsearch = 5.0.0
lucene = 6.0.1
# optional dependencies
spatial4j = 0.6
@ -13,9 +13,7 @@ jna = 4.1.0
# test dependencies
randomizedrunner = 2.3.2
junit = 4.11
# TODO: Upgrade httpclient to a version > 4.5.1 once released. Then remove o.e.test.rest.client.StrictHostnameVerifier* and use
# DefaultHostnameVerifier instead since we no longer need to workaround https://issues.apache.org/jira/browse/HTTPCLIENT-1698
httpclient = 4.3.6
httpcore = 4.3.3
httpclient = 4.5.2
httpcore = 4.4.4
commonslogging = 1.1.3
commonscodec = 1.10

View File

@ -1,235 +0,0 @@
h1. Elasticsearch
h2. A Distributed RESTful Search Engine
h3. "https://www.elastic.co/products/elasticsearch":https://www.elastic.co/products/elasticsearch
Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:
* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations are performed on any of the replica shards.
* Multi Tenant with Multi Types.
** Support for more than one index.
** Support for more than one type per index.
** Index level configuration (number of shards, index storage, ...).
* A varied set of APIs
** HTTP RESTful API
** Native Java API.
** All APIs perform automatic node operation rerouting.
* Document oriented
** No need for upfront schema definition.
** Schema can be defined per type for customization of the indexing process.
* Reliable, Asynchronous Write Behind for long-term persistence.
* (Near) Real Time Search.
* Built on top of Lucene
** Each shard is a fully functional Lucene index
** All the power of Lucene easily exposed through simple configuration / plugins.
* Per operation consistency
** Single document level operations are atomic, consistent, isolated and durable.
* Open Source under the Apache License, version 2 ("ALv2")
h2. Getting Started
First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.
h3. Requirements
You need to have a recent version of Java installed. See the "Setup":http://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version page for more information.
h3. Installation
* "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
* Run @bin/elasticsearch@ on Unix, or @bin\elasticsearch.bat@ on Windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ...
h3. Indexing
Let's try and index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
Now, let's see if the information was added by GETting it:
<pre>
curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
</pre>
h3. Searching
Mmm search..., shouldn't it be elastic?
Let's find all the tweets that @kimchy@ posted:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
</pre>
We can also use the JSON query language Elasticsearch provides instead of a query string:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
{
"query" : {
"match" : { "user": "kimchy" }
}
}'
</pre>
Just for kicks, let's get all the documents stored (we should see the user as well):
<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
We can also do a range search (the @postDate@ field was automatically identified as a date):
<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
"query" : {
"range" : {
"postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
}
}
}'
</pre>
There are many more options to perform search; after all, it's a search product, no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.
h3. Multi Tenant - Indices and Types
Maan, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.
Elasticsearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:
<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T14:12:12",
"message": "Another tweet, will it be indexed?"
}'
</pre>
The above will index information into the @kimchy@ index, with two types, @info@ and @tweet@. Each user gets their own index.
Complete control is possible at the index level. As an example, in the above case we would want to change from the default 5 shards with 1 replica per index to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
{
"index" : {
"numberOfShards" : 1,
"numberOfReplicas" : 1
}
}'
</pre>
Search (and similar operations) is multi-index aware. This means that we can easily search on more than one index (twitter user), for example:
<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
Or on all the indices:
<pre>
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
"query" : {
"matchAll" : {}
}
}'
</pre>
{One liner teaser}: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from friends of my friends).
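As a sketch of that teaser (the @indices_boost@ request body parameter is assumed here and not shown elsewhere in this document), boosting hits from the @kimchy@ index over @another_user@ could look like:
<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
"indices_boost" : { "kimchy" : 1.4, "another_user" : 1.0 },
"query" : {
"match" : { "message" : "elasticsearch" }
}
}'
</pre>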
h3. Distributed, Highly Available
Let's face it, things will fail....
Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (to improve search performance) or 20/1 (to improve indexing performance, with search executed in a map-reduce fashion across shards).
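For example, here is a sketch (assumed, following the same camelCase settings convention as the index-creation example above) of raising the replica count of an existing index; the shard count is fixed at creation time:
<pre>
curl -XPUT 'http://localhost:9200/twitter/_settings' -d '
{
"index" : { "numberOfReplicas" : 10 }
}'
</pre>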
In order to play with the distributed nature of Elasticsearch, simply bring more nodes up and shut down nodes. The system will continue to serve requests (make sure you use the correct http port) with the latest data indexed.
h3. Where to go from here?
We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website.
h3. Building from Source
Elasticsearch uses "Maven":http://maven.apache.org for its build system.
In order to create a distribution, simply run the @mvn clean package
-DskipTests@ command in the cloned directory.
The distribution will be created under @target/releases@.
See the "TESTING":TESTING.asciidoc file for more information about
running the Elasticsearch test suite.
h3. Upgrading to Elasticsearch 1.x?
In order to ensure a smooth upgrade process from earlier versions of Elasticsearch (< 1.0.0), it is recommended to perform a full cluster restart. Please see the "setup reference":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.
h1. License
<pre>
This software is licensed under the Apache License, version 2 ("ALv2"), quoted below.
Copyright 2009-2015 Elasticsearch <https://www.elastic.co>
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
</pre>

View File

@ -24,6 +24,16 @@ import org.elasticsearch.gradle.BuildPlugin
apply plugin: 'elasticsearch.build'
apply plugin: 'com.bmuschko.nexus'
apply plugin: 'nebula.optional-base'
apply plugin: 'nebula.maven-base-publish'
apply plugin: 'nebula.maven-scm'
publishing {
publications {
nebula {
artifactId 'elasticsearch'
}
}
}
archivesBaseName = 'elasticsearch'
@ -53,7 +63,7 @@ dependencies {
compile 'com.carrotsearch:hppc:0.7.1'
// time handling, remove with java 8 time
compile 'joda-time:joda-time:2.8.2'
compile 'joda-time:joda-time:2.9.4'
// joda 2.0 moved to using volatile fields for datetime
// When updating to a new version, make sure to update our copy of BaseDateTime
compile 'org.joda:joda-convert:1.2'
@ -111,6 +121,36 @@ forbiddenPatterns {
exclude '**/org/elasticsearch/cluster/routing/shard_routes.txt'
}
task generateModulesList {
List<String> modules = project(':modules').subprojects.collect { it.name }
File modulesFile = new File(buildDir, 'generated-resources/modules.txt')
processResources.from(modulesFile)
inputs.property('modules', modules)
outputs.file(modulesFile)
doLast {
modulesFile.parentFile.mkdirs()
modulesFile.setText(modules.join('\n'), 'UTF-8')
}
}
task generatePluginsList {
List<String> plugins = project(':plugins').subprojects
.findAll { it.name.contains('example') == false }
.collect { it.name }
File pluginsFile = new File(buildDir, 'generated-resources/plugins.txt')
processResources.from(pluginsFile)
inputs.property('plugins', plugins)
outputs.file(pluginsFile)
doLast {
pluginsFile.parentFile.mkdirs()
pluginsFile.setText(plugins.join('\n'), 'UTF-8')
}
}
processResources {
dependsOn generateModulesList, generatePluginsList
}
thirdPartyAudit.excludes = [
// uses internal java api: sun.security.x509 (X509CertInfo, X509CertImpl, X500Name)
'org.jboss.netty.handler.ssl.util.OpenJdkSelfSignedCertGenerator',

View File

@ -0,0 +1,37 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.log4j;
import org.apache.log4j.helpers.ThreadLocalMap;
/**
* Log4j 1.2 MDC breaks because it parses java.version incorrectly (does not handle new java9 versioning).
*
* This hack fixes up the pkg private members as if it had detected the java version correctly.
*/
public class Java9Hack {
public static void fixLog4j() {
if (MDC.mdc.tlm == null) {
MDC.mdc.java1 = false;
MDC.mdc.tlm = new ThreadLocalMap();
}
}
}
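A minimal usage sketch (the wrapper class below is hypothetical): call fixLog4j() once, early during startup, before anything touches log4j's MDC on Java 9. The call is idempotent because the hack only fills in the fields while MDC.mdc.tlm is still null.

import org.apache.log4j.Java9Hack;

public class LoggingBootstrap {
    static {
        // run once before the first MDC access; safe to call repeatedly
        Java9Hack.fixLog4j();
    }
}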

View File

@ -0,0 +1,117 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.lucene.document;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.NumericUtils;
import org.elasticsearch.common.SuppressForbidden;
/**
* Forked utility methods from Lucene's InetAddressPoint until LUCENE-7232 and
* LUCENE-7234 are released.
*/
// TODO: remove me when we upgrade to Lucene 6.1
@SuppressForbidden(reason="uses InetAddress.getHostAddress")
public final class XInetAddressPoint {
private XInetAddressPoint() {}
/** The minimum value that an ip address can hold. */
public static final InetAddress MIN_VALUE;
/** The maximum value that an ip address can hold. */
public static final InetAddress MAX_VALUE;
static {
MIN_VALUE = InetAddressPoint.decode(new byte[InetAddressPoint.BYTES]);
byte[] maxValueBytes = new byte[InetAddressPoint.BYTES];
Arrays.fill(maxValueBytes, (byte) 0xFF);
MAX_VALUE = InetAddressPoint.decode(maxValueBytes);
}
/**
* Return the {@link InetAddress} that compares immediately greater than
* {@code address}.
* @throws ArithmeticException if the provided address is the
* {@link #MAX_VALUE maximum ip address}
*/
public static InetAddress nextUp(InetAddress address) {
if (address.equals(MAX_VALUE)) {
throw new ArithmeticException("Overflow: there is no greater InetAddress than "
+ address.getHostAddress());
}
byte[] delta = new byte[InetAddressPoint.BYTES];
delta[InetAddressPoint.BYTES-1] = 1;
byte[] nextUpBytes = new byte[InetAddressPoint.BYTES];
NumericUtils.add(InetAddressPoint.BYTES, 0, InetAddressPoint.encode(address), delta, nextUpBytes);
return InetAddressPoint.decode(nextUpBytes);
}
/**
* Return the {@link InetAddress} that compares immediately less than
* {@code address}.
* @throws ArithmeticException if the provided address is the
* {@link #MIN_VALUE minimum ip address}
*/
public static InetAddress nextDown(InetAddress address) {
if (address.equals(MIN_VALUE)) {
throw new ArithmeticException("Underflow: there is no smaller InetAddress than "
+ address.getHostAddress());
}
byte[] delta = new byte[InetAddressPoint.BYTES];
delta[InetAddressPoint.BYTES-1] = 1;
byte[] nextDownBytes = new byte[InetAddressPoint.BYTES];
NumericUtils.subtract(InetAddressPoint.BYTES, 0, InetAddressPoint.encode(address), delta, nextDownBytes);
return InetAddressPoint.decode(nextDownBytes);
}
/**
* Create a prefix query for matching a CIDR network range.
*
* @param field field name. must not be {@code null}.
* @param value any host address
* @param prefixLength the network prefix length for this address. This is also known as the subnet mask in the context of IPv4
* addresses.
* @throws IllegalArgumentException if {@code field} is null, or prefixLength is invalid.
* @return a query matching documents with addresses contained within this network
*/
// TODO: remove me when we upgrade to Lucene 6.0.1
public static Query newPrefixQuery(String field, InetAddress value, int prefixLength) {
if (value == null) {
throw new IllegalArgumentException("InetAddress must not be null");
}
if (prefixLength < 0 || prefixLength > 8 * value.getAddress().length) {
throw new IllegalArgumentException("illegal prefixLength '" + prefixLength
+ "'. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges");
}
// create the lower value by zeroing out the host portion, upper value by filling it with all ones.
byte lower[] = value.getAddress();
byte upper[] = value.getAddress();
for (int i = prefixLength; i < 8 * lower.length; i++) {
int m = 1 << (7 - (i & 7));
lower[i >> 3] &= ~m;
upper[i >> 3] |= m;
}
try {
return InetAddressPoint.newRangeQuery(field, InetAddress.getByAddress(lower), InetAddress.getByAddress(upper));
} catch (UnknownHostException e) {
throw new AssertionError(e); // values are coming from InetAddress
}
}
}
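For illustration, a short usage sketch of the forked prefix query (the field name "ip" and the network are assumptions for the example):

import java.net.InetAddress;
import org.apache.lucene.document.XInetAddressPoint;
import org.apache.lucene.search.Query;

public class CidrQueryExample {
    // builds a query matching documents whose "ip" point field falls in
    // 192.168.0.0/24, i.e. 192.168.0.0 through 192.168.0.255
    public static Query sampleCidrQuery() throws Exception {
        return XInetAddressPoint.newPrefixQuery("ip", InetAddress.getByName("192.168.0.0"), 24);
    }
}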

View File

@ -0,0 +1,130 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.lucene.index;
import org.apache.lucene.util.StringHelper;
import java.io.IOException;
/**
* Forked utility methods from Lucene's PointValues until LUCENE-7257 is released.
*/
public class XPointValues {
/** Return the cumulated number of points across all leaves of the given
* {@link IndexReader}. Leaves that do not have points for the given field
* are ignored.
* @see PointValues#size(String) */
public static long size(IndexReader reader, String field) throws IOException {
long size = 0;
for (LeafReaderContext ctx : reader.leaves()) {
FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);
if (info == null || info.getPointDimensionCount() == 0) {
continue;
}
PointValues values = ctx.reader().getPointValues();
size += values.size(field);
}
return size;
}
/** Return the cumulated number of docs that have points across all leaves
* of the given {@link IndexReader}. Leaves that do not have points for the
* given field are ignored.
* @see PointValues#getDocCount(String) */
public static int getDocCount(IndexReader reader, String field) throws IOException {
int count = 0;
for (LeafReaderContext ctx : reader.leaves()) {
FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);
if (info == null || info.getPointDimensionCount() == 0) {
continue;
}
PointValues values = ctx.reader().getPointValues();
count += values.getDocCount(field);
}
return count;
}
/** Return the minimum packed values across all leaves of the given
* {@link IndexReader}. Leaves that do not have points for the given field
* are ignored.
* @see PointValues#getMinPackedValue(String) */
public static byte[] getMinPackedValue(IndexReader reader, String field) throws IOException {
byte[] minValue = null;
for (LeafReaderContext ctx : reader.leaves()) {
FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);
if (info == null || info.getPointDimensionCount() == 0) {
continue;
}
PointValues values = ctx.reader().getPointValues();
byte[] leafMinValue = values.getMinPackedValue(field);
if (leafMinValue == null) {
continue;
}
if (minValue == null) {
minValue = leafMinValue.clone();
} else {
final int numDimensions = values.getNumDimensions(field);
final int numBytesPerDimension = values.getBytesPerDimension(field);
for (int i = 0; i < numDimensions; ++i) {
int offset = i * numBytesPerDimension;
if (StringHelper.compare(numBytesPerDimension, leafMinValue, offset, minValue, offset) < 0) {
System.arraycopy(leafMinValue, offset, minValue, offset, numBytesPerDimension);
}
}
}
}
return minValue;
}
/** Return the maximum packed values across all leaves of the given
* {@link IndexReader}. Leaves that do not have points for the given field
* are ignored.
* @see PointValues#getMaxPackedValue(String) */
public static byte[] getMaxPackedValue(IndexReader reader, String field) throws IOException {
byte[] maxValue = null;
for (LeafReaderContext ctx : reader.leaves()) {
FieldInfo info = ctx.reader().getFieldInfos().fieldInfo(field);
if (info == null || info.getPointDimensionCount() == 0) {
continue;
}
PointValues values = ctx.reader().getPointValues();
byte[] leafMaxValue = values.getMaxPackedValue(field);
if (leafMaxValue == null) {
continue;
}
if (maxValue == null) {
maxValue = leafMaxValue.clone();
} else {
final int numDimensions = values.getNumDimensions(field);
final int numBytesPerDimension = values.getBytesPerDimension(field);
for (int i = 0; i < numDimensions; ++i) {
int offset = i * numBytesPerDimension;
if (StringHelper.compare(numBytesPerDimension, leafMaxValue, offset, maxValue, offset) > 0) {
System.arraycopy(leafMaxValue, offset, maxValue, offset, numBytesPerDimension);
}
}
}
}
return maxValue;
}
/** No instances allowed; utility methods only. */
private XPointValues() {
}
}
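A short usage sketch (the field name "timestamp" is illustrative): because each helper skips leaves without points for the field, the calls are safe on indices where only some segments contain the field.

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.XPointValues;

public class PointStatsExample {
    public static void printPointStats(IndexReader reader) throws IOException {
        // cumulative counts across all leaves of the reader
        System.out.println("points:    " + XPointValues.size(reader, "timestamp"));
        System.out.println("documents: " + XPointValues.getDocCount(reader, "timestamp"));
    }
}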

View File

@ -22,6 +22,7 @@ package org.apache.lucene.queryparser.classic;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
@ -32,6 +33,7 @@ import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.MultiPhraseQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SynonymQuery;
import org.apache.lucene.util.IOUtils;
import org.apache.lucene.util.automaton.RegExp;
import org.elasticsearch.common.lucene.search.Queries;
@ -39,6 +41,7 @@ import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.core.DateFieldMapper;
import org.elasticsearch.index.mapper.core.LegacyDateFieldMapper;
import org.elasticsearch.index.query.QueryShardContext;
import org.elasticsearch.index.query.support.QueryParsers;
@ -105,7 +108,8 @@ public class MapperQueryParser extends QueryParser {
}
/**
* We override this one so we can get the fuzzy part to be treated as string, so people can do: "age:10~5" or "timestamp:2012-10-10~5d"
* We override this one so we can get the fuzzy part to be treated as string,
* so people can do: "age:10~5" or "timestamp:2012-10-10~5d"
*/
@Override
Query handleBareFuzzy(String qfield, Token fuzzySlop, String termImage) throws ParseException {
@ -164,8 +168,7 @@ public class MapperQueryParser extends QueryParser {
clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD));
}
}
if (clauses.size() == 0) // happens for stopwords
return null;
if (clauses.isEmpty()) return null; // happens for stopwords
return getBooleanQueryCoordDisabled(clauses);
}
} else {
@ -215,7 +218,8 @@ public class MapperQueryParser extends QueryParser {
}
if (currentFieldType != null) {
Query query = null;
if (currentFieldType.useTermQueryWithQueryString()) {
if (currentFieldType.tokenized() == false) {
// this might be a structured field like a numeric
try {
query = currentFieldType.termQuery(queryText, context);
} catch (RuntimeException e) {
@ -266,8 +270,7 @@ public class MapperQueryParser extends QueryParser {
clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD));
}
}
if (clauses.size() == 0) // happens for stopwords
return null;
if (clauses.isEmpty()) return null; // happens for stopwords
return getBooleanQueryCoordDisabled(clauses);
}
} else {
@ -276,7 +279,8 @@ public class MapperQueryParser extends QueryParser {
}
@Override
protected Query getRangeQuery(String field, String part1, String part2, boolean startInclusive, boolean endInclusive) throws ParseException {
protected Query getRangeQuery(String field, String part1, String part2,
boolean startInclusive, boolean endInclusive) throws ParseException {
if ("*".equals(part1)) {
part1 = null;
}
@ -317,23 +321,26 @@ public class MapperQueryParser extends QueryParser {
clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD));
}
}
if (clauses.size() == 0) // happens for stopwords
return null;
if (clauses.isEmpty()) return null; // happens for stopwords
return getBooleanQueryCoordDisabled(clauses);
}
}
private Query getRangeQuerySingle(String field, String part1, String part2, boolean startInclusive, boolean endInclusive) {
private Query getRangeQuerySingle(String field, String part1, String part2,
boolean startInclusive, boolean endInclusive) {
currentFieldType = context.fieldMapper(field);
if (currentFieldType != null) {
if (lowercaseExpandedTerms && !currentFieldType.isNumeric()) {
if (lowercaseExpandedTerms && currentFieldType.tokenized()) {
part1 = part1 == null ? null : part1.toLowerCase(locale);
part2 = part2 == null ? null : part2.toLowerCase(locale);
}
try {
Query rangeQuery;
if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) {
if (currentFieldType instanceof LegacyDateFieldMapper.DateFieldType && settings.timeZone() != null) {
LegacyDateFieldMapper.DateFieldType dateFieldType = (LegacyDateFieldMapper.DateFieldType) this.currentFieldType;
rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null);
} else if (currentFieldType instanceof DateFieldMapper.DateFieldType && settings.timeZone() != null) {
DateFieldMapper.DateFieldType dateFieldType = (DateFieldMapper.DateFieldType) this.currentFieldType;
rangeQuery = dateFieldType.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null);
} else {
@ -392,7 +399,8 @@ public class MapperQueryParser extends QueryParser {
currentFieldType = context.fieldMapper(field);
if (currentFieldType != null) {
try {
return currentFieldType.fuzzyQuery(termStr, Fuzziness.build(minSimilarity), fuzzyPrefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);
return currentFieldType.fuzzyQuery(termStr, Fuzziness.build(minSimilarity),
fuzzyPrefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);
} catch (RuntimeException e) {
if (settings.lenient()) {
return null;
@ -407,7 +415,8 @@ public class MapperQueryParser extends QueryParser {
protected Query newFuzzyQuery(Term term, float minimumSimilarity, int prefixLength) {
String text = term.text();
int numEdits = FuzzyQuery.floatToEdits(minimumSimilarity, text.codePointCount(0, text.length()));
FuzzyQuery query = new FuzzyQuery(term, numEdits, prefixLength, settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);
FuzzyQuery query = new FuzzyQuery(term, numEdits, prefixLength,
settings.fuzzyMaxExpansions(), FuzzyQuery.defaultTranspositions);
QueryParsers.setRewriteMethod(query, settings.fuzzyRewriteMethod());
return query;
}
@ -444,8 +453,7 @@ public class MapperQueryParser extends QueryParser {
clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD));
}
}
if (clauses.size() == 0) // happens for stopwords
return null;
if (clauses.isEmpty()) return null; // happens for stopwords
return getBooleanQueryCoordDisabled(clauses);
}
} else {
@ -463,7 +471,7 @@ public class MapperQueryParser extends QueryParser {
setAnalyzer(context.getSearchAnalyzer(currentFieldType));
}
Query query = null;
if (currentFieldType.useTermQueryWithQueryString()) {
if (currentFieldType.tokenized() == false) {
query = currentFieldType.prefixQuery(termStr, multiTermRewriteMethod, context);
}
if (query == null) {
@ -486,7 +494,7 @@ public class MapperQueryParser extends QueryParser {
if (!settings.analyzeWildcard()) {
return super.getPrefixQuery(field, termStr);
}
List<String> tlist;
List<List<String>> tlist;
// get Analyzer from superclass and tokenize the term
TokenStream source = null;
try {
@ -497,7 +505,9 @@ public class MapperQueryParser extends QueryParser {
return super.getPrefixQuery(field, termStr);
}
tlist = new ArrayList<>();
List<String> currentPos = new ArrayList<>();
CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);
PositionIncrementAttribute posAtt = source.addAttribute(PositionIncrementAttribute.class);
while (true) {
try {
@ -505,7 +515,14 @@ public class MapperQueryParser extends QueryParser {
} catch (IOException e) {
break;
}
tlist.add(termAtt.toString());
if (currentPos.isEmpty() == false && posAtt.getPositionIncrement() > 0) {
tlist.add(currentPos);
currentPos = new ArrayList<>();
}
currentPos.add(termAtt.toString());
}
if (currentPos.isEmpty() == false) {
tlist.add(currentPos);
}
} finally {
if (source != null) {
@ -513,16 +530,45 @@ public class MapperQueryParser extends QueryParser {
}
}
if (tlist.size() == 1) {
return super.getPrefixQuery(field, tlist.get(0));
} else {
// build a boolean query with prefix on each one...
List<BooleanClause> clauses = new ArrayList<>();
for (String token : tlist) {
clauses.add(new BooleanClause(super.getPrefixQuery(field, token), BooleanClause.Occur.SHOULD));
}
return getBooleanQueryCoordDisabled(clauses);
if (tlist.size() == 0) {
return null;
}
if (tlist.size() == 1 && tlist.get(0).size() == 1) {
return super.getPrefixQuery(field, tlist.get(0).get(0));
}
// build a boolean query with prefix on the last position only.
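// Hypothetical illustration: with a synonym filter mapping "fast" to "quick", the input
// "fast wi" analyzes to the positions [["fast", "quick"], ["wi"]]; the first position becomes
// a SynonymQuery over fast/quick, and only the trailing "wi" is turned into a prefix query.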
List<BooleanClause> clauses = new ArrayList<>();
for (int pos = 0; pos < tlist.size(); pos++) {
List<String> plist = tlist.get(pos);
boolean isLastPos = (pos == tlist.size() - 1);
Query posQuery;
if (plist.size() == 1) {
if (isLastPos) {
posQuery = super.getPrefixQuery(field, plist.get(0));
} else {
posQuery = newTermQuery(new Term(field, plist.get(0)));
}
} else if (isLastPos == false) {
// build a synonym query for terms in the same position.
Term[] terms = new Term[plist.size()];
for (int i = 0; i < plist.size(); i++) {
terms[i] = new Term(field, plist.get(i));
}
posQuery = new SynonymQuery(terms);
} else {
List<BooleanClause> innerClauses = new ArrayList<>();
for (String token : plist) {
innerClauses.add(new BooleanClause(super.getPrefixQuery(field, token),
BooleanClause.Occur.SHOULD));
}
posQuery = getBooleanQueryCoordDisabled(innerClauses);
}
clauses.add(new BooleanClause(posQuery,
getDefaultOperator() == Operator.AND ? BooleanClause.Occur.MUST : BooleanClause.Occur.SHOULD));
}
return getBooleanQuery(clauses);
}
@Override
@ -574,8 +620,7 @@ public class MapperQueryParser extends QueryParser {
clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD));
}
}
if (clauses.size() == 0) // happens for stopwords
return null;
if (clauses.isEmpty()) return null; // happens for stopwords
return getBooleanQueryCoordDisabled(clauses);
}
} else {
@ -703,8 +748,7 @@ public class MapperQueryParser extends QueryParser {
clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD));
}
}
if (clauses.size() == 0) // happens for stopwords
return null;
if (clauses.isEmpty()) return null; // happens for stopwords
return getBooleanQueryCoordDisabled(clauses);
}
} else {
@ -722,8 +766,9 @@ public class MapperQueryParser extends QueryParser {
setAnalyzer(context.getSearchAnalyzer(currentFieldType));
}
Query query = null;
if (currentFieldType.useTermQueryWithQueryString()) {
query = currentFieldType.regexpQuery(termStr, RegExp.ALL, maxDeterminizedStates, multiTermRewriteMethod, context);
if (currentFieldType.tokenized() == false) {
query = currentFieldType.regexpQuery(termStr, RegExp.ALL,
maxDeterminizedStates, multiTermRewriteMethod, context);
}
if (query == null) {
query = super.getRegexpQuery(field, termStr);
@ -740,7 +785,7 @@ public class MapperQueryParser extends QueryParser {
setAnalyzer(oldAnalyzer);
}
}
/**
* @deprecated review all use of this, don't rely on coord
*/

View File

@ -0,0 +1,267 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.lucene.search.suggest.analyzing;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStreamToAutomaton;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.UnicodeUtil;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.FiniteStringsIterator;
import org.apache.lucene.util.automaton.LevenshteinAutomata;
import org.apache.lucene.util.automaton.Operations;
import org.apache.lucene.util.automaton.UTF32ToUTF8;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.PairOutputs;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import static org.apache.lucene.util.automaton.Operations.DEFAULT_MAX_DETERMINIZED_STATES;
/**
* Implements a fuzzy {@link AnalyzingSuggester}. The similarity measurement is
* based on the Damerau-Levenshtein (optimal string alignment) algorithm, though
* you can explicitly choose classic Levenshtein by passing <code>false</code>
* for the <code>transpositions</code> parameter.
* <p>
* At most, this query will match terms up to
* {@value org.apache.lucene.util.automaton.LevenshteinAutomata#MAXIMUM_SUPPORTED_DISTANCE}
* edits. Higher distances are not supported. Note that the
* fuzzy distance is measured in "byte space" on the bytes
* returned by the {@link org.apache.lucene.analysis.TokenStream}'s {@link
* org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute}, usually UTF8. By default
* the analyzed bytes must be at least 3 {@link
* #DEFAULT_MIN_FUZZY_LENGTH} bytes before any edits are
* considered. Furthermore, the first 1 {@link
* #DEFAULT_NON_FUZZY_PREFIX} byte is not allowed to be
* edited. We allow up to 1 {@link
* #DEFAULT_MAX_EDITS} edit.
* If the {@link #unicodeAware} parameter in the constructor is set to true, maxEdits,
* minFuzzyLength, transpositions and nonFuzzyPrefix are measured in Unicode code
* points (actual letters) instead of bytes.
*
* <p>
* NOTE: This suggester does not boost suggestions that
* required no edits over suggestions that did require
* edits. This is a known limitation.
*
* <p>
* Note: complex query analyzers can have a significant impact on lookup
* performance. To keep the complexity of the prefix intersection low, it is
* recommended not to use query-time analyzers that drop or inject terms, such as
* synonym filters. At index time, complex analyzers can safely be used.
* </p>
*/
public final class XFuzzySuggester extends XAnalyzingSuggester {
private final int maxEdits;
private final boolean transpositions;
private final int nonFuzzyPrefix;
private final int minFuzzyLength;
private final boolean unicodeAware;
/**
* Measure maxEdits, minFuzzyLength, transpositions and nonFuzzyPrefix
* parameters in Unicode code points (actual letters)
* instead of bytes.
*/
public static final boolean DEFAULT_UNICODE_AWARE = false;
/**
* The default minimum length of the key passed to {@link
* #lookup} before any edits are allowed.
*/
public static final int DEFAULT_MIN_FUZZY_LENGTH = 3;
/**
* The default prefix length where edits are not allowed.
*/
public static final int DEFAULT_NON_FUZZY_PREFIX = 1;
/**
* The default maximum number of edits for fuzzy
* suggestions.
*/
public static final int DEFAULT_MAX_EDITS = 1;
/**
* The default transposition value passed to {@link org.apache.lucene.util.automaton.LevenshteinAutomata}
*/
public static final boolean DEFAULT_TRANSPOSITIONS = true;
/**
* Creates a {@link FuzzySuggester} instance initialized with default values.
*
* @param analyzer the analyzer used for this suggester
*/
public XFuzzySuggester(Analyzer analyzer) {
this(analyzer, analyzer);
}
/**
* Creates a {@link FuzzySuggester} instance with an index &amp; a query analyzer initialized with default values.
*
* @param indexAnalyzer
* Analyzer that will be used for analyzing suggestions while building the index.
* @param queryAnalyzer
* Analyzer that will be used for analyzing query text during lookup
*/
public XFuzzySuggester(Analyzer indexAnalyzer, Analyzer queryAnalyzer) {
this(indexAnalyzer, null, queryAnalyzer, EXACT_FIRST | PRESERVE_SEP, 256, -1,
DEFAULT_MAX_EDITS, DEFAULT_TRANSPOSITIONS,
DEFAULT_NON_FUZZY_PREFIX, DEFAULT_MIN_FUZZY_LENGTH, DEFAULT_UNICODE_AWARE,
null, false, 0, SEP_LABEL, PAYLOAD_SEP, END_BYTE, HOLE_CHARACTER);
}
/**
* Creates a {@link FuzzySuggester} instance.
*
* @param indexAnalyzer Analyzer that will be used for
* analyzing suggestions while building the index.
* @param queryAnalyzer Analyzer that will be used for
* analyzing query text during lookup
* @param options see {@link #EXACT_FIRST}, {@link #PRESERVE_SEP}
* @param maxSurfaceFormsPerAnalyzedForm Maximum number of
* surface forms to keep for a single analyzed form.
* When there are too many surface forms we discard the
* lowest weighted ones.
* @param maxGraphExpansions Maximum number of graph paths
* to expand from the analyzed form. Set this to -1 for
* no limit.
* @param maxEdits must be &gt;= 0 and &lt;= {@link org.apache.lucene.util.automaton.LevenshteinAutomata#MAXIMUM_SUPPORTED_DISTANCE} .
* @param transpositions <code>true</code> if transpositions should be treated as a primitive
* edit operation. If this is false, comparisons will implement the classic
* Levenshtein algorithm.
* @param nonFuzzyPrefix length of common (non-fuzzy) prefix (see default {@link #DEFAULT_NON_FUZZY_PREFIX})
* @param minFuzzyLength minimum length of lookup key before any edits are allowed (see default {@link #DEFAULT_MIN_FUZZY_LENGTH})
* @param sepLabel separation label
* @param payloadSep payload separator byte
* @param endByte end byte marker byte
*/
public XFuzzySuggester(Analyzer indexAnalyzer, Automaton queryPrefix, Analyzer queryAnalyzer,
int options, int maxSurfaceFormsPerAnalyzedForm, int maxGraphExpansions,
int maxEdits, boolean transpositions, int nonFuzzyPrefix, int minFuzzyLength,
boolean unicodeAware, FST<PairOutputs.Pair<Long, BytesRef>> fst, boolean hasPayloads,
int maxAnalyzedPathsForOneInput, int sepLabel, int payloadSep, int endByte, int holeCharacter) {
super(indexAnalyzer, queryPrefix, queryAnalyzer, options, maxSurfaceFormsPerAnalyzedForm, maxGraphExpansions,
true, fst, hasPayloads, maxAnalyzedPathsForOneInput, sepLabel, payloadSep, endByte, holeCharacter);
if (maxEdits < 0 || maxEdits > LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE) {
throw new IllegalArgumentException(
"maxEdits must be between 0 and " + LevenshteinAutomata.MAXIMUM_SUPPORTED_DISTANCE);
}
if (nonFuzzyPrefix < 0) {
throw new IllegalArgumentException("nonFuzzyPrefix must not be >= 0 (got " + nonFuzzyPrefix + ")");
}
if (minFuzzyLength < 0) {
throw new IllegalArgumentException("minFuzzyLength must not be >= 0 (got " + minFuzzyLength + ")");
}
this.maxEdits = maxEdits;
this.transpositions = transpositions;
this.nonFuzzyPrefix = nonFuzzyPrefix;
this.minFuzzyLength = minFuzzyLength;
this.unicodeAware = unicodeAware;
}
@Override
protected List<FSTUtil.Path<PairOutputs.Pair<Long,BytesRef>>> getFullPrefixPaths(
List<FSTUtil.Path<PairOutputs.Pair<Long,BytesRef>>> prefixPaths, Automaton lookupAutomaton,
FST<PairOutputs.Pair<Long,BytesRef>> fst)
throws IOException {
// TODO: right now there's no penalty for fuzzy/edits,
// ie a completion whose prefix matched exactly what the
// user typed gets no boost over completions that
// required an edit, which get no boost over completions
// requiring two edits. I suspect a multiplicative
// factor is appropriate (eg, say a fuzzy match must be at
// least 2X better weight than the non-fuzzy match to
// "compete") ... in which case I think the wFST needs
// to be log weights or something ...
Automaton levA = convertAutomaton(toLevenshteinAutomata(lookupAutomaton));
/*
Writer w = new OutputStreamWriter(new FileOutputStream("out.dot"), "UTF-8");
w.write(levA.toDot());
w.close();
System.out.println("Wrote LevA to out.dot");
*/
return FSTUtil.intersectPrefixPaths(levA, fst);
}
@Override
protected Automaton convertAutomaton(Automaton a) {
if (unicodeAware) {
// FLORIAN EDIT: get converted Automaton from superclass
Automaton utf8automaton = new UTF32ToUTF8().convert(super.convertAutomaton(a));
// This automaton should not blow up during determinize:
utf8automaton = Operations.determinize(utf8automaton, Integer.MAX_VALUE);
return utf8automaton;
} else {
return super.convertAutomaton(a);
}
}
@Override
public TokenStreamToAutomaton getTokenStreamToAutomaton() {
final TokenStreamToAutomaton tsta = super.getTokenStreamToAutomaton();
tsta.setUnicodeArcs(unicodeAware);
return tsta;
}
Automaton toLevenshteinAutomata(Automaton automaton) {
List<Automaton> subs = new ArrayList<>();
FiniteStringsIterator finiteStrings = new FiniteStringsIterator(automaton);
for (IntsRef string; (string = finiteStrings.next()) != null;) {
if (string.length <= nonFuzzyPrefix || string.length < minFuzzyLength) {
subs.add(Automata.makeString(string.ints, string.offset, string.length));
} else {
int[] ints = new int[string.length - nonFuzzyPrefix];
System.arraycopy(string.ints, string.offset + nonFuzzyPrefix, ints, 0, ints.length);
// TODO: maybe add alphaMin to LevenshteinAutomata,
// and pass 1 instead of 0? We probably don't want
// to allow the trailing dedup bytes to be
// edited... but then 0 byte is "in general" allowed
// on input (but not in UTF8).
LevenshteinAutomata lev = new LevenshteinAutomata(
ints, unicodeAware ? Character.MAX_CODE_POINT : 255, transpositions);
subs.add(lev.toAutomaton(maxEdits, UnicodeUtil.newString(string.ints, string.offset, nonFuzzyPrefix)));
}
}
if (subs.isEmpty()) {
// automaton is empty, there are no accepted paths through it
return Automata.makeEmpty(); // matches nothing
} else if (subs.size() == 1) {
// no synonyms or anything: just a single path through the tokenstream
return subs.get(0);
} else {
// multiple paths: this is really scary! is it slow?
// maybe we should not do this and throw UOE?
Automaton a = Operations.union(subs);
// TODO: we could call toLevenshteinAutomata() before det?
// this only happens if you have multiple paths anyway (e.g. synonyms)
return Operations.determinize(a, DEFAULT_MAX_DETERMINIZED_STATES);
}
}
}
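// A minimal usage sketch (not part of the original file; the analyzer choice and lookup key
// are hypothetical). With the defaults above, a key must be at least 3 bytes long before any
// edits are considered, and the first byte is never edited:
//
//   XFuzzySuggester suggester = new XFuzzySuggester(new StandardAnalyzer());
//   // after building the suggester from an InputIterator of weighted terms, a lookup for
//   // "elstic" can still surface "elastic": the key is one insertion away (within
//   // DEFAULT_MAX_EDITS) and shares the non-fuzzy first byte "e".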

View File

@ -0,0 +1,124 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.lucene.spatial.geopoint.search;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.spatial.geopoint.document.GeoPointField.TermEncoding;
/** Implements a point distance range query on a GeoPoint field. This is based on
* {@code org.apache.lucene.spatial.geopoint.search.GeoPointDistanceQuery} and is implemented using a
* {@code org.apache.lucene.search.BooleanClause.MUST_NOT} clause to exclude any points that fall within
* minRadiusMeters from the provided point.
* <p>
* NOTE: this query does not correctly support multi-value docs (see: https://issues.apache.org/jira/browse/LUCENE-7126)
* <br>
* TODO: remove this per ISSUE #17658
**/
public final class XGeoPointDistanceRangeQuery extends GeoPointDistanceQuery {
/** minimum distance range (in meters) from lat, lon center location, maximum is inherited */
protected final double minRadiusMeters;
/**
* Constructs a query for all {@link org.apache.lucene.spatial.geopoint.document.GeoPointField} types within a minimum / maximum
* distance (in meters) range from a given point
*/
public XGeoPointDistanceRangeQuery(final String field, final double centerLat, final double centerLon,
final double minRadiusMeters, final double maxRadiusMeters) {
this(field, TermEncoding.PREFIX, centerLat, centerLon, minRadiusMeters, maxRadiusMeters);
}
/**
* Constructs a query for all {@link org.apache.lucene.spatial.geopoint.document.GeoPointField} types within a minimum / maximum
* distance (in meters) range from a given point. Accepts an optional
* {@link org.apache.lucene.spatial.geopoint.document.GeoPointField.TermEncoding}
*/
public XGeoPointDistanceRangeQuery(final String field, final TermEncoding termEncoding, final double centerLat, final double centerLon,
final double minRadiusMeters, final double maxRadius) {
super(field, termEncoding, centerLat, centerLon, maxRadius);
this.minRadiusMeters = minRadiusMeters;
}
@Override
public Query rewrite(IndexReader reader) {
Query q = super.rewrite(reader);
if (minRadiusMeters == 0.0) {
return q;
}
// add an exclusion query
BooleanQuery.Builder bqb = new BooleanQuery.Builder();
// create a new exclusion query
GeoPointDistanceQuery exclude = new GeoPointDistanceQuery(field, termEncoding, centerLat, centerLon, minRadiusMeters);
// full map search
// if (radiusMeters >= GeoProjectionUtils.SEMIMINOR_AXIS) {
// bqb.add(new BooleanClause(new GeoPointInBBoxQuery(this.field, -180.0, -90.0, 180.0, 90.0), BooleanClause.Occur.MUST));
// } else {
bqb.add(new BooleanClause(q, BooleanClause.Occur.MUST));
// }
bqb.add(new BooleanClause(exclude, BooleanClause.Occur.MUST_NOT));
return bqb.build();
}
@Override
public String toString(String field) {
final StringBuilder sb = new StringBuilder();
sb.append(getClass().getSimpleName());
sb.append(':');
if (!this.field.equals(field)) {
sb.append(" field=");
sb.append(this.field);
sb.append(':');
}
return sb.append( " Center: [")
.append(centerLat)
.append(',')
.append(centerLon)
.append(']')
.append(" From Distance: ")
.append(minRadiusMeters)
.append(" m")
.append(" To Distance: ")
.append(radiusMeters)
.append(" m")
.append(" Lower Left: [")
.append(minLat)
.append(',')
.append(minLon)
.append(']')
.append(" Upper Right: [")
.append(maxLat)
.append(',')
.append(maxLon)
.append("]")
.toString();
}
/** getter method for minimum distance */
public double getMinRadiusMeters() {
return this.minRadiusMeters;
}
/** getter method for maximum distance */
public double getMaxRadiusMeters() {
return this.radiusMeters;
}
}
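// A minimal usage sketch (not part of the original file; the field name and coordinates are
// hypothetical). This matches points between 1 km and 10 km from the center:
//
//   Query ring = new XGeoPointDistanceRangeQuery("location", 40.7128, -74.0060, 1_000.0, 10_000.0);
//
// After rewrite(reader), the result is the superclass's max-radius query (MUST) combined with
// an inner GeoPointDistanceQuery over minRadiusMeters excluded via MUST_NOT, as implemented above.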

View File

@ -19,10 +19,11 @@
package org.elasticsearch;
import org.elasticsearch.action.support.replication.ReplicationOperation;
import org.elasticsearch.cluster.action.shard.ShardStateAction;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.logging.LoggerMessageFormat;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
@ -46,7 +47,7 @@ import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_UUID_NA_VAL
/**
* A base class for all elasticsearch exceptions.
*/
public class ElasticsearchException extends RuntimeException implements ToXContent {
public class ElasticsearchException extends RuntimeException implements ToXContent, Writeable {
public static final String REST_EXCEPTION_SKIP_CAUSE = "rest.exception.cause.skip";
public static final String REST_EXCEPTION_SKIP_STACK_TRACE = "rest.exception.stacktrace.skip";
@ -199,41 +200,7 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
return rootCause;
}
/**
* Check whether this exception contains an exception of the given type:
* either it is of the given class itself or it contains a nested cause
* of the given type.
*
* @param exType the exception type to look for
* @return whether there is a nested exception of the specified type
*/
public boolean contains(Class<? extends Throwable> exType) {
if (exType == null) {
return false;
}
if (exType.isInstance(this)) {
return true;
}
Throwable cause = getCause();
if (cause == this) {
return false;
}
if (cause instanceof ElasticsearchException) {
return ((ElasticsearchException) cause).contains(exType);
} else {
while (cause != null) {
if (exType.isInstance(cause)) {
return true;
}
if (cause.getCause() == cause) {
break;
}
cause = cause.getCause();
}
return false;
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeOptionalString(this.getMessage());
out.writeThrowable(this.getCause());
@ -412,7 +379,8 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
if (simpleName.startsWith("Elasticsearch")) {
simpleName = simpleName.substring("Elasticsearch".length());
}
return Strings.toUnderscoreCase(simpleName);
// TODO: do we really need to make the exception name in underscore casing?
return toUnderscoreCase(simpleName);
}
@Override
@ -528,7 +496,8 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
org.elasticsearch.index.shard.IndexShardStartedException::new, 23),
SEARCH_CONTEXT_MISSING_EXCEPTION(org.elasticsearch.search.SearchContextMissingException.class,
org.elasticsearch.search.SearchContextMissingException::new, 24),
SCRIPT_EXCEPTION(org.elasticsearch.script.ScriptException.class, org.elasticsearch.script.ScriptException::new, 25),
GENERAL_SCRIPT_EXCEPTION(org.elasticsearch.script.GeneralScriptException.class,
org.elasticsearch.script.GeneralScriptException::new, 25),
BATCH_OPERATION_EXCEPTION(org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException.class,
org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException::new, 26),
SNAPSHOT_CREATION_EXCEPTION(org.elasticsearch.snapshots.SnapshotCreationException.class,
@ -595,8 +564,7 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException::new, 59),
EARLY_TERMINATION_EXCEPTION(org.elasticsearch.common.lucene.Lucene.EarlyTerminationException.class,
org.elasticsearch.common.lucene.Lucene.EarlyTerminationException::new, 60),
ROUTING_VALIDATION_EXCEPTION(org.elasticsearch.cluster.routing.RoutingValidationException.class,
org.elasticsearch.cluster.routing.RoutingValidationException::new, 61),
// 61 used to be for RoutingValidationException
NOT_SERIALIZABLE_EXCEPTION_WRAPPER(org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper.class,
org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper::new, 62),
ALIAS_FILTER_PARSING_EXCEPTION(org.elasticsearch.indices.AliasFilterParsingException.class,
@ -678,8 +646,6 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
org.elasticsearch.index.shard.IndexShardRecoveryException::new, 106),
REPOSITORY_MISSING_EXCEPTION(org.elasticsearch.repositories.RepositoryMissingException.class,
org.elasticsearch.repositories.RepositoryMissingException::new, 107),
PERCOLATOR_EXCEPTION(org.elasticsearch.index.percolator.PercolatorException.class,
org.elasticsearch.index.percolator.PercolatorException::new, 108),
DOCUMENT_SOURCE_MISSING_EXCEPTION(org.elasticsearch.index.engine.DocumentSourceMissingException.class,
org.elasticsearch.index.engine.DocumentSourceMissingException::new, 109),
FLUSH_NOT_ALLOWED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.FlushNotAllowedEngineException.class,
@ -696,8 +662,8 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
org.elasticsearch.index.translog.TranslogException::new, 115),
PROCESS_CLUSTER_EVENT_TIMEOUT_EXCEPTION(org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException.class,
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException::new, 116),
RETRY_ON_PRIMARY_EXCEPTION(org.elasticsearch.action.support.replication.TransportReplicationAction.RetryOnPrimaryException.class,
org.elasticsearch.action.support.replication.TransportReplicationAction.RetryOnPrimaryException::new, 117),
RETRY_ON_PRIMARY_EXCEPTION(ReplicationOperation.RetryOnPrimaryException.class,
ReplicationOperation.RetryOnPrimaryException::new, 117),
ELASTICSEARCH_TIMEOUT_EXCEPTION(org.elasticsearch.ElasticsearchTimeoutException.class,
org.elasticsearch.ElasticsearchTimeoutException::new, 118),
QUERY_PHASE_EXECUTION_EXCEPTION(org.elasticsearch.search.query.QueryPhaseExecutionException.class,
@ -741,7 +707,8 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
QUERY_SHARD_EXCEPTION(org.elasticsearch.index.query.QueryShardException.class,
org.elasticsearch.index.query.QueryShardException::new, 141),
NO_LONGER_PRIMARY_SHARD_EXCEPTION(ShardStateAction.NoLongerPrimaryShardException.class,
ShardStateAction.NoLongerPrimaryShardException::new, 142);
ShardStateAction.NoLongerPrimaryShardException::new, 142),
SCRIPT_EXCEPTION(org.elasticsearch.script.ScriptException.class, org.elasticsearch.script.ScriptException::new, 143);
final Class<? extends ElasticsearchException> exceptionClass;
@ -845,4 +812,39 @@ public class ElasticsearchException extends RuntimeException implements ToXConte
interface FunctionThatThrowsIOException<T, R> {
R apply(T t) throws IOException;
}
// lower cases and adds underscores to transitions in a name
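// e.g. "IndexShardStartedException" becomes "index_shard_started_exception";
// a value containing no upper-case characters is returned unchanged.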
private static String toUnderscoreCase(String value) {
StringBuilder sb = new StringBuilder();
boolean changed = false;
for (int i = 0; i < value.length(); i++) {
char c = value.charAt(i);
if (Character.isUpperCase(c)) {
if (!changed) {
// copy it over here
for (int j = 0; j < i; j++) {
sb.append(value.charAt(j));
}
changed = true;
if (i == 0) {
sb.append(Character.toLowerCase(c));
} else {
sb.append('_');
sb.append(Character.toLowerCase(c));
}
} else {
sb.append('_');
sb.append(Character.toLowerCase(c));
}
} else {
if (changed) {
sb.append(c);
}
}
}
if (!changed) {
return value;
}
return sb.toString();
}
}

View File

@ -32,14 +32,12 @@ import java.io.IOException;
/**
*/
@SuppressWarnings("deprecation")
public class Version {
// The logic for ID is: XXYYZZAA, where XX is major version, YY is minor version, ZZ is revision, and AA is the alpha/beta/rc indicator
// AA values below 25 are for alpha builds (since 5.0), values from 25 to 49 are beta builds, and values from 50 to 98 are RC builds, with 99 indicating a release
// the (internal) format of the id is there so we can easily do after/before checks on the id
/*
 * The logic for ID is: XXYYZZAA, where XX is major version, YY is minor version, ZZ is revision, and AA is the alpha/beta/rc indicator.
 * AA values below 25 are for alpha builds (since 5.0), values from 25 to 49 are beta builds, and values from 50 to 98 are RC builds,
 * with 99 indicating a release. The (internal) format of the id is there so we can easily do after/before checks on the id.
 */
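// For example (derived from the constants below): V_2_3_2_ID = 2030299 decodes to major 02,
// minor 03, revision 02, build indicator 99 (a release), while V_5_0_0_alpha2_ID = 5000002
// decodes to major 05, minor 00, revision 00, build indicator 02 (an alpha build).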
public static final int V_2_0_0_beta1_ID = 2000001;
public static final Version V_2_0_0_beta1 = new Version(V_2_0_0_beta1_ID, org.apache.lucene.util.Version.LUCENE_5_2_1);
public static final int V_2_0_0_beta2_ID = 2000002;
@ -68,9 +66,19 @@ public class Version {
public static final Version V_2_3_0 = new Version(V_2_3_0_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_2_3_1_ID = 2030199;
public static final Version V_2_3_1 = new Version(V_2_3_1_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_2_3_2_ID = 2030299;
public static final Version V_2_3_2 = new Version(V_2_3_2_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_2_3_3_ID = 2030399;
public static final Version V_2_3_3 = new Version(V_2_3_3_ID, org.apache.lucene.util.Version.LUCENE_5_5_0);
public static final int V_5_0_0_alpha1_ID = 5000001;
public static final Version V_5_0_0_alpha1 = new Version(V_5_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
public static final Version CURRENT = V_5_0_0_alpha1;
public static final int V_5_0_0_alpha2_ID = 5000002;
public static final Version V_5_0_0_alpha2 = new Version(V_5_0_0_alpha2_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
public static final int V_5_0_0_alpha3_ID = 5000003;
public static final Version V_5_0_0_alpha3 = new Version(V_5_0_0_alpha3_ID, org.apache.lucene.util.Version.LUCENE_6_0_0);
public static final int V_5_0_0_ID = 5000099;
public static final Version V_5_0_0 = new Version(V_5_0_0_ID, org.apache.lucene.util.Version.LUCENE_6_0_1);
public static final Version CURRENT = V_5_0_0;
static {
assert CURRENT.luceneVersion.equals(org.apache.lucene.util.Version.LATEST) : "Version must be upgraded to ["
@ -83,8 +91,18 @@ public class Version {
public static Version fromId(int id) {
switch (id) {
case V_5_0_0_ID:
return V_5_0_0;
case V_5_0_0_alpha3_ID:
return V_5_0_0_alpha3;
case V_5_0_0_alpha2_ID:
return V_5_0_0_alpha2;
case V_5_0_0_alpha1_ID:
return V_5_0_0_alpha1;
case V_2_3_3_ID:
return V_2_3_3;
case V_2_3_2_ID:
return V_2_3_2;
case V_2_3_1_ID:
return V_2_3_1;
case V_2_3_0_ID:
@ -121,12 +139,15 @@ public class Version {
/**
* Return the {@link Version} of Elasticsearch that has been used to create an index given its settings.
*
* @throws IllegalStateException if the given index settings don't contain a value for the key {@value IndexMetaData#SETTING_VERSION_CREATED}
* @throws IllegalStateException if the given index settings don't contain a value for the key
* {@value IndexMetaData#SETTING_VERSION_CREATED}
*/
public static Version indexCreated(Settings indexSettings) {
final Version indexVersion = indexSettings.getAsVersion(IndexMetaData.SETTING_VERSION_CREATED, null);
if (indexVersion == null) {
throw new IllegalStateException("[" + IndexMetaData.SETTING_VERSION_CREATED + "] is not present in the index settings for index with uuid: [" + indexSettings.get(IndexMetaData.SETTING_INDEX_UUID) + "]");
throw new IllegalStateException(
"[" + IndexMetaData.SETTING_VERSION_CREATED + "] is not present in the index settings for index with uuid: ["
+ indexSettings.get(IndexMetaData.SETTING_INDEX_UUID) + "]");
}
return indexVersion;
}
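// A hypothetical usage sketch, assuming "index.version.created" is the settings key behind
// IndexMetaData.SETTING_VERSION_CREATED:
//   Settings s = Settings.builder().put("index.version.created", V_5_0_0_ID).build();
//   Version created = Version.indexCreated(s); // returns V_5_0_0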
@ -155,7 +176,8 @@ public class Version {
}
String[] parts = version.split("\\.|\\-");
if (parts.length < 3 || parts.length > 4) {
throw new IllegalArgumentException("the version needs to contain major, minor, and revision, and optionally the build: " + version);
throw new IllegalArgumentException(
"the version needs to contain major, minor, and revision, and optionally the build: " + version);
}
try {
@ -239,7 +261,8 @@ public class Version {
@SuppressForbidden(reason = "System.out.*")
public static void main(String[] args) {
System.out.println("Version: " + Version.CURRENT + ", Build: " + Build.CURRENT.shortHash() + "/" + Build.CURRENT.date() + ", JVM: " + JvmInfo.jvmInfo().version());
System.out.println("Version: " + Version.CURRENT + ", Build: " + Build.CURRENT.shortHash() + "/" + Build.CURRENT.date() + ", JVM: "
+ JvmInfo.jvmInfo().version());
}
@Override

View File

@ -24,16 +24,21 @@ import org.elasticsearch.transport.BaseTransportResponseHandler;
import org.elasticsearch.transport.TransportException;
import org.elasticsearch.transport.TransportResponse;
import java.util.Objects;
import java.util.function.Supplier;
/**
* A simple base class for action response listeners, defaulting to using the SAME executor (as it's
* very common on response handlers).
*/
public abstract class ActionListenerResponseHandler<Response extends TransportResponse> extends BaseTransportResponseHandler<Response> {
public class ActionListenerResponseHandler<Response extends TransportResponse> extends BaseTransportResponseHandler<Response> {
private final ActionListener<Response> listener;
private final Supplier<Response> responseSupplier;
public ActionListenerResponseHandler(ActionListener<Response> listener) {
this.listener = listener;
public ActionListenerResponseHandler(ActionListener<Response> listener, Supplier<Response> responseSupplier) {
this.listener = Objects.requireNonNull(listener);
this.responseSupplier = Objects.requireNonNull(responseSupplier);
}
@Override
@ -46,6 +51,11 @@ public abstract class ActionListenerResponseHandler<Response extends TransportRe
listener.onFailure(e);
}
@Override
public Response newInstance() {
return responseSupplier.get();
}
@Override
public String executor() {
return ThreadPool.Names.SAME;

View File

@ -115,6 +115,8 @@ import org.elasticsearch.action.admin.indices.settings.put.TransportUpdateSettin
import org.elasticsearch.action.admin.indices.settings.put.UpdateSettingsAction;
import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresAction;
import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStoresAction;
import org.elasticsearch.action.admin.indices.shrink.ShrinkAction;
import org.elasticsearch.action.admin.indices.shrink.TransportShrinkAction;
import org.elasticsearch.action.admin.indices.stats.IndicesStatsAction;
import org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction;
import org.elasticsearch.action.admin.indices.template.delete.DeleteIndexTemplateAction;
@ -147,12 +149,12 @@ import org.elasticsearch.action.get.TransportMultiGetAction;
import org.elasticsearch.action.get.TransportShardMultiGetAction;
import org.elasticsearch.action.index.IndexAction;
import org.elasticsearch.action.index.TransportIndexAction;
import org.elasticsearch.action.indexedscripts.delete.DeleteIndexedScriptAction;
import org.elasticsearch.action.indexedscripts.delete.TransportDeleteIndexedScriptAction;
import org.elasticsearch.action.indexedscripts.get.GetIndexedScriptAction;
import org.elasticsearch.action.indexedscripts.get.TransportGetIndexedScriptAction;
import org.elasticsearch.action.indexedscripts.put.PutIndexedScriptAction;
import org.elasticsearch.action.indexedscripts.put.TransportPutIndexedScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.TransportDeleteStoredScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.GetStoredScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.TransportGetStoredScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.PutStoredScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.TransportPutStoredScriptAction;
import org.elasticsearch.action.ingest.IngestActionFilter;
import org.elasticsearch.action.ingest.IngestProxyActionFilter;
import org.elasticsearch.action.ingest.DeletePipelineAction;
@ -165,10 +167,6 @@ import org.elasticsearch.action.ingest.SimulatePipelineAction;
import org.elasticsearch.action.ingest.SimulatePipelineTransportAction;
import org.elasticsearch.action.main.MainAction;
import org.elasticsearch.action.main.TransportMainAction;
import org.elasticsearch.action.percolate.MultiPercolateAction;
import org.elasticsearch.action.percolate.PercolateAction;
import org.elasticsearch.action.percolate.TransportMultiPercolateAction;
import org.elasticsearch.action.percolate.TransportPercolateAction;
import org.elasticsearch.action.search.ClearScrollAction;
import org.elasticsearch.action.search.MultiSearchAction;
import org.elasticsearch.action.search.SearchAction;
@ -290,6 +288,7 @@ public class ActionModule extends AbstractModule {
registerAction(IndicesSegmentsAction.INSTANCE, TransportIndicesSegmentsAction.class);
registerAction(IndicesShardStoresAction.INSTANCE, TransportIndicesShardStoresAction.class);
registerAction(CreateIndexAction.INSTANCE, TransportCreateIndexAction.class);
registerAction(ShrinkAction.INSTANCE, TransportShrinkAction.class);
registerAction(DeleteIndexAction.INSTANCE, TransportDeleteIndexAction.class);
registerAction(GetIndexAction.INSTANCE, TransportGetIndexAction.class);
registerAction(OpenIndexAction.INSTANCE, TransportOpenIndexAction.class);
@ -332,17 +331,15 @@ public class ActionModule extends AbstractModule {
registerAction(SearchAction.INSTANCE, TransportSearchAction.class);
registerAction(SearchScrollAction.INSTANCE, TransportSearchScrollAction.class);
registerAction(MultiSearchAction.INSTANCE, TransportMultiSearchAction.class);
registerAction(PercolateAction.INSTANCE, TransportPercolateAction.class);
registerAction(MultiPercolateAction.INSTANCE, TransportMultiPercolateAction.class);
registerAction(ExplainAction.INSTANCE, TransportExplainAction.class);
registerAction(ClearScrollAction.INSTANCE, TransportClearScrollAction.class);
registerAction(RecoveryAction.INSTANCE, TransportRecoveryAction.class);
registerAction(RenderSearchTemplateAction.INSTANCE, TransportRenderSearchTemplateAction.class);
//Indexed scripts
registerAction(PutIndexedScriptAction.INSTANCE, TransportPutIndexedScriptAction.class);
registerAction(GetIndexedScriptAction.INSTANCE, TransportGetIndexedScriptAction.class);
registerAction(DeleteIndexedScriptAction.INSTANCE, TransportDeleteIndexedScriptAction.class);
registerAction(PutStoredScriptAction.INSTANCE, TransportPutStoredScriptAction.class);
registerAction(GetStoredScriptAction.INSTANCE, TransportGetStoredScriptAction.class);
registerAction(DeleteStoredScriptAction.INSTANCE, TransportDeleteStoredScriptAction.class);
registerAction(FieldStatsAction.INSTANCE, TransportFieldStatsTransportAction.class);

View File

@ -39,6 +39,10 @@ public abstract class ActionRequest<Request extends ActionRequest<Request>> exte
public abstract ActionRequestValidationException validate();
public boolean getShouldPersistResult() {
return false;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);

View File

@ -22,7 +22,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.StatusToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.rest.RestStatus;
@ -110,10 +109,10 @@ public abstract class DocWriteResponse extends ReplicationResponse implements St
}
static final class Fields {
static final XContentBuilderString _INDEX = new XContentBuilderString("_index");
static final XContentBuilderString _TYPE = new XContentBuilderString("_type");
static final XContentBuilderString _ID = new XContentBuilderString("_id");
static final XContentBuilderString _VERSION = new XContentBuilderString("_version");
static final String _INDEX = "_index";
static final String _TYPE = "_type";
static final String _ID = "_id";
static final String _VERSION = "_version";
}
@Override

View File

@ -40,7 +40,7 @@ public interface IndicesRequest {
*/
IndicesOptions indicesOptions();
static interface Replaceable extends IndicesRequest {
interface Replaceable extends IndicesRequest {
/**
* Sets the indices that the action relates to.
*/

View File

@ -26,10 +26,8 @@ package org.elasticsearch.action;
public interface RealtimeRequest {
/**
* @param realtime Controls whether this request should be realtime by reading from the translog. If <code>null</code>
* is specified then whether the operation will be realtime depends on the api of the concrete request
* subclass.
* @param realtime Controls whether this request should be realtime by reading from the translog.
*/
<R extends RealtimeRequest> R realtime(Boolean realtime);
<R extends RealtimeRequest> R realtime(boolean realtime);
}

View File

@ -28,7 +28,6 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.rest.RestStatus;
@ -280,24 +279,24 @@ public class ReplicationResponse extends ActionResponse {
private static class Fields {
private static final XContentBuilderString _INDEX = new XContentBuilderString("_index");
private static final XContentBuilderString _SHARD = new XContentBuilderString("_shard");
private static final XContentBuilderString _NODE = new XContentBuilderString("_node");
private static final XContentBuilderString REASON = new XContentBuilderString("reason");
private static final XContentBuilderString STATUS = new XContentBuilderString("status");
private static final XContentBuilderString PRIMARY = new XContentBuilderString("primary");
private static final String _INDEX = "_index";
private static final String _SHARD = "_shard";
private static final String _NODE = "_node";
private static final String REASON = "reason";
private static final String STATUS = "status";
private static final String PRIMARY = "primary";
}
}
private static class Fields {
private static final XContentBuilderString _SHARDS = new XContentBuilderString("_shards");
private static final XContentBuilderString TOTAL = new XContentBuilderString("total");
private static final XContentBuilderString SUCCESSFUL = new XContentBuilderString("successful");
private static final XContentBuilderString PENDING = new XContentBuilderString("pending");
private static final XContentBuilderString FAILED = new XContentBuilderString("failed");
private static final XContentBuilderString FAILURES = new XContentBuilderString("failures");
private static final String _SHARDS = "_shards";
private static final String TOTAL = "total";
private static final String SUCCESSFUL = "successful";
private static final String PENDING = "pending";
private static final String FAILED = "failed";
private static final String FAILURES = "failures";
}
}

View File

@ -37,7 +37,7 @@ import static org.elasticsearch.ExceptionsHelper.detailedMessage;
*
* The class is final due to serialization limitations
*/
public final class TaskOperationFailure implements Writeable<TaskOperationFailure>, ToXContent {
public final class TaskOperationFailure implements Writeable, ToXContent {
private final String nodeId;
@ -47,6 +47,16 @@ public final class TaskOperationFailure implements Writeable<TaskOperationFailur
private final RestStatus status;
public TaskOperationFailure(String nodeId, long taskId, Throwable t) {
this.nodeId = nodeId;
this.taskId = taskId;
this.reason = t;
status = ExceptionsHelper.status(t);
}
/**
* Read from a stream.
*/
public TaskOperationFailure(StreamInput in) throws IOException {
nodeId = in.readString();
taskId = in.readLong();
@ -54,11 +64,12 @@ public final class TaskOperationFailure implements Writeable<TaskOperationFailur
status = RestStatus.readFrom(in);
}
public TaskOperationFailure(String nodeId, long taskId, Throwable t) {
this.nodeId = nodeId;
this.taskId = taskId;
this.reason = t;
status = ExceptionsHelper.status(t);
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(nodeId);
out.writeLong(taskId);
out.writeThrowable(reason);
RestStatus.writeTo(out, status);
}
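// Note: the StreamInput constructor above and this writeTo form the Writeable read/write pair;
// the constructor must read exactly the fields writeTo writes, in the same order
// (nodeId, taskId, reason, status).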
public String getNodeId() {
@ -81,19 +92,6 @@ public final class TaskOperationFailure implements Writeable<TaskOperationFailur
return reason;
}
@Override
public TaskOperationFailure readFrom(StreamInput in) throws IOException {
return new TaskOperationFailure(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(nodeId);
out.writeLong(taskId);
out.writeThrowable(reason);
RestStatus.writeTo(out, status);
}
@Override
public String toString() {
return "[" + nodeId + "][" + taskId + "] failed, reason [" + getReason() + "]";

View File

@ -49,11 +49,7 @@ public class TransportActionNodeProxy<Request extends ActionRequest, Response ex
listener.onFailure(validationException);
return;
}
transportService.sendRequest(node, action.name(), request, transportOptions, new ActionListenerResponseHandler<Response>(listener) {
@Override
public Response newInstance() {
return action.newResponse();
}
});
transportService.sendRequest(node, action.name(), request, transportOptions,
new ActionListenerResponseHandler<>(listener, action::newResponse));
}
}

View File

@ -24,14 +24,14 @@ import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.MasterNodeRequest;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.ParseFieldMatcher;
import org.elasticsearch.common.ParseFieldMatcherSupplier;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.Objects;
import static org.elasticsearch.action.ValidateActions.addValidationError;
@ -40,7 +40,8 @@ import static org.elasticsearch.action.ValidateActions.addValidationError;
*/
public class ClusterAllocationExplainRequest extends MasterNodeRequest<ClusterAllocationExplainRequest> {
private static ObjectParser<ClusterAllocationExplainRequest, Void> PARSER = new ObjectParser("cluster/allocation/explain");
private static ObjectParser<ClusterAllocationExplainRequest, ParseFieldMatcherSupplier> PARSER = new ObjectParser<>(
"cluster/allocation/explain");
static {
PARSER.declareString(ClusterAllocationExplainRequest::setIndex, new ParseField("index"));
PARSER.declareInt(ClusterAllocationExplainRequest::setShard, new ParseField("shard"));
@ -148,7 +149,7 @@ public class ClusterAllocationExplainRequest extends MasterNodeRequest<ClusterAl
}
public static ClusterAllocationExplainRequest parse(XContentParser parser) throws IOException {
ClusterAllocationExplainRequest req = PARSER.parse(parser, new ClusterAllocationExplainRequest());
ClusterAllocationExplainRequest req = PARSER.parse(parser, new ClusterAllocationExplainRequest(), () -> ParseFieldMatcher.STRICT);
Exception e = req.validate();
if (e != null) {
throw new ElasticsearchParseException("'index', 'shard', and 'primary' must be specified in allocation explain request", e);

View File

@ -21,7 +21,6 @@ package org.elasticsearch.action.admin.cluster.allocation;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.cluster.routing.allocation.decider.Decision;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
@ -32,7 +31,6 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
@ -40,64 +38,79 @@ import java.util.Map;
* A {@code ClusterAllocationExplanation} is an explanation of why a shard may or may not be allocated to nodes. It also includes weights
* for where the shard is likely to be assigned. It is an immutable class.
*/
public final class ClusterAllocationExplanation implements ToXContent, Writeable<ClusterAllocationExplanation> {
public final class ClusterAllocationExplanation implements ToXContent, Writeable {
private final ShardId shard;
private final boolean primary;
private final boolean hasPendingAsyncFetch;
private final String assignedNodeId;
private final Map<DiscoveryNode, Decision> nodeToDecision;
private final Map<DiscoveryNode, Float> nodeWeights;
private final UnassignedInfo unassignedInfo;
private final long remainingDelayNanos;
private final long allocationDelayMillis;
private final long remainingDelayMillis;
private final Map<DiscoveryNode, NodeExplanation> nodeExplanations;
public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId, long allocationDelayMillis,
long remainingDelayMillis, @Nullable UnassignedInfo unassignedInfo, boolean hasPendingAsyncFetch,
Map<DiscoveryNode, NodeExplanation> nodeExplanations) {
this.shard = shard;
this.primary = primary;
this.hasPendingAsyncFetch = hasPendingAsyncFetch;
this.assignedNodeId = assignedNodeId;
this.unassignedInfo = unassignedInfo;
this.allocationDelayMillis = allocationDelayMillis;
this.remainingDelayMillis = remainingDelayMillis;
this.nodeExplanations = nodeExplanations;
}
public ClusterAllocationExplanation(StreamInput in) throws IOException {
this.shard = ShardId.readShardId(in);
this.primary = in.readBoolean();
this.hasPendingAsyncFetch = in.readBoolean();
this.assignedNodeId = in.readOptionalString();
this.unassignedInfo = in.readOptionalWriteable(UnassignedInfo::new);
this.allocationDelayMillis = in.readVLong();
this.remainingDelayMillis = in.readVLong();
Map<DiscoveryNode, Decision> ntd = null;
int size = in.readVInt();
ntd = new HashMap<>(size);
for (int i = 0; i < size; i++) {
DiscoveryNode dn = new DiscoveryNode(in);
Decision decision = Decision.readFrom(in);
ntd.put(dn, decision);
int mapSize = in.readVInt();
Map<DiscoveryNode, NodeExplanation> nodeToExplanation = new HashMap<>(mapSize);
for (int i = 0; i < mapSize; i++) {
NodeExplanation nodeExplanation = new NodeExplanation(in);
nodeToExplanation.put(nodeExplanation.getNode(), nodeExplanation);
}
this.nodeToDecision = ntd;
Map<DiscoveryNode, Float> ntw = null;
size = in.readVInt();
ntw = new HashMap<>(size);
for (int i = 0; i < size; i++) {
DiscoveryNode dn = new DiscoveryNode(in);
float weight = in.readFloat();
ntw.put(dn, weight);
}
this.nodeWeights = ntw;
remainingDelayNanos = in.readVLong();
this.nodeExplanations = nodeToExplanation;
}
public ClusterAllocationExplanation(ShardId shard, boolean primary, @Nullable String assignedNodeId,
UnassignedInfo unassignedInfo, Map<DiscoveryNode, Decision> nodeToDecision,
Map<DiscoveryNode, Float> nodeWeights, long remainingDelayNanos) {
this.shard = shard;
this.primary = primary;
this.assignedNodeId = assignedNodeId;
this.unassignedInfo = unassignedInfo;
this.nodeToDecision = nodeToDecision == null ? Collections.emptyMap() : nodeToDecision;
this.nodeWeights = nodeWeights == null ? Collections.emptyMap() : nodeWeights;
this.remainingDelayNanos = remainingDelayNanos;
@Override
public void writeTo(StreamOutput out) throws IOException {
this.getShard().writeTo(out);
out.writeBoolean(this.isPrimary());
out.writeBoolean(this.isStillFetchingShardData());
out.writeOptionalString(this.getAssignedNodeId());
out.writeOptionalWriteable(this.getUnassignedInfo());
out.writeVLong(allocationDelayMillis);
out.writeVLong(remainingDelayMillis);
out.writeVInt(this.nodeExplanations.size());
for (NodeExplanation explanation : this.nodeExplanations.values()) {
explanation.writeTo(out);
}
}
/** Return the shard that the explanation is about */
public ShardId getShard() {
return this.shard;
}
/** Return true if the explained shard is primary, false otherwise */
public boolean isPrimary() {
return this.primary;
}
/** Return true if shard data is still being fetched for the allocation */
public boolean isStillFetchingShardData() {
return this.hasPendingAsyncFetch;
}
/** Return true if the shard is assigned to a node */
public boolean isAssigned() {
return this.assignedNodeId != null;
@ -115,22 +128,19 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
return this.unassignedInfo;
}
/** Return a map of node to decision for shard allocation */
public Map<DiscoveryNode, Decision> getNodeDecisions() {
return this.nodeToDecision;
/** Return the configured delay before the shard can be allocated in milliseconds */
public long getAllocationDelayMillis() {
return this.allocationDelayMillis;
}
/**
* Return a map of node to balancer "weight" for allocation. Higher weights mean the balancer wants to allocate the shard to that node
* more.
*/
public Map<DiscoveryNode, Float> getNodeWeights() {
return this.nodeWeights;
/** Return the remaining allocation delay for this shard in milliseconds */
public long getRemainingDelayMillis() {
return this.remainingDelayMillis;
}
/** Return the remaining allocation delay for this shard in nanoseconds */
public long getRemainingDelayNanos() {
return this.remainingDelayNanos;
/** Return a map of node to the explanation for that node */
public Map<DiscoveryNode, NodeExplanation> getNodeExplanations() {
return this.nodeExplanations;
}
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
@ -147,36 +157,16 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
if (assignedNodeId != null) {
builder.field("assigned_node_id", this.assignedNodeId);
}
builder.field("shard_state_fetch_pending", this.hasPendingAsyncFetch);
// If we have unassigned info, show that
if (unassignedInfo != null) {
unassignedInfo.toXContent(builder, params);
long delay = unassignedInfo.getLastComputedLeftDelayNanos();
builder.field("allocation_delay", TimeValue.timeValueNanos(delay));
builder.field("allocation_delay_ms", TimeValue.timeValueNanos(delay).millis());
builder.field("remaining_delay", TimeValue.timeValueNanos(remainingDelayNanos));
builder.field("remaining_delay_ms", TimeValue.timeValueNanos(remainingDelayNanos).millis());
builder.timeValueField("allocation_delay_in_millis", "allocation_delay", TimeValue.timeValueMillis(allocationDelayMillis));
builder.timeValueField("remaining_delay_in_millis", "remaining_delay", TimeValue.timeValueMillis(remainingDelayMillis));
}
builder.startObject("nodes");
for (Map.Entry<DiscoveryNode, Float> entry : nodeWeights.entrySet()) {
DiscoveryNode node = entry.getKey();
builder.startObject(node.getId()); {
builder.field("node_name", node.getName());
builder.startObject("node_attributes"); {
for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {
builder.field(attrEntry.getKey(), attrEntry.getValue());
}
}
builder.endObject(); // end attributes
Decision d = nodeToDecision.get(node);
if (node.getId().equals(assignedNodeId)) {
builder.field("final_decision", "CURRENTLY_ASSIGNED");
} else {
builder.field("final_decision", d.type().toString());
}
builder.field("weight", entry.getValue());
d.toXContent(builder, params);
}
builder.endObject(); // end node <uuid>
for (NodeExplanation explanation : nodeExplanations.values()) {
explanation.toXContent(builder, params);
}
builder.endObject(); // end nodes
}
@ -184,30 +174,105 @@ public final class ClusterAllocationExplanation implements ToXContent, Writeable
return builder;
}
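// A sketch of the JSON fragment built above for an unassigned shard (values hypothetical;
// fields emitted earlier in the method are elided, and the human-readable time fields depend
// on the builder's settings):
// {
//   ..., "shard_state_fetch_pending": false,
//   ...unassigned_info fields...,
//   "allocation_delay_in_millis": 60000,
//   "remaining_delay_in_millis": 30000,
//   "nodes": { ...one object per NodeExplanation... }
// }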
@Override
public ClusterAllocationExplanation readFrom(StreamInput in) throws IOException {
return new ClusterAllocationExplanation(in);
/** An Enum representing the final decision for a shard allocation on a node */
public enum FinalDecision {
// Yes, the shard can be assigned
YES((byte) 0),
// No, the shard cannot be assigned
NO((byte) 1),
// The shard is already assigned to this node
ALREADY_ASSIGNED((byte) 2);
private final byte id;
FinalDecision(byte id) {
this.id = id;
}
private static FinalDecision fromId(byte id) {
switch (id) {
case 0: return YES;
case 1: return NO;
case 2: return ALREADY_ASSIGNED;
default:
throw new IllegalArgumentException("unknown id for final decision: [" + id + "]");
}
}
@Override
public String toString() {
switch (id) {
case 0: return "YES";
case 1: return "NO";
case 2: return "ALREADY_ASSIGNED";
default:
throw new IllegalArgumentException("unknown id for final decision: [" + id + "]");
}
}
static FinalDecision readFrom(StreamInput in) throws IOException {
return fromId(in.readByte());
}
void writeTo(StreamOutput out) throws IOException {
out.writeByte(id);
}
}
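// A minimal round-trip illustration of the byte-id serialization above:
// FinalDecision.NO.writeTo(out) writes the single byte 1, and a subsequent
// FinalDecision.readFrom(in) maps that byte back to NO via fromId.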
@Override
public void writeTo(StreamOutput out) throws IOException {
this.getShard().writeTo(out);
out.writeBoolean(this.isPrimary());
out.writeOptionalString(this.getAssignedNodeId());
out.writeOptionalWriteable(this.getUnassignedInfo());
/** An Enum representing the state of the shard store's copy of the data on a node */
public enum StoreCopy {
// No data for this shard is on the node
NONE((byte) 0),
// A copy of the data is available on this node
AVAILABLE((byte) 1),
// The copy of the data on the node is corrupt
CORRUPT((byte) 2),
// There was an error reading this node's copy of the data
IO_ERROR((byte) 3),
// The copy of the data on the node is stale
STALE((byte) 4),
// The state of the node's copy of the data is unknown
UNKNOWN((byte) 5);
Map<DiscoveryNode, Decision> ntd = this.getNodeDecisions();
out.writeVInt(ntd.size());
for (Map.Entry<DiscoveryNode, Decision> entry : ntd.entrySet()) {
entry.getKey().writeTo(out);
Decision.writeTo(entry.getValue(), out);
private final byte id;
StoreCopy(byte id) {
this.id = id;
}
Map<DiscoveryNode, Float> ntw = this.getNodeWeights();
out.writeVInt(ntw.size());
for (Map.Entry<DiscoveryNode, Float> entry : ntw.entrySet()) {
entry.getKey().writeTo(out);
out.writeFloat(entry.getValue());
private static StoreCopy fromId(byte id) {
switch (id) {
case 0: return NONE;
case 1: return AVAILABLE;
case 2: return CORRUPT;
case 3: return IO_ERROR;
case 4: return STALE;
case 5: return UNKNOWN;
default:
throw new IllegalArgumentException("unknown id for store copy: [" + id + "]");
}
}
@Override
public String toString() {
switch (id) {
case 0: return "NONE";
case 1: return "AVAILABLE";
case 2: return "CORRUPT";
case 3: return "IO_ERROR";
case 4: return "STALE";
case 5: return "UNKNOWN";
default:
throw new IllegalArgumentException("unknown id for store copy: [" + id + "]");
}
}
static StoreCopy readFrom(StreamInput in) throws IOException {
return fromId(in.readByte());
}
void writeTo(StreamOutput out) throws IOException {
out.writeByte(id);
}
out.writeVLong(remainingDelayNanos);
}
}
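Both enums serialize as a single byte through fromId/writeTo, which keeps the wire format stable even if the enum constants are later reordered. A sketch of the round trip, assuming a test in the same package and that BytesReference#streamInput is available in this codebase:

    // Sketch: round-tripping FinalDecision through its byte-id wire format.
    BytesStreamOutput out = new BytesStreamOutput();
    ClusterAllocationExplanation.FinalDecision.ALREADY_ASSIGNED.writeTo(out); // writes byte 2
    StreamInput in = out.bytes().streamInput();
    assert ClusterAllocationExplanation.FinalDecision.readFrom(in)
            == ClusterAllocationExplanation.FinalDecision.ALREADY_ASSIGNED;   // byte 2 -> enum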

View File

@@ -0,0 +1,145 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.allocation;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.allocation.decider.Decision;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;
import java.util.Map;
/** The cluster allocation explanation for a single node */
public class NodeExplanation implements Writeable, ToXContent {
private final DiscoveryNode node;
private final Decision nodeDecision;
private final Float nodeWeight;
private final IndicesShardStoresResponse.StoreStatus storeStatus;
private final ClusterAllocationExplanation.FinalDecision finalDecision;
private final ClusterAllocationExplanation.StoreCopy storeCopy;
private final String finalExplanation;
public NodeExplanation(final DiscoveryNode node, final Decision nodeDecision, final Float nodeWeight,
final @Nullable IndicesShardStoresResponse.StoreStatus storeStatus,
final ClusterAllocationExplanation.FinalDecision finalDecision,
final String finalExplanation,
final ClusterAllocationExplanation.StoreCopy storeCopy) {
this.node = node;
this.nodeDecision = nodeDecision;
this.nodeWeight = nodeWeight;
this.storeStatus = storeStatus;
this.finalDecision = finalDecision;
this.finalExplanation = finalExplanation;
this.storeCopy = storeCopy;
}
public NodeExplanation(StreamInput in) throws IOException {
this.node = new DiscoveryNode(in);
this.nodeDecision = Decision.readFrom(in);
this.nodeWeight = in.readFloat();
if (in.readBoolean()) {
this.storeStatus = IndicesShardStoresResponse.StoreStatus.readStoreStatus(in);
} else {
this.storeStatus = null;
}
this.finalDecision = ClusterAllocationExplanation.FinalDecision.readFrom(in);
this.finalExplanation = in.readString();
this.storeCopy = ClusterAllocationExplanation.StoreCopy.readFrom(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
node.writeTo(out);
Decision.writeTo(nodeDecision, out);
out.writeFloat(nodeWeight);
if (storeStatus == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
storeStatus.writeTo(out);
}
finalDecision.writeTo(out);
out.writeString(finalExplanation);
storeCopy.writeTo(out);
}
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject(node.getId()); {
builder.field("node_name", node.getName());
builder.startObject("node_attributes"); {
for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {
builder.field(attrEntry.getKey(), attrEntry.getValue());
}
}
builder.endObject(); // end attributes
builder.startObject("store"); {
builder.field("shard_copy", storeCopy.toString());
if (storeStatus != null) {
final Throwable storeErr = storeStatus.getStoreException();
if (storeErr != null) {
builder.field("store_exception", ExceptionsHelper.detailedMessage(storeErr));
}
}
}
builder.endObject(); // end store
builder.field("final_decision", finalDecision.toString());
builder.field("final_explanation", finalExplanation.toString());
builder.field("weight", nodeWeight);
nodeDecision.toXContent(builder, params);
}
builder.endObject(); // end node <uuid>
return builder;
}
public DiscoveryNode getNode() {
return this.node;
}
public Decision getDecision() {
return this.nodeDecision;
}
public Float getWeight() {
return this.nodeWeight;
}
@Nullable
public IndicesShardStoresResponse.StoreStatus getStoreStatus() {
return this.storeStatus;
}
public ClusterAllocationExplanation.FinalDecision getFinalDecision() {
return this.finalDecision;
}
public String getFinalExplanation() {
return this.finalExplanation;
}
public ClusterAllocationExplanation.StoreCopy getStoreCopy() {
return this.storeCopy;
}
}
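The nullable storeStatus field above uses the usual presence-flag idiom: write a boolean, write the value only when present, and mirror that exact order on read. The idiom in isolation (MyWriteable is a hypothetical type, not part of this change):

    // Sketch of the presence-flag idiom used for the nullable storeStatus field.
    void writeOptional(StreamOutput out, @Nullable MyWriteable value) throws IOException {
        if (value == null) {
            out.writeBoolean(false);     // absent: flag only
        } else {
            out.writeBoolean(true);      // present: flag, then payload
            value.writeTo(out);
        }
    }

    @Nullable
    MyWriteable readOptional(StreamInput in) throws IOException {
        return in.readBoolean() ? new MyWriteable(in) : null;  // must mirror the write order
    }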

View File

@@ -19,43 +19,46 @@
package org.elasticsearch.action.admin.cluster.allocation;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.apache.lucene.index.CorruptIndexException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresRequest;
import org.elasticsearch.action.admin.indices.shards.IndicesShardStoresResponse;
import org.elasticsearch.action.admin.indices.shards.TransportIndicesShardStoresAction;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.ClusterInfoService;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.MetaData.Custom;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.RoutingNode;
import org.elasticsearch.cluster.routing.RoutingNodes;
import org.elasticsearch.cluster.routing.RoutingNodes.RoutingNodesIterator;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.UnassignedInfo;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.cluster.routing.allocation.RoutingExplanations;
import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocator;
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
import org.elasticsearch.cluster.routing.allocation.decider.Decision;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.ImmutableOpenIntMap;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.gateway.GatewayAllocator;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import static org.elasticsearch.cluster.routing.UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING;
/**
* The {@code TransportClusterAllocationExplainAction} is responsible for actually executing the explanation of a shard's allocation on the
@@ -64,23 +67,26 @@ import java.util.Map;
public class TransportClusterAllocationExplainAction
extends TransportMasterNodeAction<ClusterAllocationExplainRequest, ClusterAllocationExplainResponse> {
private final AllocationService allocationService;
private final ClusterInfoService clusterInfoService;
private final AllocationDeciders allocationDeciders;
private final ShardsAllocator shardAllocator;
private final TransportIndicesShardStoresAction shardStoresAction;
private final GatewayAllocator gatewayAllocator;
@Inject
public TransportClusterAllocationExplainAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver,
AllocationService allocationService, ClusterInfoService clusterInfoService,
AllocationDeciders allocationDeciders, ShardsAllocator shardAllocator) {
ClusterInfoService clusterInfoService, AllocationDeciders allocationDeciders,
ShardsAllocator shardAllocator, TransportIndicesShardStoresAction shardStoresAction,
GatewayAllocator gatewayAllocator) {
super(settings, ClusterAllocationExplainAction.NAME, transportService, clusterService, threadPool, actionFilters,
indexNameExpressionResolver, ClusterAllocationExplainRequest::new);
this.allocationService = allocationService;
this.clusterInfoService = clusterInfoService;
this.allocationDeciders = allocationDeciders;
this.shardAllocator = shardAllocator;
this.shardStoresAction = shardStoresAction;
this.gatewayAllocator = gatewayAllocator;
}
@Override
@@ -118,68 +124,179 @@ public class TransportClusterAllocationExplainAction
}
}
/**
* Construct a {@code NodeExplanation} for the given shard from all the available metadata. This also attempts to construct
* the human-readable {@code FinalDecision} and final explanation as part of the explanation.
*/
public static NodeExplanation calculateNodeExplanation(ShardRouting shard,
IndexMetaData indexMetaData,
DiscoveryNode node,
Decision nodeDecision,
Float nodeWeight,
IndicesShardStoresResponse.StoreStatus storeStatus,
String assignedNodeId,
Set<String> activeAllocationIds,
boolean hasPendingAsyncFetch) {
final ClusterAllocationExplanation.FinalDecision finalDecision;
final ClusterAllocationExplanation.StoreCopy storeCopy;
final String finalExplanation;
if (storeStatus == null) {
// No copies of the data
storeCopy = ClusterAllocationExplanation.StoreCopy.NONE;
} else {
final Throwable storeErr = storeStatus.getStoreException();
if (storeErr != null) {
if (ExceptionsHelper.unwrapCause(storeErr) instanceof CorruptIndexException) {
storeCopy = ClusterAllocationExplanation.StoreCopy.CORRUPT;
} else {
storeCopy = ClusterAllocationExplanation.StoreCopy.IO_ERROR;
}
} else if (activeAllocationIds.isEmpty()) {
// The ids are only empty if dealing with a legacy index
// TODO: fetch the shard state versions and display here?
storeCopy = ClusterAllocationExplanation.StoreCopy.UNKNOWN;
} else if (activeAllocationIds.contains(storeStatus.getAllocationId())) {
storeCopy = ClusterAllocationExplanation.StoreCopy.AVAILABLE;
} else {
// Otherwise, this is a stale copy of the data (allocation ids don't match)
storeCopy = ClusterAllocationExplanation.StoreCopy.STALE;
}
}
if (node.getId().equals(assignedNodeId)) {
finalDecision = ClusterAllocationExplanation.FinalDecision.ALREADY_ASSIGNED;
finalExplanation = "the shard is already assigned to this node";
} else if (hasPendingAsyncFetch &&
shard.primary() == false &&
shard.unassigned() &&
shard.allocatedPostIndexCreate(indexMetaData) &&
nodeDecision.type() != Decision.Type.YES) {
finalExplanation = "the shard cannot be assigned because allocation deciders return a " + nodeDecision.type().name() +
" decision and the shard's state is still being fetched";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (hasPendingAsyncFetch &&
shard.unassigned() &&
shard.allocatedPostIndexCreate(indexMetaData)) {
finalExplanation = "the shard's state is still being fetched so it cannot be allocated";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (shard.primary() && shard.unassigned() && shard.allocatedPostIndexCreate(indexMetaData) &&
storeCopy == ClusterAllocationExplanation.StoreCopy.STALE) {
finalExplanation = "the copy of the shard is stale, allocation ids do not match";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (shard.primary() && shard.unassigned() && shard.allocatedPostIndexCreate(indexMetaData) &&
storeCopy == ClusterAllocationExplanation.StoreCopy.NONE) {
finalExplanation = "there is no copy of the shard available";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (shard.primary() && shard.unassigned() && storeCopy == ClusterAllocationExplanation.StoreCopy.CORRUPT) {
finalExplanation = "the copy of the shard is corrupt";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else if (shard.primary() && shard.unassigned() && storeCopy == ClusterAllocationExplanation.StoreCopy.IO_ERROR) {
finalExplanation = "the copy of the shard cannot be read";
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
} else {
if (nodeDecision.type() == Decision.Type.NO) {
finalDecision = ClusterAllocationExplanation.FinalDecision.NO;
finalExplanation = "the shard cannot be assigned because one or more allocation decider returns a 'NO' decision";
} else {
// TODO: handle throttling decision better here
finalDecision = ClusterAllocationExplanation.FinalDecision.YES;
if (storeCopy == ClusterAllocationExplanation.StoreCopy.AVAILABLE) {
finalExplanation = "the shard can be assigned and the node contains a valid copy of the shard data";
} else {
finalExplanation = "the shard can be assigned";
}
}
}
return new NodeExplanation(node, nodeDecision, nodeWeight, storeStatus, finalDecision, finalExplanation, storeCopy);
}
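The if/else cascade above is ordered by severity: already-assigned wins, then pending shard-state fetches, then store problems, and only then the deciders' verdict. A condensed restatement of that precedence as a reading aid (the booleans are illustrative stand-ins for the checks above, not the shipped logic):

    // Sketch: decision precedence of calculateNodeExplanation, condensed.
    static ClusterAllocationExplanation.FinalDecision precedence(
            boolean assignedHere, boolean fetchPending, boolean storeUnusable, boolean decidersSayNo) {
        if (assignedHere) {
            return ClusterAllocationExplanation.FinalDecision.ALREADY_ASSIGNED;
        } else if (fetchPending || storeUnusable || decidersSayNo) {
            return ClusterAllocationExplanation.FinalDecision.NO;  // the earlier checks pick the message
        } else {
            return ClusterAllocationExplanation.FinalDecision.YES;
        }
    }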
/**
* For the given {@code ShardRouting}, return the explanation of the allocation for that shard on all nodes. If {@code
* includeYesDecisions} is true, returns all decisions, otherwise returns only 'NO' and 'THROTTLE' decisions.
*/
public static ClusterAllocationExplanation explainShard(ShardRouting shard, RoutingAllocation allocation, RoutingNodes routingNodes,
boolean includeYesDecisions, ShardsAllocator shardAllocator) {
boolean includeYesDecisions, ShardsAllocator shardAllocator,
List<IndicesShardStoresResponse.StoreStatus> shardStores,
GatewayAllocator gatewayAllocator) {
// don't short circuit deciders, we want a full explanation
allocation.debugDecision(true);
// get the existing unassigned info if available
UnassignedInfo ui = shard.unassignedInfo();
RoutingNodesIterator iter = routingNodes.nodes();
Map<DiscoveryNode, Decision> nodeToDecision = new HashMap<>();
while (iter.hasNext()) {
RoutingNode node = iter.next();
for (RoutingNode node : routingNodes) {
DiscoveryNode discoNode = node.node();
if (discoNode.isDataNode()) {
Decision d = tryShardOnNode(shard, node, allocation, includeYesDecisions);
nodeToDecision.put(discoNode, d);
}
}
long remainingDelayNanos = 0;
if (ui != null) {
final MetaData metadata = allocation.metaData();
final Settings indexSettings = metadata.index(shard.index()).getSettings();
remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), metadata.settings(), indexSettings);
long remainingDelayMillis = 0;
final MetaData metadata = allocation.metaData();
final IndexMetaData indexMetaData = metadata.index(shard.index());
long allocationDelayMillis = INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.get(indexMetaData.getSettings()).getMillis();
if (ui != null && ui.isDelayed()) {
long remainingDelayNanos = ui.getRemainingDelay(System.nanoTime(), indexMetaData.getSettings());
remainingDelayMillis = TimeValue.timeValueNanos(remainingDelayNanos).millis();
}
return new ClusterAllocationExplanation(shard.shardId(), shard.primary(), shard.currentNodeId(), ui, nodeToDecision,
shardAllocator.weighShard(allocation, shard), remainingDelayNanos);
// Calculate weights for each of the nodes
Map<DiscoveryNode, Float> weights = shardAllocator.weighShard(allocation, shard);
Map<DiscoveryNode, IndicesShardStoresResponse.StoreStatus> nodeToStatus = new HashMap<>(shardStores.size());
for (IndicesShardStoresResponse.StoreStatus status : shardStores) {
nodeToStatus.put(status.getNode(), status);
}
Map<DiscoveryNode, NodeExplanation> explanations = new HashMap<>(shardStores.size());
for (Map.Entry<DiscoveryNode, Decision> entry : nodeToDecision.entrySet()) {
DiscoveryNode node = entry.getKey();
Decision decision = entry.getValue();
Float weight = weights.get(node);
IndicesShardStoresResponse.StoreStatus storeStatus = nodeToStatus.get(node);
NodeExplanation nodeExplanation = calculateNodeExplanation(shard, indexMetaData, node, decision, weight,
storeStatus, shard.currentNodeId(), indexMetaData.activeAllocationIds(shard.getId()),
allocation.hasPendingAsyncFetch());
explanations.put(node, nodeExplanation);
}
return new ClusterAllocationExplanation(shard.shardId(), shard.primary(),
shard.currentNodeId(), allocationDelayMillis, remainingDelayMillis, ui,
gatewayAllocator.hasFetchPending(shard.shardId(), shard.primary()), explanations);
}
@Override
protected void masterOperation(final ClusterAllocationExplainRequest request, final ClusterState state,
final ActionListener<ClusterAllocationExplainResponse> listener) {
final RoutingNodes routingNodes = state.getRoutingNodes();
final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state.nodes(),
clusterInfoService.getClusterInfo(), System.nanoTime());
final RoutingAllocation allocation = new RoutingAllocation(allocationDeciders, routingNodes, state,
clusterInfoService.getClusterInfo(), System.nanoTime(), false);
ShardRouting shardRouting = null;
ShardRouting foundShard = null;
if (request.useAnyUnassignedShard()) {
// If we can use any shard, just pick the first unassigned one (if there are any)
RoutingNodes.UnassignedShards.UnassignedIterator ui = routingNodes.unassigned().iterator();
if (ui.hasNext()) {
shardRouting = ui.next();
foundShard = ui.next();
}
} else {
String index = request.getIndex();
int shard = request.getShard();
if (request.isPrimary()) {
// If we're looking for the primary shard, there's only one copy, so pick it directly
shardRouting = allocation.routingTable().shardRoutingTable(index, shard).primaryShard();
foundShard = allocation.routingTable().shardRoutingTable(index, shard).primaryShard();
} else {
// If looking for a replica, go through all the replica shards
List<ShardRouting> replicaShardRoutings = allocation.routingTable().shardRoutingTable(index, shard).replicaShards();
if (replicaShardRoutings.size() > 0) {
// Pick the first replica at the very least
shardRouting = replicaShardRoutings.get(0);
foundShard = replicaShardRoutings.get(0);
// If some replicas are assigned and others are not,
// prefer an unassigned one
for (ShardRouting replica : replicaShardRoutings) {
if (replica.unassigned()) {
shardRouting = replica;
foundShard = replica;
break;
}
}
@@ -187,14 +304,34 @@ public class TransportClusterAllocationExplainAction
}
}
if (shardRouting == null) {
if (foundShard == null) {
listener.onFailure(new ElasticsearchException("unable to find any shards to explain [{}] in the routing table", request));
return;
}
final ShardRouting shardRouting = foundShard;
logger.debug("explaining the allocation for [{}], found shard [{}]", request, shardRouting);
ClusterAllocationExplanation cae = explainShard(shardRouting, allocation, routingNodes,
request.includeYesDecisions(), shardAllocator);
listener.onResponse(new ClusterAllocationExplainResponse(cae));
getShardStores(shardRouting, new ActionListener<IndicesShardStoresResponse>() {
@Override
public void onResponse(IndicesShardStoresResponse shardStoreResponse) {
ImmutableOpenIntMap<List<IndicesShardStoresResponse.StoreStatus>> shardStatuses =
shardStoreResponse.getStoreStatuses().get(shardRouting.getIndexName());
List<IndicesShardStoresResponse.StoreStatus> shardStoreStatus = shardStatuses.get(shardRouting.id());
ClusterAllocationExplanation cae = explainShard(shardRouting, allocation, routingNodes,
request.includeYesDecisions(), shardAllocator, shardStoreStatus, gatewayAllocator);
listener.onResponse(new ClusterAllocationExplainResponse(cae));
}
@Override
public void onFailure(Throwable e) {
listener.onFailure(e);
}
});
}
private void getShardStores(ShardRouting shard, final ActionListener<IndicesShardStoresResponse> listener) {
IndicesShardStoresRequest request = new IndicesShardStoresRequest(shard.getIndexName());
request.shardStatuses("all");
shardStoresAction.execute(request, listener);
}
}
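masterOperation above now completes in two asynchronous hops: fetch the shard stores, then build the explanation inside onResponse, forwarding failures unchanged. The generic shape of that chaining (the types and helper names here are illustrative):

    // Sketch: chaining one async action into another via ActionListener.
    void runChained(final ActionListener<FinalResponse> listener) {
        fetchIntermediate(new ActionListener<Intermediate>() {
            @Override
            public void onResponse(Intermediate intermediate) {
                listener.onResponse(buildFinal(intermediate)); // continue the chain
            }

            @Override
            public void onFailure(Throwable e) {
                listener.onFailure(e);                         // propagate unchanged
            }
        });
    }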

View File

@@ -19,7 +19,6 @@
package org.elasticsearch.action.admin.cluster.health;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
@@ -30,12 +29,10 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.StatusToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.rest.RestStatus;
import java.io.IOException;
import java.util.List;
import java.util.Locale;
import java.util.Map;
@@ -83,14 +80,6 @@ public class ClusterHealthResponse extends ActionResponse implements StatusToXCo
return clusterStateHealth;
}
/**
* The validation failures on the cluster level (without index validation failures).
*/
public List<String> getValidationFailures() {
return clusterStateHealth.getValidationFailures();
}
public int getActiveShards() {
return clusterStateHealth.getActiveShards();
}
@@ -193,7 +182,7 @@ public class ClusterHealthResponse extends ActionResponse implements StatusToXCo
super.readFrom(in);
clusterName = in.readString();
clusterHealthStatus = ClusterHealthStatus.fromValue(in.readByte());
clusterStateHealth = ClusterStateHealth.readClusterHealth(in);
clusterStateHealth = new ClusterStateHealth(in);
numberOfPendingTasks = in.readInt();
timedOut = in.readBoolean();
numberOfInFlightFetch = in.readInt();
@@ -233,79 +222,50 @@ public class ClusterHealthResponse extends ActionResponse implements StatusToXCo
return isTimedOut() ? RestStatus.REQUEST_TIMEOUT : RestStatus.OK;
}
static final class Fields {
static final XContentBuilderString CLUSTER_NAME = new XContentBuilderString("cluster_name");
static final XContentBuilderString STATUS = new XContentBuilderString("status");
static final XContentBuilderString TIMED_OUT = new XContentBuilderString("timed_out");
static final XContentBuilderString NUMBER_OF_NODES = new XContentBuilderString("number_of_nodes");
static final XContentBuilderString NUMBER_OF_DATA_NODES = new XContentBuilderString("number_of_data_nodes");
static final XContentBuilderString NUMBER_OF_PENDING_TASKS = new XContentBuilderString("number_of_pending_tasks");
static final XContentBuilderString NUMBER_OF_IN_FLIGHT_FETCH = new XContentBuilderString("number_of_in_flight_fetch");
static final XContentBuilderString DELAYED_UNASSIGNED_SHARDS = new XContentBuilderString("delayed_unassigned_shards");
static final XContentBuilderString TASK_MAX_WAIT_TIME_IN_QUEUE = new XContentBuilderString("task_max_waiting_in_queue");
static final XContentBuilderString TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS = new XContentBuilderString("task_max_waiting_in_queue_millis");
static final XContentBuilderString ACTIVE_SHARDS_PERCENT_AS_NUMBER = new XContentBuilderString("active_shards_percent_as_number");
static final XContentBuilderString ACTIVE_SHARDS_PERCENT = new XContentBuilderString("active_shards_percent");
static final XContentBuilderString ACTIVE_PRIMARY_SHARDS = new XContentBuilderString("active_primary_shards");
static final XContentBuilderString ACTIVE_SHARDS = new XContentBuilderString("active_shards");
static final XContentBuilderString RELOCATING_SHARDS = new XContentBuilderString("relocating_shards");
static final XContentBuilderString INITIALIZING_SHARDS = new XContentBuilderString("initializing_shards");
static final XContentBuilderString UNASSIGNED_SHARDS = new XContentBuilderString("unassigned_shards");
static final XContentBuilderString VALIDATION_FAILURES = new XContentBuilderString("validation_failures");
static final XContentBuilderString INDICES = new XContentBuilderString("indices");
}
private static final String CLUSTER_NAME = "cluster_name";
private static final String STATUS = "status";
private static final String TIMED_OUT = "timed_out";
private static final String NUMBER_OF_NODES = "number_of_nodes";
private static final String NUMBER_OF_DATA_NODES = "number_of_data_nodes";
private static final String NUMBER_OF_PENDING_TASKS = "number_of_pending_tasks";
private static final String NUMBER_OF_IN_FLIGHT_FETCH = "number_of_in_flight_fetch";
private static final String DELAYED_UNASSIGNED_SHARDS = "delayed_unassigned_shards";
private static final String TASK_MAX_WAIT_TIME_IN_QUEUE = "task_max_waiting_in_queue";
private static final String TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS = "task_max_waiting_in_queue_millis";
private static final String ACTIVE_SHARDS_PERCENT_AS_NUMBER = "active_shards_percent_as_number";
private static final String ACTIVE_SHARDS_PERCENT = "active_shards_percent";
private static final String ACTIVE_PRIMARY_SHARDS = "active_primary_shards";
private static final String ACTIVE_SHARDS = "active_shards";
private static final String RELOCATING_SHARDS = "relocating_shards";
private static final String INITIALIZING_SHARDS = "initializing_shards";
private static final String UNASSIGNED_SHARDS = "unassigned_shards";
private static final String INDICES = "indices";
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field(Fields.CLUSTER_NAME, getClusterName());
builder.field(Fields.STATUS, getStatus().name().toLowerCase(Locale.ROOT));
builder.field(Fields.TIMED_OUT, isTimedOut());
builder.field(Fields.NUMBER_OF_NODES, getNumberOfNodes());
builder.field(Fields.NUMBER_OF_DATA_NODES, getNumberOfDataNodes());
builder.field(Fields.ACTIVE_PRIMARY_SHARDS, getActivePrimaryShards());
builder.field(Fields.ACTIVE_SHARDS, getActiveShards());
builder.field(Fields.RELOCATING_SHARDS, getRelocatingShards());
builder.field(Fields.INITIALIZING_SHARDS, getInitializingShards());
builder.field(Fields.UNASSIGNED_SHARDS, getUnassignedShards());
builder.field(Fields.DELAYED_UNASSIGNED_SHARDS, getDelayedUnassignedShards());
builder.field(Fields.NUMBER_OF_PENDING_TASKS, getNumberOfPendingTasks());
builder.field(Fields.NUMBER_OF_IN_FLIGHT_FETCH, getNumberOfInFlightFetch());
builder.timeValueField(Fields.TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, Fields.TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
builder.percentageField(Fields.ACTIVE_SHARDS_PERCENT_AS_NUMBER, Fields.ACTIVE_SHARDS_PERCENT, getActiveShardsPercent());
builder.field(CLUSTER_NAME, getClusterName());
builder.field(STATUS, getStatus().name().toLowerCase(Locale.ROOT));
builder.field(TIMED_OUT, isTimedOut());
builder.field(NUMBER_OF_NODES, getNumberOfNodes());
builder.field(NUMBER_OF_DATA_NODES, getNumberOfDataNodes());
builder.field(ACTIVE_PRIMARY_SHARDS, getActivePrimaryShards());
builder.field(ACTIVE_SHARDS, getActiveShards());
builder.field(RELOCATING_SHARDS, getRelocatingShards());
builder.field(INITIALIZING_SHARDS, getInitializingShards());
builder.field(UNASSIGNED_SHARDS, getUnassignedShards());
builder.field(DELAYED_UNASSIGNED_SHARDS, getDelayedUnassignedShards());
builder.field(NUMBER_OF_PENDING_TASKS, getNumberOfPendingTasks());
builder.field(NUMBER_OF_IN_FLIGHT_FETCH, getNumberOfInFlightFetch());
builder.timeValueField(TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
builder.percentageField(ACTIVE_SHARDS_PERCENT_AS_NUMBER, ACTIVE_SHARDS_PERCENT, getActiveShardsPercent());
String level = params.param("level", "cluster");
boolean outputIndices = "indices".equals(level) || "shards".equals(level);
if (!getValidationFailures().isEmpty()) {
builder.startArray(Fields.VALIDATION_FAILURES);
for (String validationFailure : getValidationFailures()) {
builder.value(validationFailure);
}
// if we don't print index level information, still print the index validation failures
// so we know why the status is red
if (!outputIndices) {
for (ClusterIndexHealth indexHealth : clusterStateHealth.getIndices().values()) {
builder.startObject(indexHealth.getIndex());
if (!indexHealth.getValidationFailures().isEmpty()) {
builder.startArray(Fields.VALIDATION_FAILURES);
for (String validationFailure : indexHealth.getValidationFailures()) {
builder.value(validationFailure);
}
builder.endArray();
}
builder.endObject();
}
}
builder.endArray();
}
if (outputIndices) {
builder.startObject(Fields.INDICES);
builder.startObject(INDICES);
for (ClusterIndexHealth indexHealth : clusterStateHealth.getIndices().values()) {
builder.startObject(indexHealth.getIndex(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject(indexHealth.getIndex());
indexHealth.toXContent(builder, params);
builder.endObject();
}
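The level request parameter decides how much of the health tree toXContent emits; anything other than indices or shards keeps the output at cluster level. How the parameter is read, in isolation (a sketch using ToXContent.MapParams, which exists for exactly this kind of wiring):

    // Sketch: driving the "level" parameter through ToXContent.Params.
    ToXContent.Params params =
            new ToXContent.MapParams(Collections.singletonMap("level", "indices"));
    String level = params.param("level", "cluster");                            // -> "indices"
    boolean outputIndices = "indices".equals(level) || "shards".equals(level);  // -> true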

View File

@@ -54,7 +54,8 @@ public class TransportClusterHealthAction extends TransportMasterNodeReadAction<
public TransportClusterHealthAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, ClusterName clusterName, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver, GatewayAllocator gatewayAllocator) {
super(settings, ClusterHealthAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterHealthRequest::new);
super(settings, ClusterHealthAction.NAME, false, transportService, clusterService, threadPool, actionFilters,
indexNameExpressionResolver, ClusterHealthRequest::new);
this.clusterName = clusterName;
this.gatewayAllocator = gatewayAllocator;
}

View File

@@ -19,12 +19,14 @@
package org.elasticsearch.action.admin.cluster.node.hotthreads;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import java.util.List;
/**
*/
@@ -33,26 +35,18 @@ public class NodesHotThreadsResponse extends BaseNodesResponse<NodeHotThreads> {
NodesHotThreadsResponse() {
}
public NodesHotThreadsResponse(ClusterName clusterName, NodeHotThreads[] nodes) {
super(clusterName, nodes);
public NodesHotThreadsResponse(ClusterName clusterName, List<NodeHotThreads> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeHotThreads[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = NodeHotThreads.readNodeHotThreads(in);
}
protected List<NodeHotThreads> readNodesFrom(StreamInput in) throws IOException {
return in.readList(NodeHotThreads::readNodeHotThreads);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeHotThreads node : nodes) {
node.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeHotThreads> nodes) throws IOException {
out.writeStreamableList(nodes);
}
}
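After this refactor a nodes-level response no longer hand-rolls array (de)serialization; BaseNodesResponse owns the framing and failure handling, and the subclass only supplies the element codec, as here and in NodesInfoResponse and NodesStatsResponse below. The subclass contract in the abstract (MyNodeResponse and readNodeResponse are illustrative names):

    // Sketch: the minimal BaseNodesResponse subclass after this refactor.
    public class MyNodesResponse extends BaseNodesResponse<MyNodeResponse> {

        public MyNodesResponse(ClusterName clusterName, List<MyNodeResponse> nodes,
                               List<FailedNodeException> failures) {
            super(clusterName, nodes, failures);
        }

        @Override
        protected List<MyNodeResponse> readNodesFrom(StreamInput in) throws IOException {
            return in.readList(MyNodeResponse::readNodeResponse);  // element decoder only
        }

        @Override
        protected void writeNodesTo(StreamOutput out, List<MyNodeResponse> nodes) throws IOException {
            out.writeStreamableList(nodes);                        // element encoder only
        }
    }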

View File

@@ -20,6 +20,7 @@
package org.elasticsearch.action.admin.cluster.node.hotthreads;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
@@ -35,33 +36,28 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportNodesHotThreadsAction extends TransportNodesAction<NodesHotThreadsRequest, NodesHotThreadsResponse, TransportNodesHotThreadsAction.NodeRequest, NodeHotThreads> {
public class TransportNodesHotThreadsAction extends TransportNodesAction<NodesHotThreadsRequest,
NodesHotThreadsResponse,
TransportNodesHotThreadsAction.NodeRequest,
NodeHotThreads> {
@Inject
public TransportNodesHotThreadsAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesHotThreadsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, NodesHotThreadsRequest::new, NodeRequest::new, ThreadPool.Names.GENERIC);
indexNameExpressionResolver, NodesHotThreadsRequest::new, NodeRequest::new, ThreadPool.Names.GENERIC, NodeHotThreads.class);
}
@Override
protected NodesHotThreadsResponse newResponse(NodesHotThreadsRequest request, AtomicReferenceArray responses) {
final List<NodeHotThreads> nodes = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeHotThreads) {
nodes.add((NodeHotThreads) resp);
}
}
return new NodesHotThreadsResponse(clusterName, nodes.toArray(new NodeHotThreads[nodes.size()]));
protected NodesHotThreadsResponse newResponse(NodesHotThreadsRequest request,
List<NodeHotThreads> responses, List<FailedNodeException> failures) {
return new NodesHotThreadsResponse(clusterName, responses, failures);
}
@Override

View File

@@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.info;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.node.DiscoveryNode;
@@ -30,6 +31,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.List;
import java.util.Map;
/**
@@ -40,47 +42,37 @@ public class NodesInfoResponse extends BaseNodesResponse<NodeInfo> implements To
public NodesInfoResponse() {
}
public NodesInfoResponse(ClusterName clusterName, NodeInfo[] nodes) {
super(clusterName, nodes);
public NodesInfoResponse(ClusterName clusterName, List<NodeInfo> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeInfo[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = NodeInfo.readNodeInfo(in);
}
protected List<NodeInfo> readNodesFrom(StreamInput in) throws IOException {
return in.readList(NodeInfo::readNodeInfo);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeInfo node : nodes) {
node.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeInfo> nodes) throws IOException {
out.writeStreamableList(nodes);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("cluster_name", getClusterName().value(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject("nodes");
for (NodeInfo nodeInfo : this) {
builder.startObject(nodeInfo.getNode().getId(), XContentBuilder.FieldCaseConversion.NONE);
for (NodeInfo nodeInfo : getNodes()) {
builder.startObject(nodeInfo.getNode().getId());
builder.field("name", nodeInfo.getNode().getName(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("name", nodeInfo.getNode().getName());
builder.field("transport_address", nodeInfo.getNode().getAddress().toString());
builder.field("host", nodeInfo.getNode().getHostName(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("ip", nodeInfo.getNode().getHostAddress(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("host", nodeInfo.getNode().getHostName());
builder.field("ip", nodeInfo.getNode().getHostAddress());
builder.field("version", nodeInfo.getVersion());
builder.field("build_hash", nodeInfo.getBuild().shortHash());
if (nodeInfo.getServiceAttributes() != null) {
for (Map.Entry<String, String> nodeAttribute : nodeInfo.getServiceAttributes().entrySet()) {
builder.field(nodeAttribute.getKey(), nodeAttribute.getValue(), XContentBuilder.FieldCaseConversion.NONE);
builder.field(nodeAttribute.getKey(), nodeAttribute.getValue());
}
}
@@ -93,7 +85,7 @@ public class NodesInfoResponse extends BaseNodesResponse<NodeInfo> implements To
if (!nodeInfo.getNode().getAttributes().isEmpty()) {
builder.startObject("attributes");
for (Map.Entry<String, String> entry : nodeInfo.getNode().getAttributes().entrySet()) {
builder.field(entry.getKey(), entry.getValue(), XContentBuilder.FieldCaseConversion.NONE);
builder.field(entry.getKey(), entry.getValue());
}
builder.endObject();
}

View File

@@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.info;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
@@ -34,36 +35,32 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportNodesInfoAction extends TransportNodesAction<NodesInfoRequest, NodesInfoResponse, TransportNodesInfoAction.NodeInfoRequest, NodeInfo> {
public class TransportNodesInfoAction extends TransportNodesAction<NodesInfoRequest,
NodesInfoResponse,
TransportNodesInfoAction.NodeInfoRequest,
NodeInfo> {
private final NodeService nodeService;
@Inject
public TransportNodesInfoAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
NodeService nodeService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
NodeService nodeService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesInfoAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, NodesInfoRequest::new, NodeInfoRequest::new, ThreadPool.Names.MANAGEMENT);
indexNameExpressionResolver, NodesInfoRequest::new, NodeInfoRequest::new, ThreadPool.Names.MANAGEMENT, NodeInfo.class);
this.nodeService = nodeService;
}
@Override
protected NodesInfoResponse newResponse(NodesInfoRequest nodesInfoRequest, AtomicReferenceArray responses) {
final List<NodeInfo> nodesInfos = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeInfo) {
nodesInfos.add((NodeInfo) resp);
}
}
return new NodesInfoResponse(clusterName, nodesInfos.toArray(new NodeInfo[nodesInfos.size()]));
protected NodesInfoResponse newResponse(NodesInfoRequest nodesInfoRequest,
List<NodeInfo> responses, List<FailedNodeException> failures) {
return new NodesInfoResponse(clusterName, responses, failures);
}
@Override

View File

@@ -38,7 +38,8 @@ public final class TransportLivenessAction implements TransportRequestHandler<Li
ClusterService clusterService, TransportService transportService) {
this.clusterService = clusterService;
this.clusterName = clusterName;
transportService.registerRequestHandler(NAME, LivenessRequest::new, ThreadPool.Names.SAME, this);
transportService.registerRequestHandler(NAME, LivenessRequest::new, ThreadPool.Names.SAME,
false, false /*can not trip circuit breaker*/, this);
}
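The longer registerRequestHandler overload takes two extra booleans; the inline comment confirms the second is canTripCircuitBreaker, and the first is presumably forceExecution (an assumption worth checking against TransportService). Named out for readability:

    // Sketch: naming the positional booleans of the longer overload.
    // Only the second flag's meaning is confirmed by the inline comment above;
    // forceExecution is our reading of the first.
    final boolean forceExecution = false;
    final boolean canTripCircuitBreaker = false; // liveness must answer even under memory pressure
    transportService.registerRequestHandler(NAME, LivenessRequest::new, ThreadPool.Names.SAME,
            forceExecution, canTripCircuitBreaker, this);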
@Override

View File

@@ -224,7 +224,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
threadPool = ThreadPoolStats.readThreadPoolStats(in);
}
if (in.readBoolean()) {
fs = FsInfo.readFsInfo(in);
fs = new FsInfo(in);
}
if (in.readBoolean()) {
transport = TransportStats.readTransportStats(in);
@@ -299,10 +299,10 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
if (!params.param("node_info_format", "default").equals("none")) {
builder.field("name", getNode().getName(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("transport_address", getNode().getAddress().toString(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("host", getNode().getHostName(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("ip", getNode().getAddress(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("name", getNode().getName());
builder.field("transport_address", getNode().getAddress().toString());
builder.field("host", getNode().getHostName());
builder.field("ip", getNode().getAddress());
builder.startArray("roles");
for (DiscoveryNode.Role role : getNode().getRoles()) {
@@ -313,7 +313,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
if (!getNode().getAttributes().isEmpty()) {
builder.startObject("attributes");
for (Map.Entry<String, String> attrEntry : getNode().getAttributes().entrySet()) {
builder.field(attrEntry.getKey(), attrEntry.getValue(), XContentBuilder.FieldCaseConversion.NONE);
builder.field(attrEntry.getKey(), attrEntry.getValue());
}
builder.endObject();
}

View File

@@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.io.stream.StreamInput;
@@ -28,6 +29,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.List;
/**
*
@@ -37,35 +39,25 @@ public class NodesStatsResponse extends BaseNodesResponse<NodeStats> implements
NodesStatsResponse() {
}
public NodesStatsResponse(ClusterName clusterName, NodeStats[] nodes) {
super(clusterName, nodes);
public NodesStatsResponse(ClusterName clusterName, List<NodeStats> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeStats[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = NodeStats.readNodeStats(in);
}
protected List<NodeStats> readNodesFrom(StreamInput in) throws IOException {
return in.readList(NodeStats::readNodeStats);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeStats node : nodes) {
node.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeStats> nodes) throws IOException {
out.writeStreamableList(nodes);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("cluster_name", getClusterName().value());
builder.startObject("nodes");
for (NodeStats nodeStats : this) {
builder.startObject(nodeStats.getNode().getId(), XContentBuilder.FieldCaseConversion.NONE);
for (NodeStats nodeStats : getNodes()) {
builder.startObject(nodeStats.getNode().getId());
builder.field("timestamp", nodeStats.getTimestamp());
nodeStats.toXContent(builder, params);
@@ -88,4 +80,4 @@ public class NodesStatsResponse extends BaseNodesResponse<NodeStats> implements
return "{ \"error\" : \"" + e.getMessage() + "\"}";
}
}
}
}

View File

@@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
@@ -34,36 +35,31 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
*/
public class TransportNodesStatsAction extends TransportNodesAction<NodesStatsRequest, NodesStatsResponse, TransportNodesStatsAction.NodeStatsRequest, NodeStats> {
public class TransportNodesStatsAction extends TransportNodesAction<NodesStatsRequest,
NodesStatsResponse,
TransportNodesStatsAction.NodeStatsRequest,
NodeStats> {
private final NodeService nodeService;
@Inject
public TransportNodesStatsAction(Settings settings, ClusterName clusterName, ThreadPool threadPool,
ClusterService clusterService, TransportService transportService,
NodeService nodeService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesStatsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,
NodesStatsRequest::new, NodeStatsRequest::new, ThreadPool.Names.MANAGEMENT);
NodeService nodeService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, NodesStatsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, NodesStatsRequest::new, NodeStatsRequest::new, ThreadPool.Names.MANAGEMENT, NodeStats.class);
this.nodeService = nodeService;
}
@Override
protected NodesStatsResponse newResponse(NodesStatsRequest nodesInfoRequest, AtomicReferenceArray responses) {
final List<NodeStats> nodeStats = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeStats) {
nodeStats.add((NodeStats) resp);
}
}
return new NodesStatsResponse(clusterName, nodeStats.toArray(new NodeStats[nodeStats.size()]));
protected NodesStatsResponse newResponse(NodesStatsRequest request, List<NodeStats> responses, List<FailedNodeException> failures) {
return new NodesStatsResponse(clusterName, responses, failures);
}
@Override

View File

@@ -56,7 +56,7 @@ import java.util.function.Consumer;
* Transport action that can be used to cancel currently running cancellable tasks.
* <p>
* For a task to be cancellable it has to return an instance of
* {@link CancellableTask} from {@link TransportRequest#createTask(long, String, String)}
* {@link CancellableTask} from {@link TransportRequest#createTask(long, String, String, TaskId)}
*/
public class TransportCancelTasksAction extends TransportTasksAction<CancellableTask, CancelTasksRequest, CancelTasksResponse, TaskInfo> {
@@ -251,7 +251,7 @@ public class TransportCancelTasksAction extends TransportTasksAction<Cancellable
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
parentTaskId = new TaskId(in);
parentTaskId = TaskId.readFromStream(in);
ban = in.readBoolean();
if (ban) {
reason = in.readString();

View File

@@ -23,11 +23,11 @@ import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.TaskOperationFailure;
import org.elasticsearch.action.support.tasks.BaseTasksResponse;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.tasks.TaskId;
import java.io.IOException;
@@ -54,7 +54,8 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
public ListTasksResponse() {
}
public ListTasksResponse(List<TaskInfo> tasks, List<TaskOperationFailure> taskFailures, List<? extends FailedNodeException> nodeFailures) {
public ListTasksResponse(List<TaskInfo> tasks, List<TaskOperationFailure> taskFailures,
List<? extends FailedNodeException> nodeFailures) {
super(taskFailures, nodeFailures);
this.tasks = tasks == null ? Collections.emptyList() : Collections.unmodifiableList(new ArrayList<>(tasks));
}
@@ -163,7 +164,7 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
builder.startObject("nodes");
for (Map.Entry<DiscoveryNode, List<TaskInfo>> entry : getPerNodeTasks().entrySet()) {
DiscoveryNode node = entry.getKey();
builder.startObject(node.getId(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject(node.getId());
builder.field("name", node.getName());
builder.field("transport_address", node.getAddress().toString());
builder.field("host", node.getHostName());
@@ -178,23 +179,24 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
if (!node.getAttributes().isEmpty()) {
builder.startObject("attributes");
for (Map.Entry<String, String> attrEntry : node.getAttributes().entrySet()) {
builder.field(attrEntry.getKey(), attrEntry.getValue(), XContentBuilder.FieldCaseConversion.NONE);
builder.field(attrEntry.getKey(), attrEntry.getValue());
}
builder.endObject();
}
builder.startObject("tasks");
for(TaskInfo task : entry.getValue()) {
builder.startObject(task.getTaskId().toString(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject(task.getTaskId().toString());
task.toXContent(builder, params);
builder.endObject();
}
builder.endObject();
builder.endObject();
}
builder.endObject();
} else if ("parents".equals(groupBy)) {
builder.startObject("tasks");
for (TaskGroup group : getTaskGroups()) {
builder.startObject(group.getTaskInfo().getTaskId().toString(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject(group.getTaskInfo().getTaskId().toString());
group.toXContent(builder, params);
builder.endObject();
}
@@ -205,14 +207,6 @@ public class ListTasksResponse extends BaseTasksResponse implements ToXContent {
@Override
public String toString() {
try {
XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint();
builder.startObject();
toXContent(builder, EMPTY_PARAMS);
builder.endObject();
return builder.string();
} catch (IOException e) {
return "{ \"error\" : \"" + e.getMessage() + "\"}";
}
return Strings.toString(this);
}
}

View File

@@ -39,7 +39,7 @@ import java.util.concurrent.TimeUnit;
* and use in APIs. Instead, immutable and streamable TaskInfo objects are used to represent
* snapshot information about currently running tasks.
*/
public class TaskInfo implements Writeable<TaskInfo>, ToXContent {
public class TaskInfo implements Writeable, ToXContent {
private final DiscoveryNode node;
@@ -75,21 +75,34 @@ public class TaskInfo implements Writeable<TaskInfo>, ToXContent {
this.parentTaskId = parentTaskId;
}
/**
* Read from a stream.
*/
public TaskInfo(StreamInput in) throws IOException {
node = new DiscoveryNode(in);
taskId = new TaskId(node.getId(), in.readLong());
type = in.readString();
action = in.readString();
description = in.readOptionalString();
if (in.readBoolean()) {
status = in.readTaskStatus();
} else {
status = null;
}
status = in.readOptionalNamedWriteable(Task.Status.class);
startTime = in.readLong();
runningTimeNanos = in.readLong();
cancellable = in.readBoolean();
parentTaskId = new TaskId(in);
parentTaskId = TaskId.readFromStream(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
node.writeTo(out);
out.writeLong(taskId.getId());
out.writeString(type);
out.writeString(action);
out.writeOptionalString(description);
out.writeOptionalNamedWriteable(status);
out.writeLong(startTime);
out.writeLong(runningTimeNanos);
out.writeBoolean(cancellable);
parentTaskId.writeTo(out);
}
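The new read/write pair collapses the old boolean-then-readTaskStatus dance into readOptionalNamedWriteable/writeOptionalNamedWriteable, which encode the presence flag and the concrete implementation's registered name in one call. The symmetry in miniature (MyHolder is an illustrative type; reading presumably requires a StreamInput wired to a NamedWriteableRegistry that knows the concrete Status classes):

    // Sketch: optional named-writeable symmetry, as used for Task.Status above.
    class MyHolder implements Writeable {
        @Nullable
        private final Task.Status status;

        MyHolder(StreamInput in) throws IOException {
            status = in.readOptionalNamedWriteable(Task.Status.class); // flag + name + payload
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            out.writeOptionalNamedWriteable(status);                   // must mirror the read
        }
    }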
public TaskId getTaskId() {
@@ -152,30 +165,6 @@ public class TaskInfo implements Writeable<TaskInfo>, ToXContent {
return parentTaskId;
}
@Override
public TaskInfo readFrom(StreamInput in) throws IOException {
return new TaskInfo(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
node.writeTo(out);
out.writeLong(taskId.getId());
out.writeString(type);
out.writeString(action);
out.writeOptionalString(description);
if (status != null) {
out.writeBoolean(true);
out.writeTaskStatus(status);
} else {
out.writeBoolean(false);
}
out.writeLong(startTime);
out.writeLong(runningTimeNanos);
out.writeBoolean(cancellable);
parentTaskId.writeTo(out);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("node", node.getId());

View File

@@ -51,12 +51,15 @@ public class TransportListTasksAction extends TransportTasksAction<Task, ListTas
private static final TimeValue DEFAULT_WAIT_FOR_COMPLETION_TIMEOUT = timeValueSeconds(30);
@Inject
public TransportListTasksAction(Settings settings, ClusterName clusterName, ThreadPool threadPool, ClusterService clusterService, TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ListTasksAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, ListTasksRequest::new, ListTasksResponse::new, ThreadPool.Names.MANAGEMENT);
public TransportListTasksAction(Settings settings, ClusterName clusterName, ThreadPool threadPool, ClusterService clusterService,
TransportService transportService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ListTasksAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ListTasksRequest::new, ListTasksResponse::new, ThreadPool.Names.MANAGEMENT);
}
@Override
protected ListTasksResponse newResponse(ListTasksRequest request, List<TaskInfo> tasks, List<TaskOperationFailure> taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {
protected ListTasksResponse newResponse(ListTasksRequest request, List<TaskInfo> tasks,
List<TaskOperationFailure> taskOperationFailures, List<FailedNodeException> failedNodeExceptions) {
return new ListTasksResponse(tasks, taskOperationFailures, failedNodeExceptions);
}

View File

@@ -26,7 +26,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.common.xcontent.XContentHelper;
import java.io.IOException;
@@ -78,16 +77,16 @@ public class VerifyRepositoryResponse extends ActionResponse implements ToXConte
}
static final class Fields {
static final XContentBuilderString NODES = new XContentBuilderString("nodes");
static final XContentBuilderString NAME = new XContentBuilderString("name");
static final String NODES = "nodes";
static final String NAME = "name";
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject(Fields.NODES);
for (DiscoveryNode node : nodes) {
builder.startObject(node.getId(), XContentBuilder.FieldCaseConversion.NONE);
builder.field(Fields.NAME, node.getName(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject(node.getId());
builder.field(Fields.NAME, node.getName());
builder.endObject();
}
builder.endObject();

View File

@@ -19,27 +19,24 @@
package org.elasticsearch.action.admin.cluster.reroute;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import java.io.IOException;
import java.util.Objects;
/**
* Request to submit cluster reroute allocation commands
*/
public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteRequest> {
AllocationCommands commands = new AllocationCommands();
boolean dryRun;
boolean explain;
private AllocationCommands commands = new AllocationCommands();
private boolean dryRun;
private boolean explain;
private boolean retryFailed;
public ClusterRerouteRequest() {
}
@ -80,6 +77,15 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
return this;
}
/**
* Sets the retry failed flag (defaults to <tt>false</tt>). If true, the
* request will retry allocating shards that can't currently be allocated due to too many allocation failures.
*/
public ClusterRerouteRequest setRetryFailed(boolean retryFailed) {
this.retryFailed = retryFailed;
return this;
}
/**
* Returns the current explain flag
*/
@ -88,33 +94,28 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
}
/**
* Sets the source for the request.
* Returns the current retry failed flag
*/
public ClusterRerouteRequest source(BytesReference source) throws Exception {
try (XContentParser parser = XContentHelper.createParser(source)) {
XContentParser.Token token;
String currentFieldName = null;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
currentFieldName = parser.currentName();
} else if (token == XContentParser.Token.START_ARRAY) {
if ("commands".equals(currentFieldName)) {
this.commands = AllocationCommands.fromXContent(parser);
} else {
throw new ElasticsearchParseException("failed to parse reroute request, got start array with wrong field name [{}]", currentFieldName);
}
} else if (token.isValue()) {
if ("dry_run".equals(currentFieldName) || "dryRun".equals(currentFieldName)) {
dryRun = parser.booleanValue();
} else {
throw new ElasticsearchParseException("failed to parse reroute request, got value with wrong field name [{}]", currentFieldName);
}
}
}
}
public boolean isRetryFailed() {
return this.retryFailed;
}
/**
* Set the allocation commands to execute.
*/
public ClusterRerouteRequest commands(AllocationCommands commands) {
this.commands = commands;
return this;
}
/**
* Returns the allocation commands to execute
*/
public AllocationCommands getCommands() {
return commands;
}
@Override
public ActionRequestValidationException validate() {
return null;
@ -126,6 +127,7 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
commands = AllocationCommands.readFrom(in);
dryRun = in.readBoolean();
explain = in.readBoolean();
retryFailed = in.readBoolean();
readTimeout(in);
}
@ -135,6 +137,28 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
AllocationCommands.writeTo(commands, out);
out.writeBoolean(dryRun);
out.writeBoolean(explain);
out.writeBoolean(retryFailed);
writeTimeout(out);
}
@Override
public boolean equals(Object obj) {
if (obj == null || getClass() != obj.getClass()) {
return false;
}
ClusterRerouteRequest other = (ClusterRerouteRequest) obj;
// Override equals and hashCode for testing
return Objects.equals(commands, other.commands) &&
Objects.equals(dryRun, other.dryRun) &&
Objects.equals(explain, other.explain) &&
Objects.equals(timeout, other.timeout) &&
Objects.equals(retryFailed, other.retryFailed) &&
Objects.equals(masterNodeTimeout, other.masterNodeTimeout);
}
@Override
public int hashCode() {
// Override equals and hashCode for testing
return Objects.hash(commands, dryRun, explain, timeout, retryFailed, masterNodeTimeout);
}
}
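
Because retryFailed is appended at the tail of the existing stream layout in both readFrom and writeTo, a serialization round-trip preserves it; the equals/hashCode overrides above are what make that checkable. A test-style sketch (the buffer helpers are assumptions for illustration):

    ClusterRerouteRequest original = new ClusterRerouteRequest().setRetryFailed(true);
    BytesStreamOutput out = new BytesStreamOutput();
    original.writeTo(out);
    ClusterRerouteRequest copy = new ClusterRerouteRequest();
    copy.readFrom(out.bytes().streamInput());    // streamInput() accessor assumed
    assert original.equals(copy) && original.hashCode() == copy.hashCode();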


@ -22,13 +22,12 @@ package org.elasticsearch.action.admin.cluster.reroute;
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand;
import org.elasticsearch.common.bytes.BytesReference;
/**
* Builder for a cluster reroute request
*/
public class ClusterRerouteRequestBuilder extends AcknowledgedRequestBuilder<ClusterRerouteRequest, ClusterRerouteResponse, ClusterRerouteRequestBuilder> {
public class ClusterRerouteRequestBuilder
extends AcknowledgedRequestBuilder<ClusterRerouteRequest, ClusterRerouteResponse, ClusterRerouteRequestBuilder> {
public ClusterRerouteRequestBuilder(ElasticsearchClient client, ClusterRerouteAction action) {
super(client, action, new ClusterRerouteRequest());
}
@ -61,10 +60,11 @@ public class ClusterRerouteRequestBuilder extends AcknowledgedRequestBuilder<Clu
}
/**
* Sets the source for the request
* Sets the retry failed flag (defaults to <tt>false</tt>). If true, the
* request will retry allocating shards that can't currently be allocated due to too many allocation failures.
*/
public ClusterRerouteRequestBuilder setSource(BytesReference source) throws Exception {
request.source(source);
public ClusterRerouteRequestBuilder setRetryFailed(boolean retryFailed) {
request.setRetryFailed(retryFailed);
return this;
}
}
}
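
With setSource gone, callers go through typed setters instead of a raw body. A usage sketch (command arguments illustrative, and the add(...) overload assumed from the builder's existing API):

    // Move shard 0 of "my-index" and also retry shards stuck after repeated allocation failures.
    client.admin().cluster().prepareReroute()
            .add(new MoveAllocationCommand("my-index", 0, "node-a", "node-b"))
            .setRetryFailed(true)
            .get();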


@ -33,6 +33,7 @@ import org.elasticsearch.cluster.routing.allocation.RoutingExplanations;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -68,38 +69,55 @@ public class TransportClusterRerouteAction extends TransportMasterNodeAction<Clu
@Override
protected void masterOperation(final ClusterRerouteRequest request, final ClusterState state, final ActionListener<ClusterRerouteResponse> listener) {
clusterService.submitStateUpdateTask("cluster_reroute (api)", new AckedClusterStateUpdateTask<ClusterRerouteResponse>(Priority.IMMEDIATE, request, listener) {
private volatile ClusterState clusterStateToSend;
private volatile RoutingExplanations explanations;
@Override
protected ClusterRerouteResponse newResponse(boolean acknowledged) {
return new ClusterRerouteResponse(acknowledged, clusterStateToSend, explanations);
}
@Override
public void onAckTimeout() {
listener.onResponse(new ClusterRerouteResponse(false, clusterStateToSend, new RoutingExplanations()));
}
@Override
public void onFailure(String source, Throwable t) {
logger.debug("failed to perform [{}]", t, source);
super.onFailure(source, t);
}
@Override
public ClusterState execute(ClusterState currentState) {
RoutingAllocation.Result routingResult = allocationService.reroute(currentState, request.commands, request.explain());
ClusterState newState = ClusterState.builder(currentState).routingResult(routingResult).build();
clusterStateToSend = newState;
explanations = routingResult.explanations();
if (request.dryRun) {
return currentState;
}
return newState;
}
});
clusterService.submitStateUpdateTask("cluster_reroute (api)", new ClusterRerouteResponseAckedClusterStateUpdateTask(logger,
allocationService, request, listener));
}
}
static class ClusterRerouteResponseAckedClusterStateUpdateTask extends AckedClusterStateUpdateTask<ClusterRerouteResponse> {
private final ClusterRerouteRequest request;
private final ActionListener<ClusterRerouteResponse> listener;
private final ESLogger logger;
private final AllocationService allocationService;
private volatile ClusterState clusterStateToSend;
private volatile RoutingExplanations explanations;
ClusterRerouteResponseAckedClusterStateUpdateTask(ESLogger logger, AllocationService allocationService, ClusterRerouteRequest request,
ActionListener<ClusterRerouteResponse> listener) {
super(Priority.IMMEDIATE, request, listener);
this.request = request;
this.listener = listener;
this.logger = logger;
this.allocationService = allocationService;
}
@Override
protected ClusterRerouteResponse newResponse(boolean acknowledged) {
return new ClusterRerouteResponse(acknowledged, clusterStateToSend, explanations);
}
@Override
public void onAckTimeout() {
listener.onResponse(new ClusterRerouteResponse(false, clusterStateToSend, new RoutingExplanations()));
}
@Override
public void onFailure(String source, Throwable t) {
logger.debug("failed to perform [{}]", t, source);
super.onFailure(source, t);
}
@Override
public ClusterState execute(ClusterState currentState) {
RoutingAllocation.Result routingResult = allocationService.reroute(currentState, request.getCommands(), request.explain(),
request.isRetryFailed());
ClusterState newState = ClusterState.builder(currentState).routingResult(routingResult).build();
clusterStateToSend = newState;
explanations = routingResult.explanations();
if (request.dryRun()) {
return currentState;
}
return newState;
}
}
}
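
Note how execute() computes newState but hands back currentState when dryRun is set: the response still carries the would-be state via clusterStateToSend while the live cluster is untouched. In caller terms (accessor names assumed):

    ClusterRerouteResponse resp = client.admin().cluster().prepareReroute()
            .setDryRun(true)
            .setRetryFailed(true)
            .get();
    // resp.getState() shows the state that would result; nothing was committed.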


@ -26,6 +26,7 @@ import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
@ -33,16 +34,14 @@ import java.io.IOException;
*/
public class ClusterSearchShardsGroup implements Streamable, ToXContent {
private Index index;
private int shardId;
private ShardId shardId;
ShardRouting[] shards;
ClusterSearchShardsGroup() {
}
public ClusterSearchShardsGroup(Index index, int shardId, ShardRouting[] shards) {
this.index = index;
public ClusterSearchShardsGroup(ShardId shardId, ShardRouting[] shards) {
this.shardId = shardId;
this.shards = shards;
}
@ -54,11 +53,11 @@ public class ClusterSearchShardsGroup implements Streamable, ToXContent {
}
public String getIndex() {
return index.getName();
return shardId.getIndexName();
}
public int getShardId() {
return shardId;
return shardId.id();
}
public ShardRouting[] getShards() {
@ -67,18 +66,16 @@ public class ClusterSearchShardsGroup implements Streamable, ToXContent {
@Override
public void readFrom(StreamInput in) throws IOException {
index = new Index(in);
shardId = in.readVInt();
shardId = ShardId.readShardId(in);
shards = new ShardRouting[in.readVInt()];
for (int i = 0; i < shards.length; i++) {
shards[i] = ShardRouting.readShardRoutingEntry(in, index, shardId);
shards[i] = new ShardRouting(shardId, in);
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
index.writeTo(out);
out.writeVInt(shardId);
shardId.writeTo(out);
out.writeVInt(shards.length);
for (ShardRouting shardRouting : shards) {
shardRouting.writeToThin(out);
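
ShardId bundles the index and the shard number, which is why one writeTo/readShardId pair now replaces the separate index-plus-vint dance. For orientation (constructor arguments illustrative):

    ShardId shardId = new ShardId(new Index("logs-2016.06.03", "n0tAr3alUuid"), 3);
    shardId.getIndexName();  // "logs-2016.06.03"
    shardId.id();            // 3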


@ -34,6 +34,7 @@ import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -78,8 +79,7 @@ public class TransportClusterSearchShardsAction extends TransportMasterNodeReadA
ClusterSearchShardsGroup[] groupResponses = new ClusterSearchShardsGroup[groupShardsIterator.size()];
int currentGroup = 0;
for (ShardIterator shardIt : groupShardsIterator) {
Index index = shardIt.shardId().getIndex();
int shardId = shardIt.shardId().getId();
ShardId shardId = shardIt.shardId();
ShardRouting[] shardRoutings = new ShardRouting[shardIt.size()];
int currentShard = 0;
shardIt.reset();
@ -87,7 +87,7 @@ public class TransportClusterSearchShardsAction extends TransportMasterNodeReadA
shardRoutings[currentShard++] = shard;
nodeIds.add(shard.currentNodeId());
}
groupResponses[currentGroup++] = new ClusterSearchShardsGroup(index, shardId, shardRoutings);
groupResponses[currentGroup++] = new ClusterSearchShardsGroup(shardId, shardRoutings);
}
DiscoveryNode[] nodes = new DiscoveryNode[nodeIds.size()];
int currentNode = 0;


@ -25,7 +25,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.snapshots.SnapshotInfo;
@ -58,13 +57,13 @@ public class CreateSnapshotResponse extends ActionResponse implements ToXContent
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
snapshotInfo = SnapshotInfo.readOptionalSnapshotInfo(in);
snapshotInfo = in.readOptionalWriteable(SnapshotInfo::new);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeOptionalStreamable(snapshotInfo);
out.writeOptionalWriteable(snapshotInfo);
}
/**
@ -82,18 +81,13 @@ public class CreateSnapshotResponse extends ActionResponse implements ToXContent
return snapshotInfo.status();
}
static final class Fields {
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString ACCEPTED = new XContentBuilderString("accepted");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
if (snapshotInfo != null) {
builder.field(Fields.SNAPSHOT);
builder.field("snapshot");
snapshotInfo.toXContent(builder, params);
} else {
builder.field(Fields.ACCEPTED, true);
builder.field("accepted", true);
}
return builder;
}
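
The optional-Writeable helpers used here package up the presence-flag pattern written out by hand in TaskInfo.writeTo earlier:

    out.writeOptionalWriteable(snapshotInfo);                             // boolean flag, then the value if present
    SnapshotInfo restored = in.readOptionalWriteable(SnapshotInfo::new);  // null when the flag was false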


@ -26,10 +26,10 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
@ -72,7 +72,7 @@ public class TransportCreateSnapshotAction extends TransportMasterNodeAction<Cre
@Override
protected void masterOperation(final CreateSnapshotRequest request, ClusterState state, final ActionListener<CreateSnapshotResponse> listener) {
SnapshotsService.SnapshotRequest snapshotRequest =
new SnapshotsService.SnapshotRequest("create_snapshot [" + request.snapshot() + "]", request.snapshot(), request.repository())
new SnapshotsService.SnapshotRequest(request.repository(), request.snapshot(), "create_snapshot [" + request.snapshot() + "]")
.indices(request.indices())
.indicesOptions(request.indicesOptions())
.partial(request.partial())
@ -84,19 +84,19 @@ public class TransportCreateSnapshotAction extends TransportMasterNodeAction<Cre
public void onResponse() {
if (request.waitForCompletion()) {
snapshotsService.addListener(new SnapshotsService.SnapshotCompletionListener() {
SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshot());
@Override
public void onSnapshotCompletion(SnapshotId snapshotId, SnapshotInfo snapshot) {
if (this.snapshotId.equals(snapshotId)) {
listener.onResponse(new CreateSnapshotResponse(snapshot));
public void onSnapshotCompletion(Snapshot snapshot, SnapshotInfo snapshotInfo) {
if (snapshot.getRepository().equals(request.repository()) &&
snapshot.getSnapshotId().getName().equals(request.snapshot())) {
listener.onResponse(new CreateSnapshotResponse(snapshotInfo));
snapshotsService.removeListener(this);
}
}
@Override
public void onSnapshotFailure(SnapshotId snapshotId, Throwable t) {
if (this.snapshotId.equals(snapshotId)) {
public void onSnapshotFailure(Snapshot snapshot, Throwable t) {
if (snapshot.getRepository().equals(request.repository()) &&
snapshot.getSnapshotId().getName().equals(request.snapshot())) {
listener.onFailure(t);
snapshotsService.removeListener(this);
}


@ -26,7 +26,6 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
@ -66,8 +65,7 @@ public class TransportDeleteSnapshotAction extends TransportMasterNodeAction<Del
@Override
protected void masterOperation(final DeleteSnapshotRequest request, ClusterState state, final ActionListener<DeleteSnapshotResponse> listener) {
SnapshotId snapshotIds = new SnapshotId(request.repository(), request.snapshot());
snapshotsService.deleteSnapshot(snapshotIds, new SnapshotsService.DeleteSnapshotListener() {
snapshotsService.deleteSnapshot(request.repository(), request.snapshot(), new SnapshotsService.DeleteSnapshotListener() {
@Override
public void onResponse() {
listener.onResponse(new DeleteSnapshotResponse(true));


@ -24,7 +24,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.snapshots.SnapshotInfo;
import java.io.IOException;
@ -43,7 +42,7 @@ public class GetSnapshotsResponse extends ActionResponse implements ToXContent {
}
GetSnapshotsResponse(List<SnapshotInfo> snapshots) {
this.snapshots = snapshots;
this.snapshots = Collections.unmodifiableList(snapshots);
}
/**
@ -61,7 +60,7 @@ public class GetSnapshotsResponse extends ActionResponse implements ToXContent {
int size = in.readVInt();
List<SnapshotInfo> builder = new ArrayList<>();
for (int i = 0; i < size; i++) {
builder.add(SnapshotInfo.readSnapshotInfo(in));
builder.add(new SnapshotInfo(in));
}
snapshots = Collections.unmodifiableList(builder);
}
@ -75,13 +74,9 @@ public class GetSnapshotsResponse extends ActionResponse implements ToXContent {
}
}
static final class Fields {
static final XContentBuilderString SNAPSHOTS = new XContentBuilderString("snapshots");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
builder.startArray("snapshots");
for (SnapshotInfo snapshotInfo : snapshots) {
snapshotInfo.toXContent(builder, params);
}
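
The size-prefixed loop above is the long-hand form of the list helpers this change uses elsewhere; assuming this branch has the reader-based overload, the whole read collapses to:

    snapshots = Collections.unmodifiableList(in.readList(SnapshotInfo::new));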


@ -26,21 +26,22 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotId;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotMissingException;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
/**
@ -53,7 +54,8 @@ public class TransportGetSnapshotsAction extends TransportMasterNodeAction<GetSn
public TransportGetSnapshotsAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, SnapshotsService snapshotsService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, GetSnapshotsAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, GetSnapshotsRequest::new);
super(settings, GetSnapshotsAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
GetSnapshotsRequest::new);
this.snapshotsService = snapshotsService;
}
@ -73,42 +75,51 @@ public class TransportGetSnapshotsAction extends TransportMasterNodeAction<GetSn
}
@Override
protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, final ActionListener<GetSnapshotsResponse> listener) {
protected void masterOperation(final GetSnapshotsRequest request, ClusterState state,
final ActionListener<GetSnapshotsResponse> listener) {
try {
final String repository = request.repository();
List<SnapshotInfo> snapshotInfoBuilder = new ArrayList<>();
if (isAllSnapshots(request.snapshots())) {
List<Snapshot> snapshots = snapshotsService.snapshots(request.repository(), request.ignoreUnavailable());
for (Snapshot snapshot : snapshots) {
snapshotInfoBuilder.add(new SnapshotInfo(snapshot));
}
snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(repository));
snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository,
snapshotsService.snapshotIds(repository),
request.ignoreUnavailable()));
} else if (isCurrentSnapshots(request.snapshots())) {
List<Snapshot> snapshots = snapshotsService.currentSnapshots(request.repository());
for (Snapshot snapshot : snapshots) {
snapshotInfoBuilder.add(new SnapshotInfo(snapshot));
}
snapshotInfoBuilder.addAll(snapshotsService.currentSnapshots(repository));
} else {
Set<String> snapshotsToGet = new LinkedHashSet<>(); // to keep insertion order
List<Snapshot> snapshots = null;
final Map<String, SnapshotId> allSnapshotIds = new HashMap<>();
for (SnapshotInfo snapshotInfo : snapshotsService.currentSnapshots(repository)) {
SnapshotId snapshotId = snapshotInfo.snapshotId();
allSnapshotIds.put(snapshotId.getName(), snapshotId);
}
for (SnapshotId snapshotId : snapshotsService.snapshotIds(repository)) {
allSnapshotIds.put(snapshotId.getName(), snapshotId);
}
final Set<SnapshotId> toResolve = new LinkedHashSet<>(); // maintain order
for (String snapshotOrPattern : request.snapshots()) {
if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) {
snapshotsToGet.add(snapshotOrPattern);
} else {
if (snapshots == null) { // lazily load snapshots
snapshots = snapshotsService.snapshots(request.repository(), request.ignoreUnavailable());
if (allSnapshotIds.containsKey(snapshotOrPattern)) {
toResolve.add(allSnapshotIds.get(snapshotOrPattern));
} else if (request.ignoreUnavailable() == false) {
throw new SnapshotMissingException(repository, snapshotOrPattern);
}
for (Snapshot snapshot : snapshots) {
if (Regex.simpleMatch(snapshotOrPattern, snapshot.name())) {
snapshotsToGet.add(snapshot.name());
} else {
for (Map.Entry<String, SnapshotId> entry : allSnapshotIds.entrySet()) {
if (Regex.simpleMatch(snapshotOrPattern, entry.getKey())) {
toResolve.add(entry.getValue());
}
}
}
}
for (String snapshot : snapshotsToGet) {
SnapshotId snapshotId = new SnapshotId(request.repository(), snapshot);
snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));
if (toResolve.isEmpty() && request.ignoreUnavailable() == false) {
throw new SnapshotMissingException(repository, request.snapshots()[0]);
}
snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository, new ArrayList<>(toResolve), request.ignoreUnavailable()));
}
listener.onResponse(new GetSnapshotsResponse(Collections.unmodifiableList(snapshotInfoBuilder)));
listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder));
} catch (Throwable t) {
listener.onFailure(t);
}
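
A concrete trace of the resolution loop (snapshot names illustrative):

    // Repository holds: s1, s2, backup-1; in-progress snapshots are merged into allSnapshotIds first.
    // Request snapshots = ["s1", "backup-*"], ignoreUnavailable = false:
    //   "s1"       -> exact hit in allSnapshotIds           -> toResolve = {s1}
    //   "backup-*" -> wildcard, matched against every name  -> toResolve = {s1, backup-1}
    // Request snapshots = ["nope"]: no exact hit and not a pattern -> SnapshotMissingException.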


@ -25,7 +25,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.snapshots.RestoreInfo;
@ -74,18 +73,13 @@ public class RestoreSnapshotResponse extends ActionResponse implements ToXConten
return restoreInfo.status();
}
static final class Fields {
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString ACCEPTED = new XContentBuilderString("accepted");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
if (restoreInfo != null) {
builder.field(Fields.SNAPSHOT);
builder.field("snapshot");
restoreInfo.toXContent(builder, params);
} else {
builder.field(Fields.ACCEPTED, true);
builder.field("accepted", true);
}
return builder;
}


@ -26,12 +26,12 @@ import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.snapshots.RestoreInfo;
import org.elasticsearch.snapshots.RestoreService;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@ -72,23 +72,22 @@ public class TransportRestoreSnapshotAction extends TransportMasterNodeAction<Re
}
@Override
protected void masterOperation(final RestoreSnapshotRequest request, ClusterState state, final ActionListener<RestoreSnapshotResponse> listener) {
RestoreService.RestoreRequest restoreRequest = new RestoreService.RestoreRequest(
"restore_snapshot[" + request.snapshot() + "]", request.repository(), request.snapshot(),
protected void masterOperation(final RestoreSnapshotRequest request, final ClusterState state, final ActionListener<RestoreSnapshotResponse> listener) {
RestoreService.RestoreRequest restoreRequest = new RestoreService.RestoreRequest(request.repository(), request.snapshot(),
request.indices(), request.indicesOptions(), request.renamePattern(), request.renameReplacement(),
request.settings(), request.masterNodeTimeout(), request.includeGlobalState(), request.partial(), request.includeAliases(),
request.indexSettings(), request.ignoreIndexSettings());
request.indexSettings(), request.ignoreIndexSettings(), "restore_snapshot[" + request.snapshot() + "]");
restoreService.restoreSnapshot(restoreRequest, new ActionListener<RestoreInfo>() {
@Override
public void onResponse(RestoreInfo restoreInfo) {
if (restoreInfo == null && request.waitForCompletion()) {
restoreService.addListener(new ActionListener<RestoreService.RestoreCompletionResponse>() {
SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshot());
@Override
public void onResponse(RestoreService.RestoreCompletionResponse restoreCompletionResponse) {
if (this.snapshotId.equals(restoreCompletionResponse.getSnapshotId())) {
final Snapshot snapshot = restoreCompletionResponse.getSnapshot();
if (snapshot.getRepository().equals(request.repository()) &&
snapshot.getSnapshotId().getName().equals(request.snapshot())) {
listener.onResponse(new RestoreSnapshotResponse(restoreCompletionResponse.getRestoreInfo()));
restoreService.removeListener(this);
}


@ -24,7 +24,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;
@ -135,9 +134,9 @@ public class SnapshotIndexShardStatus extends BroadcastShardResponse implements
}
static final class Fields {
static final XContentBuilderString STAGE = new XContentBuilderString("stage");
static final XContentBuilderString REASON = new XContentBuilderString("reason");
static final XContentBuilderString NODE = new XContentBuilderString("node");
static final String STAGE = "stage";
static final String REASON = "reason";
static final String NODE = "node";
}
@Override


@ -21,7 +21,6 @@ package org.elasticsearch.action.admin.cluster.snapshots.status;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import java.io.IOException;
import java.util.Collection;
@ -91,12 +90,12 @@ public class SnapshotIndexStatus implements Iterable<SnapshotIndexShardStatus>,
}
static final class Fields {
static final XContentBuilderString SHARDS = new XContentBuilderString("shards");
static final String SHARDS = "shards";
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject(getIndex(), XContentBuilder.FieldCaseConversion.NONE);
builder.startObject(getIndex());
shardsStats.toXContent(builder, params);
stats.toXContent(builder, params);
builder.startObject(Fields.SHARDS);


@ -21,7 +21,6 @@ package org.elasticsearch.action.admin.cluster.snapshots.status;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import java.io.IOException;
import java.util.Collection;
@ -106,13 +105,13 @@ public class SnapshotShardsStats implements ToXContent {
}
static final class Fields {
static final XContentBuilderString SHARDS_STATS = new XContentBuilderString("shards_stats");
static final XContentBuilderString INITIALIZING = new XContentBuilderString("initializing");
static final XContentBuilderString STARTED = new XContentBuilderString("started");
static final XContentBuilderString FINALIZING = new XContentBuilderString("finalizing");
static final XContentBuilderString DONE = new XContentBuilderString("done");
static final XContentBuilderString FAILED = new XContentBuilderString("failed");
static final XContentBuilderString TOTAL = new XContentBuilderString("total");
static final String SHARDS_STATS = "shards_stats";
static final String INITIALIZING = "initializing";
static final String STARTED = "started";
static final String FINALIZING = "finalizing";
static final String DONE = "done";
static final String FAILED = "failed";
static final String TOTAL = "total";
}
@Override


@ -24,7 +24,6 @@ import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;
import java.io.IOException;
@ -130,16 +129,16 @@ public class SnapshotStats implements Streamable, ToXContent {
}
static final class Fields {
static final XContentBuilderString STATS = new XContentBuilderString("stats");
static final XContentBuilderString NUMBER_OF_FILES = new XContentBuilderString("number_of_files");
static final XContentBuilderString PROCESSED_FILES = new XContentBuilderString("processed_files");
static final XContentBuilderString TOTAL_SIZE_IN_BYTES = new XContentBuilderString("total_size_in_bytes");
static final XContentBuilderString TOTAL_SIZE = new XContentBuilderString("total_size");
static final XContentBuilderString PROCESSED_SIZE_IN_BYTES = new XContentBuilderString("processed_size_in_bytes");
static final XContentBuilderString PROCESSED_SIZE = new XContentBuilderString("processed_size");
static final XContentBuilderString START_TIME_IN_MILLIS = new XContentBuilderString("start_time_in_millis");
static final XContentBuilderString TIME_IN_MILLIS = new XContentBuilderString("time_in_millis");
static final XContentBuilderString TIME = new XContentBuilderString("time");
static final String STATS = "stats";
static final String NUMBER_OF_FILES = "number_of_files";
static final String PROCESSED_FILES = "processed_files";
static final String TOTAL_SIZE_IN_BYTES = "total_size_in_bytes";
static final String TOTAL_SIZE = "total_size";
static final String PROCESSED_SIZE_IN_BYTES = "processed_size_in_bytes";
static final String PROCESSED_SIZE = "processed_size";
static final String START_TIME_IN_MILLIS = "start_time_in_millis";
static final String TIME_IN_MILLIS = "time_in_millis";
static final String TIME = "time";
}
@Override


@ -20,13 +20,12 @@
package org.elasticsearch.action.admin.cluster.snapshots.status;
import org.elasticsearch.cluster.SnapshotsInProgress.State;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
@ -36,6 +35,7 @@ import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import static java.util.Collections.unmodifiableMap;
@ -45,7 +45,7 @@ import static java.util.Collections.unmodifiableMap;
*/
public class SnapshotStatus implements ToXContent, Streamable {
private SnapshotId snapshotId;
private Snapshot snapshot;
private State state;
@ -57,11 +57,10 @@ public class SnapshotStatus implements ToXContent, Streamable {
private SnapshotStats stats;
SnapshotStatus(SnapshotId snapshotId, State state, List<SnapshotIndexShardStatus> shards) {
this.snapshotId = snapshotId;
this.state = state;
this.shards = shards;
SnapshotStatus(final Snapshot snapshot, final State state, final List<SnapshotIndexShardStatus> shards) {
this.snapshot = Objects.requireNonNull(snapshot);
this.state = Objects.requireNonNull(state);
this.shards = Objects.requireNonNull(shards);
shardsStats = new SnapshotShardsStats(shards);
updateShardStats();
}
@ -70,10 +69,10 @@ public class SnapshotStatus implements ToXContent, Streamable {
}
/**
* Returns snapshot id
* Returns snapshot
*/
public SnapshotId getSnapshotId() {
return snapshotId;
public Snapshot getSnapshot() {
return snapshot;
}
/**
@ -125,7 +124,7 @@ public class SnapshotStatus implements ToXContent, Streamable {
@Override
public void readFrom(StreamInput in) throws IOException {
snapshotId = SnapshotId.readSnapshotId(in);
snapshot = new Snapshot(in);
state = State.fromValue(in.readByte());
int size = in.readVInt();
List<SnapshotIndexShardStatus> builder = new ArrayList<>();
@ -138,7 +137,7 @@ public class SnapshotStatus implements ToXContent, Streamable {
@Override
public void writeTo(StreamOutput out) throws IOException {
snapshotId.writeTo(out);
snapshot.writeTo(out);
out.writeByte(state.value());
out.writeVInt(shards.size());
for (SnapshotIndexShardStatus shard : shards) {
@ -171,7 +170,6 @@ public class SnapshotStatus implements ToXContent, Streamable {
}
}
/**
* Returns number of files in the snapshot
*/
@ -179,22 +177,22 @@ public class SnapshotStatus implements ToXContent, Streamable {
return stats;
}
static final class Fields {
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString REPOSITORY = new XContentBuilderString("repository");
static final XContentBuilderString STATE = new XContentBuilderString("state");
static final XContentBuilderString INDICES = new XContentBuilderString("indices");
}
private static final String SNAPSHOT = "snapshot";
private static final String REPOSITORY = "repository";
private static final String UUID = "uuid";
private static final String STATE = "state";
private static final String INDICES = "indices";
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject();
builder.field(Fields.SNAPSHOT, snapshotId.getSnapshot());
builder.field(Fields.REPOSITORY, snapshotId.getRepository());
builder.field(Fields.STATE, state.name());
builder.field(SNAPSHOT, snapshot.getSnapshotId().getName());
builder.field(REPOSITORY, snapshot.getRepository());
builder.field(UUID, snapshot.getSnapshotId().getUUID());
builder.field(STATE, state.name());
shardsStats.toXContent(builder, params);
stats.toXContent(builder, params);
builder.startObject(Fields.INDICES);
builder.startObject(INDICES);
for (SnapshotIndexStatus indexStatus : getIndices().values()) {
indexStatus.toXContent(builder, params);
}
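
With the uuid field added alongside the existing keys, a status entry renders roughly as (values illustrative, nested objects elided):

    {
      "snapshot": "snap-1",
      "repository": "my-repo",
      "uuid": "X8pHR-ajTWuEO7j4EXdN5w",
      "state": "SUCCESS",
      "shards_stats": { ... },
      "stats": { ... },
      "indices": { ... }
    }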


@ -24,7 +24,6 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import java.io.IOException;
import java.util.ArrayList;
@ -74,13 +73,9 @@ public class SnapshotsStatusResponse extends ActionResponse implements ToXConten
}
}
static final class Fields {
static final XContentBuilderString SNAPSHOTS = new XContentBuilderString("snapshots");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
builder.startArray("snapshots");
for (SnapshotStatus snapshot : snapshots) {
snapshot.toXContent(builder, params);
}


@ -29,7 +29,7 @@ import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
@ -43,18 +43,20 @@ import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReferenceArray;
import static java.util.Collections.unmodifiableMap;
/**
* Transport action that collects snapshot shard statuses from data nodes
*/
public class TransportNodesSnapshotsStatus extends TransportNodesAction<TransportNodesSnapshotsStatus.Request, TransportNodesSnapshotsStatus.NodesSnapshotStatus, TransportNodesSnapshotsStatus.NodeRequest, TransportNodesSnapshotsStatus.NodeSnapshotStatus> {
public class TransportNodesSnapshotsStatus extends TransportNodesAction<TransportNodesSnapshotsStatus.Request,
TransportNodesSnapshotsStatus.NodesSnapshotStatus,
TransportNodesSnapshotsStatus.NodeRequest,
TransportNodesSnapshotsStatus.NodeSnapshotStatus> {
public static final String ACTION_NAME = SnapshotsStatusAction.NAME + "[nodes]";
@ -66,7 +68,7 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
SnapshotShardsService snapshotShardsService, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ACTION_NAME, clusterName, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver,
Request::new, NodeRequest::new, ThreadPool.Names.GENERIC);
Request::new, NodeRequest::new, ThreadPool.Names.GENERIC, NodeSnapshotStatus.class);
this.snapshotShardsService = snapshotShardsService;
}
@ -86,30 +88,17 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
}
@Override
protected NodesSnapshotStatus newResponse(Request request, AtomicReferenceArray responses) {
final List<NodeSnapshotStatus> nodesList = new ArrayList<>();
final List<FailedNodeException> failures = new ArrayList<>();
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof NodeSnapshotStatus) { // will also filter out null response for unallocated ones
nodesList.add((NodeSnapshotStatus) resp);
} else if (resp instanceof FailedNodeException) {
failures.add((FailedNodeException) resp);
} else {
logger.warn("unknown response type [{}], expected NodeSnapshotStatus or FailedNodeException", resp);
}
}
return new NodesSnapshotStatus(clusterName, nodesList.toArray(new NodeSnapshotStatus[nodesList.size()]),
failures.toArray(new FailedNodeException[failures.size()]));
protected NodesSnapshotStatus newResponse(Request request, List<NodeSnapshotStatus> responses, List<FailedNodeException> failures) {
return new NodesSnapshotStatus(clusterName, responses, failures);
}
@Override
protected NodeSnapshotStatus nodeOperation(NodeRequest request) {
Map<SnapshotId, Map<ShardId, SnapshotIndexShardStatus>> snapshotMapBuilder = new HashMap<>();
Map<Snapshot, Map<ShardId, SnapshotIndexShardStatus>> snapshotMapBuilder = new HashMap<>();
try {
String nodeId = clusterService.localNode().getId();
for (SnapshotId snapshotId : request.snapshotIds) {
Map<ShardId, IndexShardSnapshotStatus> shardsStatus = snapshotShardsService.currentSnapshotShards(snapshotId);
for (Snapshot snapshot : request.snapshots) {
Map<ShardId, IndexShardSnapshotStatus> shardsStatus = snapshotShardsService.currentSnapshotShards(snapshot);
if (shardsStatus == null) {
continue;
}
@ -125,7 +114,7 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
}
shardMapBuilder.put(shardEntry.getKey(), shardStatus);
}
snapshotMapBuilder.put(snapshotId, unmodifiableMap(shardMapBuilder));
snapshotMapBuilder.put(snapshot, unmodifiableMap(shardMapBuilder));
}
return new NodeSnapshotStatus(clusterService.localNode(), unmodifiableMap(snapshotMapBuilder));
} catch (Exception e) {
@ -140,7 +129,7 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
public static class Request extends BaseNodesRequest<Request> {
private SnapshotId[] snapshotIds;
private Snapshot[] snapshots;
public Request() {
}
@ -149,8 +138,8 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
super(nodesIds);
}
public Request snapshotIds(SnapshotId[] snapshotIds) {
this.snapshotIds = snapshotIds;
public Request snapshots(Snapshot[] snapshots) {
this.snapshots = snapshots;
return this;
}
@ -169,91 +158,63 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
public static class NodesSnapshotStatus extends BaseNodesResponse<NodeSnapshotStatus> {
private FailedNodeException[] failures;
NodesSnapshotStatus() {
}
public NodesSnapshotStatus(ClusterName clusterName, NodeSnapshotStatus[] nodes, FailedNodeException[] failures) {
super(clusterName, nodes);
this.failures = failures;
public NodesSnapshotStatus(ClusterName clusterName, List<NodeSnapshotStatus> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public FailedNodeException[] failures() {
return failures;
protected List<NodeSnapshotStatus> readNodesFrom(StreamInput in) throws IOException {
return in.readStreamableList(NodeSnapshotStatus::new);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
nodes = new NodeSnapshotStatus[in.readVInt()];
for (int i = 0; i < nodes.length; i++) {
nodes[i] = new NodeSnapshotStatus();
nodes[i].readFrom(in);
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVInt(nodes.length);
for (NodeSnapshotStatus response : nodes) {
response.writeTo(out);
}
protected void writeNodesTo(StreamOutput out, List<NodeSnapshotStatus> nodes) throws IOException {
out.writeStreamableList(nodes);
}
}
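
readStreamableList/writeStreamableList replace the hand-rolled count-plus-loop wire code, and the two sides stay symmetric by construction:

    out.writeStreamableList(nodes);                                                  // vint count, then each element's writeTo
    List<NodeSnapshotStatus> read = in.readStreamableList(NodeSnapshotStatus::new);  // same count, element-by-element readFrom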
public static class NodeRequest extends BaseNodeRequest {
private SnapshotId[] snapshotIds;
private List<Snapshot> snapshots;
public NodeRequest() {
}
NodeRequest(String nodeId, TransportNodesSnapshotsStatus.Request request) {
super(nodeId);
snapshotIds = request.snapshotIds;
snapshots = Arrays.asList(request.snapshots);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
int n = in.readVInt();
snapshotIds = new SnapshotId[n];
for (int i = 0; i < n; i++) {
snapshotIds[i] = SnapshotId.readSnapshotId(in);
}
snapshots = in.readList(Snapshot::new);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
if (snapshotIds != null) {
out.writeVInt(snapshotIds.length);
for (int i = 0; i < snapshotIds.length; i++) {
snapshotIds[i].writeTo(out);
}
} else {
out.writeVInt(0);
}
out.writeList(snapshots);
}
}
public static class NodeSnapshotStatus extends BaseNodeResponse {
private Map<SnapshotId, Map<ShardId, SnapshotIndexShardStatus>> status;
private Map<Snapshot, Map<ShardId, SnapshotIndexShardStatus>> status;
NodeSnapshotStatus() {
}
public NodeSnapshotStatus(DiscoveryNode node, Map<SnapshotId, Map<ShardId, SnapshotIndexShardStatus>> status) {
public NodeSnapshotStatus(DiscoveryNode node, Map<Snapshot, Map<ShardId, SnapshotIndexShardStatus>> status) {
super(node);
this.status = status;
}
public Map<SnapshotId, Map<ShardId, SnapshotIndexShardStatus>> status() {
public Map<Snapshot, Map<ShardId, SnapshotIndexShardStatus>> status() {
return status;
}
@ -261,9 +222,9 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
int numberOfSnapshots = in.readVInt();
Map<SnapshotId, Map<ShardId, SnapshotIndexShardStatus>> snapshotMapBuilder = new HashMap<>(numberOfSnapshots);
Map<Snapshot, Map<ShardId, SnapshotIndexShardStatus>> snapshotMapBuilder = new HashMap<>(numberOfSnapshots);
for (int i = 0; i < numberOfSnapshots; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
Snapshot snapshot = new Snapshot(in);
int numberOfShards = in.readVInt();
Map<ShardId, SnapshotIndexShardStatus> shardMapBuilder = new HashMap<>(numberOfShards);
for (int j = 0; j < numberOfShards; j++) {
@ -271,7 +232,7 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
SnapshotIndexShardStatus status = SnapshotIndexShardStatus.readShardSnapshotStatus(in);
shardMapBuilder.put(shardId, status);
}
snapshotMapBuilder.put(snapshotId, unmodifiableMap(shardMapBuilder));
snapshotMapBuilder.put(snapshot, unmodifiableMap(shardMapBuilder));
}
status = unmodifiableMap(snapshotMapBuilder);
}
@ -281,7 +242,7 @@ public class TransportNodesSnapshotsStatus extends TransportNodesAction<Transpor
super.writeTo(out);
if (status != null) {
out.writeVInt(status.size());
for (Map.Entry<SnapshotId, Map<ShardId, SnapshotIndexShardStatus>> entry : status.entrySet()) {
for (Map.Entry<Snapshot, Map<ShardId, SnapshotIndexShardStatus>> entry : status.entrySet()) {
entry.getKey().writeTo(out);
out.writeVInt(entry.getValue().size());
for (Map.Entry<ShardId, SnapshotIndexShardStatus> shardEntry : entry.getValue().entrySet()) {


@ -29,26 +29,32 @@ import org.elasticsearch.cluster.SnapshotsInProgress;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.SnapshotId;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.index.snapshots.IndexShardSnapshotStatus;
import org.elasticsearch.snapshots.Snapshot;
import org.elasticsearch.snapshots.SnapshotId;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotMissingException;
import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
*/
@ -87,8 +93,8 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
protected void masterOperation(final SnapshotsStatusRequest request,
final ClusterState state,
final ActionListener<SnapshotsStatusResponse> listener) throws Exception {
List<SnapshotsInProgress.Entry> currentSnapshots = snapshotsService.currentSnapshots(request.repository(), request.snapshots());
List<SnapshotsInProgress.Entry> currentSnapshots =
snapshotsService.currentSnapshots(request.repository(), Arrays.asList(request.snapshots()));
if (currentSnapshots.isEmpty()) {
listener.onResponse(buildResponse(request, currentSnapshots, null));
return;
@ -105,19 +111,19 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
if (!nodesIds.isEmpty()) {
// There are still some snapshots running - check their progress
SnapshotId[] snapshotIds = new SnapshotId[currentSnapshots.size()];
Snapshot[] snapshots = new Snapshot[currentSnapshots.size()];
for (int i = 0; i < currentSnapshots.size(); i++) {
snapshotIds[i] = currentSnapshots.get(i).snapshotId();
snapshots[i] = currentSnapshots.get(i).snapshot();
}
TransportNodesSnapshotsStatus.Request nodesRequest = new TransportNodesSnapshotsStatus.Request(nodesIds.toArray(new String[nodesIds.size()]))
.snapshotIds(snapshotIds).timeout(request.masterNodeTimeout());
.snapshots(snapshots).timeout(request.masterNodeTimeout());
transportNodesSnapshotsStatus.execute(nodesRequest, new ActionListener<TransportNodesSnapshotsStatus.NodesSnapshotStatus>() {
@Override
public void onResponse(TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) {
try {
List<SnapshotsInProgress.Entry> currentSnapshots =
snapshotsService.currentSnapshots(request.repository(), request.snapshots());
snapshotsService.currentSnapshots(request.repository(), Arrays.asList(request.snapshots()));
listener.onResponse(buildResponse(request, currentSnapshots, nodeSnapshotStatuses));
} catch (Throwable e) {
listener.onFailure(e);
@ -136,12 +142,12 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
}
private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, List<SnapshotsInProgress.Entry> currentSnapshots,
private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, List<SnapshotsInProgress.Entry> currentSnapshotEntries,
TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) throws IOException {
// First process snapshots that are currently in progress
List<SnapshotStatus> builder = new ArrayList<>();
Set<SnapshotId> currentSnapshotIds = new HashSet<>();
if (!currentSnapshots.isEmpty()) {
Set<String> currentSnapshotNames = new HashSet<>();
if (!currentSnapshotEntries.isEmpty()) {
Map<String, TransportNodesSnapshotsStatus.NodeSnapshotStatus> nodeSnapshotStatusMap;
if (nodeSnapshotStatuses != null) {
nodeSnapshotStatusMap = nodeSnapshotStatuses.getNodesMap();
@ -149,8 +155,8 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
nodeSnapshotStatusMap = new HashMap<>();
}
for (SnapshotsInProgress.Entry entry : currentSnapshots) {
currentSnapshotIds.add(entry.snapshotId());
for (SnapshotsInProgress.Entry entry : currentSnapshotEntries) {
currentSnapshotNames.add(entry.snapshot().getSnapshotId().getName());
List<SnapshotIndexShardStatus> shardStatusBuilder = new ArrayList<>();
for (ObjectObjectCursor<ShardId, SnapshotsInProgress.ShardSnapshotStatus> shardEntry : entry.shards()) {
SnapshotsInProgress.ShardSnapshotStatus status = shardEntry.value;
@ -158,7 +164,7 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
// We should have information about this shard from the shard:
TransportNodesSnapshotsStatus.NodeSnapshotStatus nodeStatus = nodeSnapshotStatusMap.get(status.nodeId());
if (nodeStatus != null) {
Map<ShardId, SnapshotIndexShardStatus> shardStatues = nodeStatus.status().get(entry.snapshotId());
Map<ShardId, SnapshotIndexShardStatus> shardStatues = nodeStatus.status().get(entry.snapshot());
if (shardStatues != null) {
SnapshotIndexShardStatus shardStatus = shardStatues.get(shardEntry.key);
if (shardStatus != null) {
@ -190,41 +196,50 @@ public class TransportSnapshotsStatusAction extends TransportMasterNodeAction<Sn
SnapshotIndexShardStatus shardStatus = new SnapshotIndexShardStatus(shardEntry.key, stage);
shardStatusBuilder.add(shardStatus);
}
builder.add(new SnapshotStatus(entry.snapshotId(), entry.state(), Collections.unmodifiableList(shardStatusBuilder)));
builder.add(new SnapshotStatus(entry.snapshot(), entry.state(), Collections.unmodifiableList(shardStatusBuilder)));
}
}
// Now add snapshots on disk that are not currently running
if (Strings.hasText(request.repository())) {
if (request.snapshots() != null && request.snapshots().length > 0) {
for (String snapshotName : request.snapshots()) {
SnapshotId snapshotId = new SnapshotId(request.repository(), snapshotName);
if (currentSnapshotIds.contains(snapshotId)) {
// This is a snapshot that is currently running - skipping
final String repositoryName = request.repository();
if (Strings.hasText(repositoryName) && request.snapshots() != null && request.snapshots().length > 0) {
final Set<String> requestedSnapshotNames = Sets.newHashSet(request.snapshots());
final Map<String, SnapshotId> matchedSnapshotIds = snapshotsService.snapshotIds(repositoryName).stream()
.filter(s -> requestedSnapshotNames.contains(s.getName()))
.collect(Collectors.toMap(SnapshotId::getName, Function.identity()));
for (final String snapshotName : request.snapshots()) {
SnapshotId snapshotId = matchedSnapshotIds.get(snapshotName);
if (snapshotId == null) {
if (currentSnapshotNames.contains(snapshotName)) {
// we've already found this snapshot in the current snapshot entries, so skip over
continue;
} else {
// neither in the current snapshot entries nor found in the repository
throw new SnapshotMissingException(repositoryName, snapshotName);
}
Snapshot snapshot = snapshotsService.snapshot(snapshotId);
List<SnapshotIndexShardStatus> shardStatusBuilder = new ArrayList<>();
if (snapshot.state().completed()) {
Map<ShardId, IndexShardSnapshotStatus> shardStatues = snapshotsService.snapshotShards(snapshotId);
for (Map.Entry<ShardId, IndexShardSnapshotStatus> shardStatus : shardStatues.entrySet()) {
shardStatusBuilder.add(new SnapshotIndexShardStatus(shardStatus.getKey(), shardStatus.getValue()));
}
final SnapshotsInProgress.State state;
switch (snapshot.state()) {
case FAILED:
state = SnapshotsInProgress.State.FAILED;
break;
case SUCCESS:
case PARTIAL:
// Translating both PARTIAL and SUCCESS to SUCCESS for now
// TODO: add the differentiation on the metadata level in the next major release
state = SnapshotsInProgress.State.SUCCESS;
break;
default:
throw new IllegalArgumentException("Unknown snapshot state " + snapshot.state());
}
builder.add(new SnapshotStatus(snapshotId, state, Collections.unmodifiableList(shardStatusBuilder)));
}
SnapshotInfo snapshotInfo = snapshotsService.snapshot(repositoryName, snapshotId);
List<SnapshotIndexShardStatus> shardStatusBuilder = new ArrayList<>();
if (snapshotInfo.state().completed()) {
Map<ShardId, IndexShardSnapshotStatus> shardStatues =
snapshotsService.snapshotShards(request.repository(), snapshotInfo);
for (Map.Entry<ShardId, IndexShardSnapshotStatus> shardStatus : shardStatues.entrySet()) {
shardStatusBuilder.add(new SnapshotIndexShardStatus(shardStatus.getKey(), shardStatus.getValue()));
}
final SnapshotsInProgress.State state;
switch (snapshotInfo.state()) {
case FAILED:
state = SnapshotsInProgress.State.FAILED;
break;
case SUCCESS:
case PARTIAL:
// Translating both PARTIAL and SUCCESS to SUCCESS for now
// TODO: add the differentiation on the metadata level in the next major release
state = SnapshotsInProgress.State.SUCCESS;
break;
default:
throw new IllegalArgumentException("Unknown snapshot state " + snapshotInfo.state());
}
builder.add(new SnapshotStatus(new Snapshot(repositoryName, snapshotInfo.snapshotId()), state, Collections.unmodifiableList(shardStatusBuilder)));
}
}
}
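
Editor's note: the reworked branch above first indexes the repository's snapshots by name and then translates completed snapshot states onto the in-progress state enum. Below is a minimal, self-contained Java 8 sketch of both steps; SnapshotId, RepoState and ProgressState are stand-ins for illustration, not the Elasticsearch classes.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class SnapshotStatusLookupSketch {

    // Stand-in for org.elasticsearch.snapshots.SnapshotId.
    static final class SnapshotId {
        final String name;
        SnapshotId(String name) { this.name = name; }
        String getName() { return name; }
        public String toString() { return "SnapshotId[" + name + "]"; }
    }

    // Stand-ins for the completed-state translation in the switch above.
    enum RepoState { FAILED, SUCCESS, PARTIAL }
    enum ProgressState { FAILED, SUCCESS }

    static ProgressState translate(RepoState state) {
        switch (state) {
            case FAILED:
                return ProgressState.FAILED;
            case SUCCESS:
            case PARTIAL:
                // PARTIAL collapses onto SUCCESS for now, as in the diff's TODO.
                return ProgressState.SUCCESS;
            default:
                throw new IllegalArgumentException("Unknown snapshot state " + state);
        }
    }

    public static void main(String[] args) {
        List<SnapshotId> inRepository = Arrays.asList(
                new SnapshotId("snap_1"), new SnapshotId("snap_2"));
        Set<String> requested = new HashSet<>(Arrays.asList("snap_2", "snap_3"));

        // Same shape as the diff: keep only requested names, then index by name.
        Map<String, SnapshotId> matched = inRepository.stream()
                .filter(s -> requested.contains(s.getName()))
                .collect(Collectors.toMap(SnapshotId::getName, Function.identity()));

        for (String name : requested) {
            SnapshotId id = matched.get(name);
            // A null here means "not running and not in the repository",
            // which the real code reports as SnapshotMissingException.
            System.out.println(name + " -> " + id);
        }
        System.out.println(translate(RepoState.PARTIAL)); // SUCCESS
    }
}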


@ -47,7 +47,7 @@ public class TransportClusterStateAction extends TransportMasterNodeReadAction<C
@Inject
public TransportClusterStateAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
ClusterName clusterName, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ClusterStateAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterStateRequest::new);
super(settings, ClusterStateAction.NAME, false, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver, ClusterStateRequest::new);
this.clusterName = clusterName;
}


@ -22,23 +22,19 @@ package org.elasticsearch.action.admin.cluster.stats;
import com.carrotsearch.hppc.ObjectObjectHashMap;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.action.admin.indices.stats.CommonStats;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.index.cache.query.QueryCacheStats;
import org.elasticsearch.index.engine.SegmentsStats;
import org.elasticsearch.index.fielddata.FieldDataStats;
import org.elasticsearch.index.percolator.PercolatorQueryCacheStats;
import org.elasticsearch.index.shard.DocsStats;
import org.elasticsearch.index.store.StoreStats;
import org.elasticsearch.search.suggest.completion.CompletionStats;
import java.io.IOException;
import java.util.List;
public class ClusterStatsIndices implements ToXContent, Streamable {
public class ClusterStatsIndices implements ToXContent {
private int indexCount;
private ShardStats shards;
@ -48,12 +44,8 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
private QueryCacheStats queryCache;
private CompletionStats completion;
private SegmentsStats segments;
private PercolatorQueryCacheStats percolatorCache;
private ClusterStatsIndices() {
}
public ClusterStatsIndices(ClusterStatsNodeResponse[] nodeResponses) {
public ClusterStatsIndices(List<ClusterStatsNodeResponse> nodeResponses) {
ObjectObjectHashMap<String, ShardStats> countsPerIndex = new ObjectObjectHashMap<>();
this.docs = new DocsStats();
@ -62,7 +54,6 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
this.queryCache = new QueryCacheStats();
this.completion = new CompletionStats();
this.segments = new SegmentsStats();
this.percolatorCache = new PercolatorQueryCacheStats();
for (ClusterStatsNodeResponse r : nodeResponses) {
for (org.elasticsearch.action.admin.indices.stats.ShardStats shardStats : r.shardsStats()) {
@ -85,7 +76,6 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
queryCache.add(shardCommonStats.queryCache);
completion.add(shardCommonStats.completion);
segments.add(shardCommonStats.segments);
percolatorCache.add(shardCommonStats.percolatorCache);
}
}
@ -128,44 +118,8 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
return segments;
}
public PercolatorQueryCacheStats getPercolatorCache() {
return percolatorCache;
}
@Override
public void readFrom(StreamInput in) throws IOException {
indexCount = in.readVInt();
shards = ShardStats.readShardStats(in);
docs = DocsStats.readDocStats(in);
store = StoreStats.readStoreStats(in);
fieldData = FieldDataStats.readFieldDataStats(in);
queryCache = QueryCacheStats.readQueryCacheStats(in);
completion = CompletionStats.readCompletionStats(in);
segments = SegmentsStats.readSegmentsStats(in);
percolatorCache = PercolatorQueryCacheStats.readPercolateStats(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(indexCount);
shards.writeTo(out);
docs.writeTo(out);
store.writeTo(out);
fieldData.writeTo(out);
queryCache.writeTo(out);
completion.writeTo(out);
segments.writeTo(out);
percolatorCache.writeTo(out);
}
public static ClusterStatsIndices readIndicesStats(StreamInput in) throws IOException {
ClusterStatsIndices indicesStats = new ClusterStatsIndices();
indicesStats.readFrom(in);
return indicesStats;
}
static final class Fields {
static final XContentBuilderString COUNT = new XContentBuilderString("count");
static final String COUNT = "count";
}
@Override
@ -178,11 +132,10 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
queryCache.toXContent(builder, params);
completion.toXContent(builder, params);
segments.toXContent(builder, params);
percolatorCache.toXContent(builder, params);
return builder;
}
public static class ShardStats implements ToXContent, Streamable {
public static class ShardStats implements ToXContent {
int indices;
int total;
@ -327,52 +280,18 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
}
}
public static ShardStats readShardStats(StreamInput in) throws IOException {
ShardStats c = new ShardStats();
c.readFrom(in);
return c;
}
@Override
public void readFrom(StreamInput in) throws IOException {
indices = in.readVInt();
total = in.readVInt();
primaries = in.readVInt();
minIndexShards = in.readVInt();
maxIndexShards = in.readVInt();
minIndexPrimaryShards = in.readVInt();
maxIndexPrimaryShards = in.readVInt();
minIndexReplication = in.readDouble();
totalIndexReplication = in.readDouble();
maxIndexReplication = in.readDouble();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(indices);
out.writeVInt(total);
out.writeVInt(primaries);
out.writeVInt(minIndexShards);
out.writeVInt(maxIndexShards);
out.writeVInt(minIndexPrimaryShards);
out.writeVInt(maxIndexPrimaryShards);
out.writeDouble(minIndexReplication);
out.writeDouble(totalIndexReplication);
out.writeDouble(maxIndexReplication);
}
static final class Fields {
static final XContentBuilderString SHARDS = new XContentBuilderString("shards");
static final XContentBuilderString TOTAL = new XContentBuilderString("total");
static final XContentBuilderString PRIMARIES = new XContentBuilderString("primaries");
static final XContentBuilderString REPLICATION = new XContentBuilderString("replication");
static final XContentBuilderString MIN = new XContentBuilderString("min");
static final XContentBuilderString MAX = new XContentBuilderString("max");
static final XContentBuilderString AVG = new XContentBuilderString("avg");
static final XContentBuilderString INDEX = new XContentBuilderString("index");
static final String SHARDS = "shards";
static final String TOTAL = "total";
static final String PRIMARIES = "primaries";
static final String REPLICATION = "replication";
static final String MIN = "min";
static final String MAX = "max";
static final String AVG = "avg";
static final String INDEX = "index";
}
private void addIntMinMax(XContentBuilderString field, int min, int max, double avg, XContentBuilder builder) throws IOException {
private void addIntMinMax(String field, int min, int max, double avg, XContentBuilder builder) throws IOException {
builder.startObject(field);
builder.field(Fields.MIN, min);
builder.field(Fields.MAX, max);
@ -380,7 +299,7 @@ public class ClusterStatsIndices implements ToXContent, Streamable {
builder.endObject();
}
private void addDoubleMinMax(XContentBuilderString field, double min, double max, double avg, XContentBuilder builder) throws IOException {
private void addDoubleMinMax(String field, double min, double max, double avg, XContentBuilder builder) throws IOException {
builder.startObject(field);
builder.field(Fields.MIN, min);
builder.field(Fields.MAX, max);
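
Editor's note: the Fields constants in this file drop the XContentBuilderString wrapper for plain String names, and the min/max/avg helpers now take a plain String field. A rough JDK-only sketch of the same aggregate-then-emit idea follows; the JSON assembly is deliberately simplified and is not the XContentBuilder API.

import java.util.Arrays;
import java.util.List;

public class ShardStatsAggregationSketch {

    // Plain String field names, matching the post-change Fields style.
    static final String MIN = "min";
    static final String MAX = "max";
    static final String AVG = "avg";

    // Compute min/max/avg over per-index shard counts and emit a small
    // JSON-ish object, standing in for addIntMinMax(...).
    // Assumes a non-empty input list.
    static String intMinMax(String field, List<Integer> perIndexShards) {
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE, total = 0;
        for (int shards : perIndexShards) {
            min = Math.min(min, shards);
            max = Math.max(max, shards);
            total += shards;
        }
        double avg = total / (double) perIndexShards.size();
        return String.format("\"%s\":{\"%s\":%d,\"%s\":%d,\"%s\":%.1f}",
                field, MIN, min, MAX, max, AVG, avg);
    }

    public static void main(String[] args) {
        System.out.println(intMinMax("shards", Arrays.asList(1, 5, 3)));
        // "shards":{"min":1,"max":5,"avg":3.0}
    }
}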


@ -21,21 +21,17 @@ package org.elasticsearch.action.admin.cluster.stats;
import com.carrotsearch.hppc.ObjectIntHashMap;
import com.carrotsearch.hppc.cursors.ObjectIntCursor;
import org.elasticsearch.Version;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.monitor.fs.FsInfo;
import org.elasticsearch.monitor.jvm.JvmInfo;
import org.elasticsearch.plugins.PluginInfo;
@ -49,7 +45,7 @@ import java.util.List;
import java.util.Map;
import java.util.Set;
public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNodes> {
public class ClusterStatsNodes implements ToXContent {
private final Counts counts;
private final Set<Version> versions;
@ -59,33 +55,12 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
private final FsInfo.Path fs;
private final Set<PluginInfo> plugins;
ClusterStatsNodes(StreamInput in) throws IOException {
this.counts = new Counts(in);
int size = in.readVInt();
this.versions = new HashSet<>(size);
for (int i = 0; i < size; i++) {
this.versions.add(Version.readVersion(in));
}
this.os = new OsStats(in);
this.process = new ProcessStats(in);
this.jvm = new JvmStats(in);
this.fs = FsInfo.Path.readInfoFrom(in);
size = in.readVInt();
this.plugins = new HashSet<>(size);
for (int i = 0; i < size; i++) {
this.plugins.add(PluginInfo.readFromStream(in));
}
}
ClusterStatsNodes(ClusterStatsNodeResponse[] nodeResponses) {
ClusterStatsNodes(List<ClusterStatsNodeResponse> nodeResponses) {
this.versions = new HashSet<>();
this.fs = new FsInfo.Path();
this.plugins = new HashSet<>();
Set<InetAddress> seenAddresses = new HashSet<>(nodeResponses.length);
Set<InetAddress> seenAddresses = new HashSet<>(nodeResponses.size());
List<NodeInfo> nodeInfos = new ArrayList<>();
List<NodeStats> nodeStats = new ArrayList<>();
for (ClusterStatsNodeResponse nodeResponse : nodeResponses) {
@ -141,35 +116,14 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
return plugins;
}
@Override
public ClusterStatsNodes readFrom(StreamInput in) throws IOException {
return new ClusterStatsNodes(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
counts.writeTo(out);
out.writeVInt(versions.size());
for (Version v : versions) Version.writeVersion(v, out);
os.writeTo(out);
process.writeTo(out);
jvm.writeTo(out);
fs.writeTo(out);
out.writeVInt(plugins.size());
for (PluginInfo p : plugins) {
p.writeTo(out);
}
}
static final class Fields {
static final XContentBuilderString COUNT = new XContentBuilderString("count");
static final XContentBuilderString VERSIONS = new XContentBuilderString("versions");
static final XContentBuilderString OS = new XContentBuilderString("os");
static final XContentBuilderString PROCESS = new XContentBuilderString("process");
static final XContentBuilderString JVM = new XContentBuilderString("jvm");
static final XContentBuilderString FS = new XContentBuilderString("fs");
static final XContentBuilderString PLUGINS = new XContentBuilderString("plugins");
static final String COUNT = "count";
static final String VERSIONS = "versions";
static final String OS = "os";
static final String PROCESS = "process";
static final String JVM = "jvm";
static final String FS = "fs";
static final String PLUGINS = "plugins";
}
@Override
@ -207,18 +161,12 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
return builder;
}
public static class Counts implements Writeable<Counts>, ToXContent {
public static class Counts implements ToXContent {
static final String COORDINATING_ONLY = "coordinating_only";
private final int total;
private final Map<String, Integer> roles;
@SuppressWarnings("unchecked")
private Counts(StreamInput in) throws IOException {
this.total = in.readVInt();
this.roles = (Map<String, Integer>)in.readGenericValue();
}
private Counts(List<NodeInfo> nodeInfos) {
this.roles = new HashMap<>();
for (DiscoveryNode.Role role : DiscoveryNode.Role.values()) {
@ -250,19 +198,8 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
return roles;
}
@Override
public Counts readFrom(StreamInput in) throws IOException {
return new Counts(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(total);
out.writeGenericValue(roles);
}
static final class Fields {
static final XContentBuilderString TOTAL = new XContentBuilderString("total");
static final String TOTAL = "total";
}
@Override
@ -275,22 +212,14 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
}
}
public static class OsStats implements ToXContent, Writeable<OsStats> {
public static class OsStats implements ToXContent {
final int availableProcessors;
final int allocatedProcessors;
final ObjectIntHashMap<String> names;
@SuppressWarnings("unchecked")
private OsStats(StreamInput in) throws IOException {
this.availableProcessors = in.readVInt();
this.allocatedProcessors = in.readVInt();
int size = in.readVInt();
this.names = new ObjectIntHashMap<>();
for (int i = 0; i < size; i++) {
names.addTo(in.readString(), in.readVInt());
}
}
/**
* Build the stats from information about each node.
*/
private OsStats(List<NodeInfo> nodeInfos) {
this.names = new ObjectIntHashMap<>();
int availableProcessors = 0;
@ -315,28 +244,12 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
return allocatedProcessors;
}
@Override
public OsStats readFrom(StreamInput in) throws IOException {
return new OsStats(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(availableProcessors);
out.writeVInt(allocatedProcessors);
out.writeVInt(names.size());
for (ObjectIntCursor<String> name : names) {
out.writeString(name.key);
out.writeVInt(name.value);
}
}
static final class Fields {
static final XContentBuilderString AVAILABLE_PROCESSORS = new XContentBuilderString("available_processors");
static final XContentBuilderString ALLOCATED_PROCESSORS = new XContentBuilderString("allocated_processors");
static final XContentBuilderString NAME = new XContentBuilderString("name");
static final XContentBuilderString NAMES = new XContentBuilderString("names");
static final XContentBuilderString COUNT = new XContentBuilderString("count");
static final String AVAILABLE_PROCESSORS = "available_processors";
static final String ALLOCATED_PROCESSORS = "allocated_processors";
static final String NAME = "name";
static final String NAMES = "names";
static final String COUNT = "count";
}
@Override
@ -355,7 +268,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
}
}
public static class ProcessStats implements ToXContent, Writeable<ProcessStats> {
public static class ProcessStats implements ToXContent {
final int count;
final int cpuPercent;
@ -363,14 +276,9 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
final long minOpenFileDescriptors;
final long maxOpenFileDescriptors;
private ProcessStats(StreamInput in) throws IOException {
this.count = in.readVInt();
this.cpuPercent = in.readVInt();
this.totalOpenFileDescriptors = in.readVLong();
this.minOpenFileDescriptors = in.readLong();
this.maxOpenFileDescriptors = in.readLong();
}
/**
* Build from looking at a list of node statistics.
*/
private ProcessStats(List<NodeStats> nodeStatsList) {
int count = 0;
int cpuPercent = 0;
@ -429,27 +337,13 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
return minOpenFileDescriptors;
}
@Override
public ProcessStats readFrom(StreamInput in) throws IOException {
return new ProcessStats(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(count);
out.writeVInt(cpuPercent);
out.writeVLong(totalOpenFileDescriptors);
out.writeLong(minOpenFileDescriptors);
out.writeLong(maxOpenFileDescriptors);
}
static final class Fields {
static final XContentBuilderString CPU = new XContentBuilderString("cpu");
static final XContentBuilderString PERCENT = new XContentBuilderString("percent");
static final XContentBuilderString OPEN_FILE_DESCRIPTORS = new XContentBuilderString("open_file_descriptors");
static final XContentBuilderString MIN = new XContentBuilderString("min");
static final XContentBuilderString MAX = new XContentBuilderString("max");
static final XContentBuilderString AVG = new XContentBuilderString("avg");
static final String CPU = "cpu";
static final String PERCENT = "percent";
static final String OPEN_FILE_DESCRIPTORS = "open_file_descriptors";
static final String MIN = "min";
static final String MAX = "max";
static final String AVG = "avg";
}
@Override
@ -466,7 +360,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
}
}
public static class JvmStats implements Writeable<JvmStats>, ToXContent {
public static class JvmStats implements ToXContent {
private final ObjectIntHashMap<JvmVersion> versions;
private final long threads;
@ -474,18 +368,9 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
private final long heapUsed;
private final long heapMax;
private JvmStats(StreamInput in) throws IOException {
int size = in.readVInt();
this.versions = new ObjectIntHashMap<>(size);
for (int i = 0; i < size; i++) {
this.versions.addTo(JvmVersion.readJvmVersion(in), in.readVInt());
}
this.threads = in.readVLong();
this.maxUptime = in.readVLong();
this.heapUsed = in.readVLong();
this.heapMax = in.readVLong();
}
/**
* Build from lists of information about each node.
*/
private JvmStats(List<NodeInfo> nodeInfos, List<NodeStats> nodeStatsList) {
this.versions = new ObjectIntHashMap<>();
long threads = 0;
@ -548,39 +433,21 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
return new ByteSizeValue(heapMax);
}
@Override
public JvmStats readFrom(StreamInput in) throws IOException {
return new JvmStats(in);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(versions.size());
for (ObjectIntCursor<JvmVersion> v : versions) {
v.key.writeTo(out);
out.writeVInt(v.value);
}
out.writeVLong(threads);
out.writeVLong(maxUptime);
out.writeVLong(heapUsed);
out.writeVLong(heapMax);
}
static final class Fields {
static final XContentBuilderString VERSIONS = new XContentBuilderString("versions");
static final XContentBuilderString VERSION = new XContentBuilderString("version");
static final XContentBuilderString VM_NAME = new XContentBuilderString("vm_name");
static final XContentBuilderString VM_VERSION = new XContentBuilderString("vm_version");
static final XContentBuilderString VM_VENDOR = new XContentBuilderString("vm_vendor");
static final XContentBuilderString COUNT = new XContentBuilderString("count");
static final XContentBuilderString THREADS = new XContentBuilderString("threads");
static final XContentBuilderString MAX_UPTIME = new XContentBuilderString("max_uptime");
static final XContentBuilderString MAX_UPTIME_IN_MILLIS = new XContentBuilderString("max_uptime_in_millis");
static final XContentBuilderString MEM = new XContentBuilderString("mem");
static final XContentBuilderString HEAP_USED = new XContentBuilderString("heap_used");
static final XContentBuilderString HEAP_USED_IN_BYTES = new XContentBuilderString("heap_used_in_bytes");
static final XContentBuilderString HEAP_MAX = new XContentBuilderString("heap_max");
static final XContentBuilderString HEAP_MAX_IN_BYTES = new XContentBuilderString("heap_max_in_bytes");
static final String VERSIONS = "versions";
static final String VERSION = "version";
static final String VM_NAME = "vm_name";
static final String VM_VERSION = "vm_version";
static final String VM_VENDOR = "vm_vendor";
static final String COUNT = "count";
static final String THREADS = "threads";
static final String MAX_UPTIME = "max_uptime";
static final String MAX_UPTIME_IN_MILLIS = "max_uptime_in_millis";
static final String MEM = "mem";
static final String HEAP_USED = "heap_used";
static final String HEAP_USED_IN_BYTES = "heap_used_in_bytes";
static final String HEAP_MAX = "heap_max";
static final String HEAP_MAX_IN_BYTES = "heap_max_in_bytes";
}
@Override
@ -607,7 +474,7 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
}
}
public static class JvmVersion implements Streamable {
public static class JvmVersion {
String version;
String vmName;
String vmVersion;
@ -620,9 +487,6 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
vmVendor = jvmInfo.getVmVendor();
}
JvmVersion() {
}
@Override
public boolean equals(Object o) {
if (this == o) {
@ -641,27 +505,5 @@ public class ClusterStatsNodes implements ToXContent, Writeable<ClusterStatsNode
public int hashCode() {
return vmVersion.hashCode();
}
public static JvmVersion readJvmVersion(StreamInput in) throws IOException {
JvmVersion jvm = new JvmVersion();
jvm.readFrom(in);
return jvm;
}
@Override
public void readFrom(StreamInput in) throws IOException {
version = in.readString();
vmName = in.readString();
vmVersion = in.readString();
vmVendor = in.readString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(version);
out.writeString(vmName);
out.writeString(vmVersion);
out.writeString(vmVendor);
}
}
}
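
Editor's note: throughout this file the pattern is the same — the Streamable/Writeable plumbing disappears and each nested stats class is computed from the node lists instead of being deserialized. As one example, here is a JDK-only sketch of the role counting that Counts performs; NodeInfo is a stand-in, and the "no roles means coordinating-only" reading is inferred from the COORDINATING_ONLY bucket above.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NodeRoleCountsSketch {

    // Stand-in for a node's role list (DiscoveryNode.Role in the real code).
    static final class NodeInfo {
        final List<String> roles;
        NodeInfo(String... roles) { this.roles = Arrays.asList(roles); }
    }

    static Map<String, Integer> countRoles(List<NodeInfo> nodes) {
        Map<String, Integer> counts = new HashMap<>();
        for (NodeInfo node : nodes) {
            if (node.roles.isEmpty()) {
                // Nodes with no role fall into the coordinating-only bucket.
                counts.merge("coordinating_only", 1, Integer::sum);
            }
            for (String role : node.roles) {
                counts.merge(role, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<NodeInfo> nodes = Arrays.asList(
                new NodeInfo("master", "data"), new NodeInfo("data"), new NodeInfo());
        // Counts: master=1, data=2, coordinating_only=1 (map order may vary).
        System.out.println(countRoles(nodes));
    }
}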


@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
@ -26,13 +27,11 @@ import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;
import org.elasticsearch.common.xcontent.XContentFactory;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import java.util.Locale;
import java.util.Map;
/**
*
@ -49,8 +48,9 @@ public class ClusterStatsResponse extends BaseNodesResponse<ClusterStatsNodeResp
ClusterStatsResponse() {
}
public ClusterStatsResponse(long timestamp, ClusterName clusterName, String clusterUUID, ClusterStatsNodeResponse[] nodes) {
super(clusterName, null);
public ClusterStatsResponse(long timestamp, ClusterName clusterName, String clusterUUID,
List<ClusterStatsNodeResponse> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
this.timestamp = timestamp;
this.clusterUUID = clusterUUID;
nodesStats = new ClusterStatsNodes(nodes);
@ -80,77 +80,53 @@ public class ClusterStatsResponse extends BaseNodesResponse<ClusterStatsNodeResp
return indicesStats;
}
@Override
public ClusterStatsNodeResponse[] getNodes() {
throw new UnsupportedOperationException();
}
@Override
public Map<String, ClusterStatsNodeResponse> getNodesMap() {
throw new UnsupportedOperationException();
}
@Override
public ClusterStatsNodeResponse getAt(int position) {
throw new UnsupportedOperationException();
}
@Override
public Iterator<ClusterStatsNodeResponse> iterator() {
throw new UnsupportedOperationException();
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
timestamp = in.readVLong();
status = null;
if (in.readBoolean()) {
// it may be that the master switched on us while doing the operation. In this case the status may be null.
status = ClusterHealthStatus.fromValue(in.readByte());
}
clusterUUID = in.readString();
nodesStats = new ClusterStatsNodes(in);
indicesStats = ClusterStatsIndices.readIndicesStats(in);
// it may be that the master switched on us while doing the operation. In this case the status may be null.
status = in.readOptionalWriteable(ClusterHealthStatus::readFrom);
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeVLong(timestamp);
if (status == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
out.writeByte(status.value());
}
out.writeString(clusterUUID);
nodesStats.writeTo(out);
indicesStats.writeTo(out);
out.writeOptionalWriteable(status);
}
static final class Fields {
static final XContentBuilderString NODES = new XContentBuilderString("nodes");
static final XContentBuilderString INDICES = new XContentBuilderString("indices");
static final XContentBuilderString UUID = new XContentBuilderString("uuid");
static final XContentBuilderString CLUSTER_NAME = new XContentBuilderString("cluster_name");
static final XContentBuilderString STATUS = new XContentBuilderString("status");
@Override
protected List<ClusterStatsNodeResponse> readNodesFrom(StreamInput in) throws IOException {
List<ClusterStatsNodeResponse> nodes = in.readList(ClusterStatsNodeResponse::readNodeResponse);
// built from nodes rather than from the stream directly
nodesStats = new ClusterStatsNodes(nodes);
indicesStats = new ClusterStatsIndices(nodes);
return nodes;
}
@Override
protected void writeNodesTo(StreamOutput out, List<ClusterStatsNodeResponse> nodes) throws IOException {
// nodeStats and indicesStats are rebuilt from nodes
out.writeStreamableList(nodes);
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.field("timestamp", getTimestamp());
builder.field(Fields.CLUSTER_NAME, getClusterName().value());
if (params.paramAsBoolean("output_uuid", false)) {
builder.field(Fields.UUID, clusterUUID);
builder.field("uuid", clusterUUID);
}
if (status != null) {
builder.field(Fields.STATUS, status.name().toLowerCase(Locale.ROOT));
builder.field("status", status.name().toLowerCase(Locale.ROOT));
}
builder.startObject(Fields.INDICES);
builder.startObject("indices");
indicesStats.toXContent(builder, params);
builder.endObject();
builder.startObject(Fields.NODES);
builder.startObject("nodes");
nodesStats.toXContent(builder, params);
builder.endObject();
return builder;
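
Editor's note: ClusterStatsResponse no longer streams nodesStats and indicesStats; readNodesFrom rebuilds them from the per-node entries it has just read. A minimal sketch of that "serialize raw, recompute derived" idea in plain Java follows, with DataOutputStream/DataInputStream standing in for StreamOutput/StreamInput.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class RebuildOnReadSketch {

    static class Response {
        List<Integer> perNodeShardCounts = new ArrayList<>();
        int totalShards; // derived, never written to the wire

        void writeTo(DataOutputStream out) throws IOException {
            out.writeInt(perNodeShardCounts.size());
            for (int c : perNodeShardCounts) {
                out.writeInt(c); // only the raw per-node entries go out
            }
        }

        void readFrom(DataInputStream in) throws IOException {
            int size = in.readInt();
            perNodeShardCounts = new ArrayList<>(size);
            totalShards = 0;
            for (int i = 0; i < size; i++) {
                int c = in.readInt();
                perNodeShardCounts.add(c);
                totalShards += c; // derived state rebuilt from what was read
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Response original = new Response();
        original.perNodeShardCounts.add(3);
        original.perNodeShardCounts.add(7);

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.writeTo(new DataOutputStream(bytes));

        Response copy = new Response();
        copy.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(copy.totalShards); // 10, recomputed on read
    }
}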


@ -19,6 +19,7 @@
package org.elasticsearch.action.admin.cluster.stats;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;
import org.elasticsearch.action.admin.indices.stats.CommonStats;
@ -46,7 +47,6 @@ import org.elasticsearch.transport.TransportService;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
/**
*
@ -55,8 +55,7 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
TransportClusterStatsAction.ClusterStatsNodeRequest, ClusterStatsNodeResponse> {
private static final CommonStatsFlags SHARD_STATS_FLAGS = new CommonStatsFlags(CommonStatsFlags.Flag.Docs, CommonStatsFlags.Flag.Store,
CommonStatsFlags.Flag.FieldData, CommonStatsFlags.Flag.QueryCache, CommonStatsFlags.Flag.Completion, CommonStatsFlags.Flag.Segments,
CommonStatsFlags.Flag.PercolatorCache);
CommonStatsFlags.Flag.FieldData, CommonStatsFlags.Flag.QueryCache, CommonStatsFlags.Flag.Completion, CommonStatsFlags.Flag.Segments);
private final NodeService nodeService;
private final IndicesService indicesService;
@ -68,22 +67,17 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
NodeService nodeService, IndicesService indicesService,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, ClusterStatsAction.NAME, clusterName, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ClusterStatsRequest::new, ClusterStatsNodeRequest::new, ThreadPool.Names.MANAGEMENT);
indexNameExpressionResolver, ClusterStatsRequest::new, ClusterStatsNodeRequest::new, ThreadPool.Names.MANAGEMENT,
ClusterStatsNodeResponse.class);
this.nodeService = nodeService;
this.indicesService = indicesService;
}
@Override
protected ClusterStatsResponse newResponse(ClusterStatsRequest clusterStatsRequest, AtomicReferenceArray responses) {
final List<ClusterStatsNodeResponse> nodeStats = new ArrayList<>(responses.length());
for (int i = 0; i < responses.length(); i++) {
Object resp = responses.get(i);
if (resp instanceof ClusterStatsNodeResponse) {
nodeStats.add((ClusterStatsNodeResponse) resp);
}
}
return new ClusterStatsResponse(System.currentTimeMillis(), clusterName,
clusterService.state().metaData().clusterUUID(), nodeStats.toArray(new ClusterStatsNodeResponse[nodeStats.size()]));
protected ClusterStatsResponse newResponse(ClusterStatsRequest request,
List<ClusterStatsNodeResponse> responses, List<FailedNodeException> failures) {
return new ClusterStatsResponse(System.currentTimeMillis(), clusterName, clusterService.state().metaData().clusterUUID(),
responses, failures);
}
@Override
@ -105,7 +99,7 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
for (IndexShard indexShard : indexService) {
if (indexShard.routingEntry() != null && indexShard.routingEntry().active()) {
// only report on fully started shards
shardsStats.add(new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesService.getIndicesQueryCache(), indexService.cache().getPercolatorQueryCache(), indexShard, SHARD_STATS_FLAGS), indexShard.commitStats()));
shardsStats.add(new ShardStats(indexShard.routingEntry(), indexShard.shardPath(), new CommonStats(indicesService.getIndicesQueryCache(), indexShard, SHARD_STATS_FLAGS), indexShard.commitStats()));
}
}
}
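
Editor's note: the newResponse override changes shape here — instead of filtering an untyped AtomicReferenceArray by instanceof, the superclass now hands over typed success and failure lists. A small before/after sketch with plain stand-in types:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class TypedNodeResponsesSketch {

    static final class NodeResponse {
        final String node;
        NodeResponse(String node) { this.node = node; }
    }

    // Before: responses arrive untyped; failures share the array with
    // successes and must be filtered out by instanceof.
    static List<NodeResponse> filterOld(AtomicReferenceArray<Object> responses) {
        List<NodeResponse> ok = new ArrayList<>(responses.length());
        for (int i = 0; i < responses.length(); i++) {
            Object resp = responses.get(i);
            if (resp instanceof NodeResponse) {
                ok.add((NodeResponse) resp);
            }
        }
        return ok;
    }

    // After: the framework has already split the results, so the override
    // simply consumes two typed lists.
    static String buildNew(List<NodeResponse> responses, List<Exception> failures) {
        return responses.size() + " ok, " + failures.size() + " failed";
    }

    public static void main(String[] args) {
        AtomicReferenceArray<Object> raw = new AtomicReferenceArray<>(
                new Object[] { new NodeResponse("n1"), new RuntimeException("boom") });
        System.out.println(filterOld(raw).size());                       // 1
        System.out.println(buildNew(filterOld(raw), new ArrayList<>())); // "1 ok, 0 failed"
    }
}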


@ -17,29 +17,30 @@
* under the License.
*/
package org.elasticsearch.action.indexedscripts.delete;
package org.elasticsearch.action.admin.cluster.storedscripts;
import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;
/**
*/
public class DeleteIndexedScriptAction extends Action<DeleteIndexedScriptRequest, DeleteIndexedScriptResponse, DeleteIndexedScriptRequestBuilder> {
public class DeleteStoredScriptAction extends Action<DeleteStoredScriptRequest, DeleteStoredScriptResponse,
DeleteStoredScriptRequestBuilder> {
public static final DeleteIndexedScriptAction INSTANCE = new DeleteIndexedScriptAction();
public static final String NAME = "indices:data/write/script/delete";
public static final DeleteStoredScriptAction INSTANCE = new DeleteStoredScriptAction();
public static final String NAME = "cluster:admin/script/delete";
private DeleteIndexedScriptAction() {
private DeleteStoredScriptAction() {
super(NAME);
}
@Override
public DeleteIndexedScriptResponse newResponse() {
return new DeleteIndexedScriptResponse();
public DeleteStoredScriptResponse newResponse() {
return new DeleteStoredScriptResponse();
}
@Override
public DeleteIndexedScriptRequestBuilder newRequestBuilder(ElasticsearchClient client) {
return new DeleteIndexedScriptRequestBuilder(client, this);
public DeleteStoredScriptRequestBuilder newRequestBuilder(ElasticsearchClient client) {
return new DeleteStoredScriptRequestBuilder(client, this);
}
}


@ -0,0 +1,96 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.storedscripts;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import static org.elasticsearch.action.ValidateActions.addValidationError;
public class DeleteStoredScriptRequest extends AcknowledgedRequest<DeleteStoredScriptRequest> {
private String id;
private String scriptLang;
DeleteStoredScriptRequest() {
}
public DeleteStoredScriptRequest(String scriptLang, String id) {
this.scriptLang = scriptLang;
this.id = id;
}
@Override
public ActionRequestValidationException validate() {
ActionRequestValidationException validationException = null;
if (id == null) {
validationException = addValidationError("id is missing", validationException);
} else if (id.contains("#")) {
validationException = addValidationError("id can't contain: '#'", validationException);
}
if (scriptLang == null) {
validationException = addValidationError("lang is missing", validationException);
} else if (scriptLang.contains("#")) {
validationException = addValidationError("lang can't contain: '#'", validationException);
}
return validationException;
}
public String scriptLang() {
return scriptLang;
}
public DeleteStoredScriptRequest scriptLang(String type) {
this.scriptLang = type;
return this;
}
public String id() {
return id;
}
public DeleteStoredScriptRequest id(String id) {
this.id = id;
return this;
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
scriptLang = in.readString();
id = in.readString();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(scriptLang);
out.writeString(id);
}
@Override
public String toString() {
return "delete script {[" + scriptLang + "][" + id + "]}";
}
}
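
Editor's note: the validate() method above accumulates errors through chained addValidationError calls, leaving the exception null while everything passes. A JDK-only sketch of the same accumulation pattern, using a List<String> as a stand-in for ActionRequestValidationException:

import java.util.ArrayList;
import java.util.List;

public class StoredScriptValidationSketch {

    // Mirrors the chained addValidationError(...) style: the running list
    // stays null until the first failure.
    static List<String> addValidationError(String error, List<String> errors) {
        if (errors == null) {
            errors = new ArrayList<>();
        }
        errors.add(error);
        return errors;
    }

    static List<String> validate(String lang, String id) {
        List<String> errors = null;
        if (id == null) {
            errors = addValidationError("id is missing", errors);
        } else if (id.contains("#")) {
            errors = addValidationError("id can't contain: '#'", errors);
        }
        if (lang == null) {
            errors = addValidationError("lang is missing", errors);
        } else if (lang.contains("#")) {
            errors = addValidationError("lang can't contain: '#'", errors);
        }
        return errors; // null means the request is valid
    }

    public static void main(String[] args) {
        System.out.println(validate("painless", "my#script")); // [id can't contain: '#']
        System.out.println(validate(null, "calc"));            // [lang is missing]
        System.out.println(validate("painless", "calc"));      // null
    }
}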


@ -0,0 +1,42 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.storedscripts;
import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder;
import org.elasticsearch.client.ElasticsearchClient;
public class DeleteStoredScriptRequestBuilder extends AcknowledgedRequestBuilder<DeleteStoredScriptRequest,
DeleteStoredScriptResponse, DeleteStoredScriptRequestBuilder> {
public DeleteStoredScriptRequestBuilder(ElasticsearchClient client, DeleteStoredScriptAction action) {
super(client, action, new DeleteStoredScriptRequest());
}
public DeleteStoredScriptRequestBuilder setScriptLang(String scriptLang) {
request.scriptLang(scriptLang);
return this;
}
public DeleteStoredScriptRequestBuilder setId(String id) {
request.id(id);
return this;
}
}
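
Editor's note: for context, a sketch of how this builder could be driven directly against the action shown above. The constructor and setters come from this diff; the blocking get() call (inherited from the request-builder base class) and the pre-built client are assumptions.

import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptAction;
import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptRequestBuilder;
import org.elasticsearch.action.admin.cluster.storedscripts.DeleteStoredScriptResponse;
import org.elasticsearch.client.ElasticsearchClient;

public class DeleteStoredScriptUsageSketch {

    // `client` is assumed to be an already-constructed ElasticsearchClient,
    // e.g. obtained from a TransportClient elsewhere in the application.
    static DeleteStoredScriptResponse deleteStoredScript(ElasticsearchClient client, String lang, String id) {
        return new DeleteStoredScriptRequestBuilder(client, DeleteStoredScriptAction.INSTANCE)
                .setScriptLang(lang)
                .setId(id)
                .get(); // assumed: synchronous execute from the builder base class
    }
}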

Some files were not shown because too many files have changed in this diff.