commit d3417fb022

Merge branch 'master' into feature/seq_no

* master: (516 commits)
  Avoid angering Log4j in TransportNodesActionTests
  Add trace logging when aquiring and releasing operation locks for replication requests
  Fix handler name on message not fully read
  Remove accidental import.
  Improve log message in TransportNodesAction
  Clean up of Script.
  Update Joda Time to version 2.9.5 (#21468)
  Remove unused ClusterService dependency from SearchPhaseController (#21421)
  Remove max_local_storage_nodes from elasticsearch.yml (#21467)
  Wait for all reindex subtasks before rethrottling
  Correcting a typo-Maan to Man-in README.textile (#21466)
  Fix InternalSearchHit#hasSource to return the proper boolean value (#21441)
  Replace all index date-math examples with the URI encoded form
  Fix typos (#21456)
  Adapt ES_JVM_OPTIONS packaging test to ubuntu-1204
  Add null check in InternalSearchHit#sourceRef to prevent NPE (#21431)
  Add VirtualBox version check (#21370)
  Export ES_JVM_OPTIONS for SysV init
  Skip reindex rethrottle tests with workers
  Make forbidden APIs be quieter about classpath warnings (#21443)
  ...
@@ -33,6 +33,7 @@ dependency-reduced-pom.xml
# testing stuff
**/.local*
.vagrant/
/logs/

# osx stuff
.DS_Store
@@ -88,7 +88,8 @@ Contributing to the Elasticsearch codebase
**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)

Make sure you have [Gradle](http://gradle.org) installed, as
Elasticsearch uses it as its build system.
Elasticsearch uses it as its build system. Gradle must be version 2.13 _exactly_ in
order to build successfully.

Eclipse users can automatically configure their IDE: `gradle eclipse`
then `File: Import: Existing Projects into Workspace`. Select the
@@ -123,7 +123,7 @@ There are many more options to perform search, after all, it's a search product

h3. Multi Tenant - Indices and Types

Maan, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.
Man, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.

Elasticsearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
@@ -200,7 +200,7 @@ We have just covered a very small portion of what Elasticsearch is all about. Fo

h3. Building from Source

Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have a modern version of Gradle installed - 2.13 should do.
Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have version 2.13 of Gradle installed.

In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.
@@ -16,22 +16,6 @@ following:
gradle assemble
-----------------------------

== Other test options

To disable and enable network transport, set the `tests.es.node.mode` system property.

Use network transport:

------------------------------------
-Dtests.es.node.mode=network
------------------------------------

Use local transport (default since 1.3):

-------------------------------------
-Dtests.es.node.mode=local
-------------------------------------

=== Running Elasticsearch from a checkout

In order to run Elasticsearch from source without building a package, you can
@@ -41,6 +25,12 @@ run it using Gradle:
gradle run
-------------------------------------

or to attach a remote debugger, run it as:

-------------------------------------
gradle run --debug-jvm
-------------------------------------

=== Test case filtering.

- `tests.class` is a class-filtering shell-like glob pattern,
@@ -363,7 +353,6 @@ These are the linux flavors the Vagrantfile currently supports:

* ubuntu-1204 aka precise
* ubuntu-1404 aka trusty
* ubuntu-1504 aka vivid
* ubuntu-1604 aka xenial
* debian-8 aka jessie, the current debian stable distribution
* centos-6
@ -30,13 +30,6 @@ Vagrant.configure(2) do |config|
|
|||
config.vm.box = "elastic/ubuntu-14.04-x86_64"
|
||||
ubuntu_common config
|
||||
end
|
||||
config.vm.define "ubuntu-1504" do |config|
|
||||
config.vm.box = "elastic/ubuntu-15.04-x86_64"
|
||||
ubuntu_common config, extra: <<-SHELL
|
||||
# Install Jayatana so we can work around it being present.
|
||||
[ -f /usr/share/java/jayatanaag.jar ] || install jayatana
|
||||
SHELL
|
||||
end
|
||||
config.vm.define "ubuntu-1604" do |config|
|
||||
config.vm.box = "elastic/ubuntu-16.04-x86_64"
|
||||
ubuntu_common config, extra: <<-SHELL
|
||||
|
@ -156,6 +149,7 @@ def dnf_common(config)
|
|||
update_command: "dnf check-update",
|
||||
update_tracking_file: "/var/cache/dnf/last_update",
|
||||
install_command: "dnf install -y",
|
||||
install_command_retries: 5,
|
||||
java_package: "java-1.8.0-openjdk-devel")
|
||||
if Vagrant.has_plugin?("vagrant-cachier")
|
||||
# Autodetect doesn't work....
|
||||
|
@ -205,6 +199,7 @@ def provision(config,
|
|||
update_command: 'required',
|
||||
update_tracking_file: 'required',
|
||||
install_command: 'required',
|
||||
install_command_retries: 0,
|
||||
java_package: 'required',
|
||||
extra: '')
|
||||
# Vagrant run ruby 2.0.0 which doesn't have required named parameters....
|
||||
|
@@ -215,9 +210,27 @@ def provision(config,
config.vm.provision "bats dependencies", type: "shell", inline: <<-SHELL
set -e
set -o pipefail

# Retry install command up to $2 times, if failed
retry_installcommand() {
n=0
while true; do
#{install_command} $1 && break
let n=n+1
if [ $n -ge $2 ]; then
echo "==> Exhausted retries to install $1"
return 1
fi
echo "==> Retrying installing $1, attempt $((n+1))"
# Add a small delay to increase chance of metalink providing updated list of mirrors
sleep 5
done
}

installed() {
command -v $1 2>&1 >/dev/null
}

install() {
# Only apt-get update if we haven't in the last day
if [ ! -f #{update_tracking_file} ] || [ "x$(find #{update_tracking_file} -mtime +0)" == "x#{update_tracking_file}" ]; then

@@ -226,8 +239,14 @@ def provision(config,
touch #{update_tracking_file}
fi
echo "==> Installing $1"
#{install_command} $1
if [ #{install_command_retries} -eq 0 ]
then
#{install_command} $1
else
retry_installcommand $1 #{install_command_retries}
fi
}

ensure() {
installed $1 || install $1
}
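The `retry_installcommand` helper above is a bounded retry loop with a fixed delay between attempts. The same idea in Java, as a minimal self-contained sketch for illustration only (it is not part of the Vagrant provisioning; the `Command` and `runWithRetries` names are made up):

```java
// Bounded-retry sketch: run a command, retrying up to maxAttempts times
// with a short fixed delay between attempts, mirroring retry_installcommand.
public final class RetrySketch {

    interface Command {
        boolean run(); // returns true on success
    }

    static boolean runWithRetries(Command command, int maxAttempts, long delayMillis) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (command.run()) {
                return true;               // success, stop retrying
            }
            if (attempt < maxAttempts) {
                Thread.sleep(delayMillis); // small delay before the next attempt
            }
        }
        return false;                      // exhausted retries
    }

    public static void main(String[] args) throws InterruptedException {
        boolean ok = runWithRetries(() -> Math.random() > 0.5, 5, 100);
        System.out.println(ok ? "installed" : "exhausted retries");
    }
}
```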
@ -58,57 +58,57 @@ public class AllocationBenchmark {
|
|||
// support to constrain the combinations of benchmark parameters and we do not want to rely on OptionsBuilder as each benchmark would
|
||||
// need its own main method and we cannot execute more than one class with a main method per JAR.
|
||||
@Param({
|
||||
// indices, shards, replicas, nodes
|
||||
" 10, 1, 0, 1",
|
||||
" 10, 3, 0, 1",
|
||||
" 10, 10, 0, 1",
|
||||
" 100, 1, 0, 1",
|
||||
" 100, 3, 0, 1",
|
||||
" 100, 10, 0, 1",
|
||||
// indices| shards| replicas| nodes
|
||||
" 10| 1| 0| 1",
|
||||
" 10| 3| 0| 1",
|
||||
" 10| 10| 0| 1",
|
||||
" 100| 1| 0| 1",
|
||||
" 100| 3| 0| 1",
|
||||
" 100| 10| 0| 1",
|
||||
|
||||
" 10, 1, 0, 10",
|
||||
" 10, 3, 0, 10",
|
||||
" 10, 10, 0, 10",
|
||||
" 100, 1, 0, 10",
|
||||
" 100, 3, 0, 10",
|
||||
" 100, 10, 0, 10",
|
||||
" 10| 1| 0| 10",
|
||||
" 10| 3| 0| 10",
|
||||
" 10| 10| 0| 10",
|
||||
" 100| 1| 0| 10",
|
||||
" 100| 3| 0| 10",
|
||||
" 100| 10| 0| 10",
|
||||
|
||||
" 10, 1, 1, 10",
|
||||
" 10, 3, 1, 10",
|
||||
" 10, 10, 1, 10",
|
||||
" 100, 1, 1, 10",
|
||||
" 100, 3, 1, 10",
|
||||
" 100, 10, 1, 10",
|
||||
" 10| 1| 1| 10",
|
||||
" 10| 3| 1| 10",
|
||||
" 10| 10| 1| 10",
|
||||
" 100| 1| 1| 10",
|
||||
" 100| 3| 1| 10",
|
||||
" 100| 10| 1| 10",
|
||||
|
||||
" 10, 1, 2, 10",
|
||||
" 10, 3, 2, 10",
|
||||
" 10, 10, 2, 10",
|
||||
" 100, 1, 2, 10",
|
||||
" 100, 3, 2, 10",
|
||||
" 100, 10, 2, 10",
|
||||
" 10| 1| 2| 10",
|
||||
" 10| 3| 2| 10",
|
||||
" 10| 10| 2| 10",
|
||||
" 100| 1| 2| 10",
|
||||
" 100| 3| 2| 10",
|
||||
" 100| 10| 2| 10",
|
||||
|
||||
" 10, 1, 0, 50",
|
||||
" 10, 3, 0, 50",
|
||||
" 10, 10, 0, 50",
|
||||
" 100, 1, 0, 50",
|
||||
" 100, 3, 0, 50",
|
||||
" 100, 10, 0, 50",
|
||||
" 10| 1| 0| 50",
|
||||
" 10| 3| 0| 50",
|
||||
" 10| 10| 0| 50",
|
||||
" 100| 1| 0| 50",
|
||||
" 100| 3| 0| 50",
|
||||
" 100| 10| 0| 50",
|
||||
|
||||
" 10, 1, 1, 50",
|
||||
" 10, 3, 1, 50",
|
||||
" 10, 10, 1, 50",
|
||||
" 100, 1, 1, 50",
|
||||
" 100, 3, 1, 50",
|
||||
" 100, 10, 1, 50",
|
||||
" 10| 1| 1| 50",
|
||||
" 10| 3| 1| 50",
|
||||
" 10| 10| 1| 50",
|
||||
" 100| 1| 1| 50",
|
||||
" 100| 3| 1| 50",
|
||||
" 100| 10| 1| 50",
|
||||
|
||||
" 10, 1, 2, 50",
|
||||
" 10, 3, 2, 50",
|
||||
" 10, 10, 2, 50",
|
||||
" 100, 1, 2, 50",
|
||||
" 100, 3, 2, 50",
|
||||
" 100, 10, 2, 50"
|
||||
" 10| 1| 2| 50",
|
||||
" 10| 3| 2| 50",
|
||||
" 10| 10| 2| 50",
|
||||
" 100| 1| 2| 50",
|
||||
" 100| 3| 2| 50",
|
||||
" 100| 10| 2| 50"
|
||||
})
|
||||
public String indicesShardsReplicasNodes = "10,1,0,1";
|
||||
public String indicesShardsReplicasNodes = "10|1|0|1";
|
||||
|
||||
public int numTags = 2;
|
||||
|
||||
|
@@ -117,7 +117,7 @@ public class AllocationBenchmark {

@Setup
public void setUp() throws Exception {
final String[] params = indicesShardsReplicasNodes.split(",");
final String[] params = indicesShardsReplicasNodes.split("\\|");

int numIndices = toInt(params[0]);
int numShards = toInt(params[1]);
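The benchmark parameters switch from a comma to a pipe separator, which is why the `split` argument becomes `"\\|"`: `String.split` takes a regular expression, and an unescaped `|` is the regex alternation operator, so `split("|")` would split between every character. A small self-contained check (illustrative only, not part of the benchmark code):

```java
public final class SplitSketch {
    public static void main(String[] args) {
        String params = " 10| 1| 0| 1";
        System.out.println(params.split("\\|").length);    // 4 fields, as intended
        System.out.println(params.split("|").length);       // many single-character "fields"
        System.out.println(params.split("\\|")[1].trim());  // prints 1
    }
}
```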
@ -31,15 +31,18 @@ import org.elasticsearch.cluster.routing.allocation.decider.AllocationDecider;
|
|||
import org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders;
|
||||
import org.elasticsearch.common.settings.ClusterSettings;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.transport.LocalTransportAddress;
|
||||
import org.elasticsearch.common.transport.TransportAddress;
|
||||
import org.elasticsearch.common.util.set.Sets;
|
||||
import org.elasticsearch.gateway.GatewayAllocator;
|
||||
|
||||
import java.lang.reflect.InvocationTargetException;
|
||||
import java.net.InetAddress;
|
||||
import java.net.UnknownHostException;
|
||||
import java.util.Collection;
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
|
||||
public final class Allocators {
|
||||
private static class NoopGatewayAllocator extends GatewayAllocator {
|
||||
|
@@ -91,8 +94,11 @@ public final class Allocators {

}

private static final AtomicInteger portGenerator = new AtomicInteger();

public static DiscoveryNode newNode(String nodeId, Map<String, String> attributes) {
return new DiscoveryNode("", nodeId, LocalTransportAddress.buildUnique(), attributes, Sets.newHashSet(DiscoveryNode.Role.MASTER,
return new DiscoveryNode("", nodeId, new TransportAddress(TransportAddress.META_ADDRESS,
portGenerator.incrementAndGet()), attributes, Sets.newHashSet(DiscoveryNode.Role.MASTER,
DiscoveryNode.Role.DATA), Version.CURRENT);
}
}
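With `LocalTransportAddress` gone, each fake test node instead gets a distinct address by pairing a placeholder host with a freshly incremented port. A tiny self-contained sketch of that pattern (the `FakePorts` name is invented for illustration and is not part of the change):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hand out a unique dummy port per call so every test node gets a distinct
// address; AtomicInteger keeps this safe even across threads.
public final class FakePorts {
    private static final AtomicInteger PORT = new AtomicInteger();

    public static int next() {
        return PORT.incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(next()); // 1
        System.out.println(next()); // 2
    }
}
```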
@@ -28,6 +28,7 @@ import org.gradle.api.Task
import org.gradle.api.XmlProvider
import org.gradle.api.artifacts.Configuration
import org.gradle.api.artifacts.ModuleDependency
import org.gradle.api.artifacts.ModuleVersionIdentifier
import org.gradle.api.artifacts.ProjectDependency
import org.gradle.api.artifacts.ResolvedArtifact
import org.gradle.api.artifacts.dsl.RepositoryHandler
@@ -294,12 +295,15 @@ class BuildPlugin implements Plugin<Project> {
* Returns a closure which can be used with a MavenPom for fixing problems with gradle generated poms.
*
* <ul>
* <li>Remove transitive dependencies (using wildcard exclusions, fixed in gradle 2.14)</li>
* <li>Set compile time deps back to compile from runtime (known issue with maven-publish plugin)
* <li>Remove transitive dependencies. We currently exclude all artifacts explicitly instead of using wildcards
* as Ivy incorrectly translates POMs with * excludes to Ivy XML with * excludes which results in the main artifact
* being excluded as well (see https://issues.apache.org/jira/browse/IVY-1531). Note that Gradle 2.14+ automatically
* translates non-transitive dependencies to * excludes. We should revisit this when upgrading Gradle.</li>
* <li>Set compile time deps back to compile from runtime (known issue with maven-publish plugin)</li>
* </ul>
*/
private static Closure fixupDependencies(Project project) {
// TODO: remove this when enforcing gradle 2.14+, it now properly handles exclusions
// TODO: revisit this when upgrading to Gradle 2.14+, see Javadoc comment above
return { XmlProvider xml ->
// first find if we have dependencies at all, and grab the node
NodeList depsNodes = xml.asNode().get('dependencies')
@@ -334,10 +338,19 @@ class BuildPlugin implements Plugin<Project> {
continue
}

// we now know we have something to exclude, so add a wildcard exclusion element
Node exclusion = depNode.appendNode('exclusions').appendNode('exclusion')
exclusion.appendNode('groupId', '*')
exclusion.appendNode('artifactId', '*')
// we now know we have something to exclude, so add exclusions for all artifacts except the main one
Node exclusions = depNode.appendNode('exclusions')
for (ResolvedArtifact artifact : artifacts) {
ModuleVersionIdentifier moduleVersionIdentifier = artifact.moduleVersion.id;
String depGroupId = moduleVersionIdentifier.group
String depArtifactId = moduleVersionIdentifier.name
// add exclusions for all artifacts except the main one
if (depGroupId != groupId || depArtifactId != artifactId) {
Node exclusion = exclusions.appendNode('exclusion')
exclusion.appendNode('groupId', depGroupId)
exclusion.appendNode('artifactId', depArtifactId)
}
}
}
}
}
@@ -393,7 +406,7 @@ class BuildPlugin implements Plugin<Project> {
}

options.encoding = 'UTF-8'
//options.incremental = true
options.incremental = true

if (project.javaVersion == JavaVersion.VERSION_1_9) {
// hack until gradle supports java 9's new "--release" arg
@@ -38,7 +38,7 @@ public class DocsTestPlugin extends RestTestPlugin {
* the last released version for docs. */
'\\{version\\}':
VersionProperties.elasticsearch.replace('-SNAPSHOT', ''),
'\\{lucene_version\\}' : VersionProperties.lucene,
'\\{lucene_version\\}' : VersionProperties.lucene.replaceAll('-snapshot-\\w+$', ''),
]
Task listSnippets = project.tasks.create('listSnippets', SnippetsTask)
listSnippets.group 'Docs'
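The new `{lucene_version}` substitution strips the snapshot suffix so the docs show a plain version number. The regex is easy to sanity-check in isolation; `6.3.0-snapshot-a66a445` is the Lucene version pinned elsewhere in this same change (small self-contained sketch, not part of the plugin):

```java
public final class LuceneVersionSketch {
    public static void main(String[] args) {
        String lucene = "6.3.0-snapshot-a66a445";
        // "-snapshot-" followed by word characters at the end of the string is removed
        System.out.println(lucene.replaceAll("-snapshot-\\w+$", ""));  // prints 6.3.0
        // a non-snapshot version is left untouched
        System.out.println("6.2.0".replaceAll("-snapshot-\\w+$", "")); // prints 6.2.0
    }
}
```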
@@ -39,6 +39,7 @@ public class SnippetsTask extends DefaultTask {
private static final String SKIP = /skip:([^\]]+)/
private static final String SETUP = /setup:([^ \]]+)/
private static final String WARNING = /warning:(.+)/
private static final String CAT = /(_cat)/
private static final String TEST_SYNTAX =
/(?:$CATCH|$SUBSTITUTION|$SKIP|(continued)|$SETUP|$WARNING) ?/
@@ -221,8 +222,17 @@ public class SnippetsTask extends DefaultTask {
substitutions = []
}
String loc = "$file:$lineNumber"
parse(loc, matcher.group(2), /$SUBSTITUTION ?/) {
substitutions.add([it.group(1), it.group(2)])
parse(loc, matcher.group(2), /(?:$SUBSTITUTION|$CAT) ?/) {
if (it.group(1) != null) {
// TESTRESPONSE[s/adsf/jkl/]
substitutions.add([it.group(1), it.group(2)])
} else if (it.group(3) != null) {
// TESTRESPONSE[_cat]
substitutions.add(['^', '/'])
substitutions.add(['\n$', '\\\\s*/'])
substitutions.add(['( +)', '$1\\\\s+'])
substitutions.add(['\n', '\\\\s*\n '])
}
}
}
return
@ -49,8 +49,7 @@ public class PluginBuildPlugin extends BuildPlugin {
|
|||
project.afterEvaluate {
|
||||
boolean isModule = project.path.startsWith(':modules:')
|
||||
String name = project.pluginProperties.extension.name
|
||||
project.jar.baseName = name
|
||||
project.bundlePlugin.baseName = name
|
||||
project.archivesBaseName = name
|
||||
|
||||
if (project.pluginProperties.extension.hasClientJar) {
|
||||
// for plugins which work with the transport client, we copy the jar
|
||||
|
@ -232,6 +231,7 @@ public class PluginBuildPlugin extends BuildPlugin {
|
|||
* ahold of the actual task. Furthermore, this entire hack only exists so we can make publishing to
|
||||
* maven local work, since we publish to maven central externally. */
|
||||
zipReal(MavenPublication) {
|
||||
artifactId = project.pluginProperties.extension.name
|
||||
pom.withXml { XmlProvider xml ->
|
||||
Node root = xml.asNode()
|
||||
root.appendNode('name', project.pluginProperties.extension.name)
|
||||
|
|
|
@ -143,6 +143,10 @@ public class ThirdPartyAuditTask extends AntTask {
|
|||
if (m.matches()) {
|
||||
missingClasses.add(m.group(1).replace('.', '/') + ".class");
|
||||
}
|
||||
|
||||
// Reset the priority of the event to DEBUG, so it doesn't
|
||||
// pollute the build output
|
||||
event.setMessage(event.getMessage(), Project.MSG_DEBUG);
|
||||
} else if (event.getPriority() == Project.MSG_ERR) {
|
||||
Matcher m = VIOLATION_PATTERN.matcher(event.getMessage());
|
||||
if (m.matches()) {
|
||||
|
|
|
@@ -62,6 +62,15 @@ class ClusterConfiguration {
@Input
boolean debug = false

/**
* if <code>true</code> each node will be configured with <tt>discovery.zen.minimum_master_nodes</tt> set
* to the total number of nodes in the cluster. This will also cause that each node has `0s` state recovery
* timeout which can lead to issues if for instance an existing clusterstate is expected to be recovered
* before any tests start
*/
@Input
boolean useMinimumMasterNodes = true

@Input
String jvmArgs = "-Xms" + System.getProperty('tests.heap.size', '512m') +
" " + "-Xmx" + System.getProperty('tests.heap.size', '512m') +
@@ -95,11 +104,13 @@ class ClusterConfiguration {
@Input
Closure waitCondition = { NodeInfo node, AntBuilder ant ->
File tmpFile = new File(node.cwd, 'wait.success')
ant.echo("==> [${new Date()}] checking health: http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=${numNodes}")
String waitUrl = "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=${numNodes}&wait_for_status=yellow"
ant.echo(message: "==> [${new Date()}] checking health: ${waitUrl}",
level: 'info')
// checking here for wait_for_nodes to be >= the number of nodes because its possible
// this cluster is attempting to connect to nodes created by another task (same cluster name),
// so there will be more nodes in that case in the cluster state
ant.get(src: "http://${node.httpUri()}/_cluster/health?wait_for_nodes=>=${numNodes}",
ant.get(src: waitUrl,
dest: tmpFile.toString(),
ignoreerrors: true, // do not fail on error, so logging buffers can be flushed by the wait task
retries: 10)
@@ -73,8 +73,8 @@ class ClusterFormationTasks {
}
// this is our current version distribution configuration we use for all kinds of REST tests etc.
String distroConfigName = "${task.name}_elasticsearchDistro"
Configuration distro = project.configurations.create(distroConfigName)
configureDistributionDependency(project, config.distribution, distro, VersionProperties.elasticsearch)
Configuration currentDistro = project.configurations.create(distroConfigName)
configureDistributionDependency(project, config.distribution, currentDistro, VersionProperties.elasticsearch)
if (config.bwcVersion != null && config.numBwcNodes > 0) {
// if we have a cluster that has a BWC cluster we also need to configure a dependency on the BWC version
// this version uses the same distribution etc. and only differs in the version we depend on.
@@ -85,11 +85,11 @@ class ClusterFormationTasks {
}
configureDistributionDependency(project, config.distribution, project.configurations.elasticsearchBwcDistro, config.bwcVersion)
}

for (int i = 0; i < config.numNodes; ++i) {
for (int i = 0; i < config.numNodes; i++) {
// we start N nodes and out of these N nodes there might be M bwc nodes.
// for each of those nodes we might have a different configuration
String elasticsearchVersion = VersionProperties.elasticsearch
Configuration distro = currentDistro
if (i < config.numBwcNodes) {
elasticsearchVersion = config.bwcVersion
distro = project.configurations.elasticsearchBwcDistro
@@ -252,9 +252,17 @@ class ClusterFormationTasks {
'path.repo' : "${node.sharedDir}/repo",
'path.shared_data' : "${node.sharedDir}/",
// Define a node attribute so we can test that it exists
'node.attr.testattr' : 'test',
'node.attr.testattr' : 'test',
'repositories.url.allowed_urls': 'http://snapshot.test*'
]
// we set min master nodes to the total number of nodes in the cluster and
// basically skip initial state recovery to allow the cluster to form using a realistic master election
// this means all nodes must be up, join the seed node and do a master election. This will also allow new and
// old nodes in the BWC case to become the master
if (node.config.useMinimumMasterNodes && node.config.numNodes > 1) {
esConfig['discovery.zen.minimum_master_nodes'] = node.config.numNodes
esConfig['discovery.initial_state_timeout'] = '0s' // don't wait for state.. just start up quickly
}
esConfig['node.max_local_storage_nodes'] = node.config.numNodes
esConfig['http.port'] = node.config.httpPort
esConfig['transport.tcp.port'] = node.config.transportPort
@@ -55,7 +55,9 @@ public class RestIntegTestTask extends RandomizedTestingTask {
parallelism = '1'
include('**/*IT.class')
systemProperty('tests.rest.load_packaged', 'false')
systemProperty('tests.rest.cluster', "${-> nodes[0].httpUri()}")
// we pass all nodes to the rest cluster to allow the clients to round-robin between them
// this is more realistic than just talking to a single node
systemProperty('tests.rest.cluster', "${-> nodes.collect{it.httpUri()}.join(",")}")
systemProperty('tests.config.dir', "${-> nodes[0].confDir}")
// TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin
// that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass
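The `tests.rest.cluster` property now carries every node's HTTP address joined by commas instead of only the first node, so REST clients can round-robin across the cluster. The equivalent collect-and-join step in plain Java (a self-contained sketch; the addresses are made-up values):

```java
import java.util.Arrays;
import java.util.List;

public final class JoinUrisSketch {
    public static void main(String[] args) {
        List<String> httpUris = Arrays.asList("127.0.0.1:9200", "127.0.0.1:9201", "127.0.0.1:9202");
        // same shape as nodes.collect{it.httpUri()}.join(",") in the Groovy above
        String cluster = String.join(",", httpUris);
        System.out.println(cluster); // 127.0.0.1:9200,127.0.0.1:9201,127.0.0.1:9202
    }
}
```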
@@ -16,6 +16,7 @@ public class RunTask extends DefaultTask {
clusterConfig.httpPort = 9200
clusterConfig.transportPort = 9300
clusterConfig.daemonize = false
clusterConfig.distribution = 'zip'
project.afterEvaluate {
ClusterFormationTasks.setup(project, this, clusterConfig)
}
@ -38,7 +38,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]ClusterUpdateSettingsRequestBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]SettingsUpdater.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]settings[/\\]TransportClusterUpdateSettingsAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]shards[/\\]ClusterSearchShardsAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]shards[/\\]ClusterSearchShardsRequestBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]shards[/\\]TransportClusterSearchShardsAction.java" checks="LineLength" />
|
||||
|
@ -64,8 +63,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]validate[/\\]template[/\\]RenderSearchTemplateRequestBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]cluster[/\\]validate[/\\]template[/\\]TransportRenderSearchTemplateAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]alias[/\\]Alias.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]alias[/\\]IndicesAliasesRequest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]alias[/\\]IndicesAliasesRequestBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]alias[/\\]TransportIndicesAliasesAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]alias[/\\]exists[/\\]TransportAliasesExistAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]alias[/\\]get[/\\]BaseAliasesRequestBuilder.java" checks="LineLength" />
|
||||
|
@ -117,8 +114,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]delete[/\\]TransportDeleteIndexTemplateAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]get[/\\]GetIndexTemplatesRequestBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]get[/\\]TransportGetIndexTemplatesAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]PutIndexTemplateRequest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]PutIndexTemplateRequestBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]TransportPutIndexTemplateAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]upgrade[/\\]get[/\\]IndexUpgradeStatus.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]upgrade[/\\]get[/\\]TransportUpgradeStatusAction.java" checks="LineLength" />
|
||||
|
@ -146,7 +141,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]GetRequest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]MultiGetRequest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]TransportGetAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]TransportMultiGetAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]get[/\\]TransportShardMultiGetAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]index[/\\]IndexRequest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]index[/\\]IndexRequestBuilder.java" checks="LineLength" />
|
||||
|
@ -230,7 +224,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]support[/\\]AbstractClient.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]TransportClient.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]client[/\\]transport[/\\]support[/\\]TransportProxyClient.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterModule.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterState.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateObserver.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]ClusterStateUpdateTask.java" checks="LineLength" />
|
||||
|
@ -245,7 +238,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]AutoExpandReplicas.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]IndexMetaData.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]IndexNameExpressionResolver.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]IndexTemplateMetaData.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]MappingMetaData.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]MetaData.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]metadata[/\\]MetaDataCreateIndexService.java" checks="LineLength" />
|
||||
|
@ -258,7 +250,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]IndexRoutingTable.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]IndexShardRoutingTable.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]OperationRouting.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingNode.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingNodes.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingService.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]cluster[/\\]routing[/\\]RoutingTable.java" checks="LineLength" />
|
||||
|
@ -299,7 +290,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]store[/\\]ByteArrayIndexInput.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]lucene[/\\]uid[/\\]Versions.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]Cidrs.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]NetworkModule.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]network[/\\]NetworkService.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]recycler[/\\]Recyclers.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]BigArrays.java" checks="LineLength" />
|
||||
|
@ -310,23 +300,14 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]concurrent[/\\]PrioritizedEsThreadPoolExecutor.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]concurrent[/\\]ThreadBarrier.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]util[/\\]concurrent[/\\]ThreadContext.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]XContentBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]XContentFactory.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]XContentHelper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]smile[/\\]SmileXContent.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]common[/\\]xcontent[/\\]support[/\\]XContentMapValues.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]Discovery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryModule.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryService.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoverySettings.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]local[/\\]LocalDiscovery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscovery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]elect[/\\]ElectMasterService.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]fd[/\\]FaultDetection.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]membership[/\\]MembershipAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ping[/\\]ZenPing.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PendingClusterStatesQueue.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]ESFileStore.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]AsyncShardFetch.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]GatewayAllocator.java" checks="LineLength" />
|
||||
|
@ -388,7 +369,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]CompletionFieldMapper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]LegacyDateFieldMapper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]LegacyDoubleFieldMapper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]LegacyFloatFieldMapper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]LegacyNumberFieldMapper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]StringFieldMapper.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]LegacyTokenCountFieldMapper.java" checks="LineLength" />
|
||||
|
@ -419,7 +399,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]QueryParsers.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]search[/\\]MatchQuery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]search[/\\]MultiMatchQuery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]search[/\\]geo[/\\]GeoDistanceRangeQuery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]search[/\\]geo[/\\]IndexedGeoBoundingBoxQuery.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]CommitPoint.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]shard[/\\]IndexEventListener.java" checks="LineLength" />
|
||||
|
@ -463,7 +442,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]ttl[/\\]IndicesTTLService.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]GcNames.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]HotThreads.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmStats.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]Node.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]node[/\\]internal[/\\]InternalSettingsPreparer.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsService.java" checks="LineLength" />
|
||||
|
@ -481,8 +459,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]RestController.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestCountAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestIndicesAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestNodesAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestPendingClusterTasksAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestShardsAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]rest[/\\]action[/\\]cat[/\\]RestThreadPoolAction.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]ScriptContextRegistry.java" checks="LineLength" />
|
||||
|
@ -553,8 +529,6 @@
|
|||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]dfs[/\\]AggregatedDfs.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]dfs[/\\]DfsSearchResult.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]FetchPhase.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]FetchSearchResult.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]fetch[/\\]FetchSubPhase.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]internal[/\\]InternalSearchHit.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]internal[/\\]ShardSearchTransportRequest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]search[/\\]lookup[/\\]FieldLookup.java" checks="LineLength" />
|
||||
|
@ -595,7 +569,6 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]get[/\\]GetIndexIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreRequestIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]shards[/\\]IndicesShardStoreResponseTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]admin[/\\]indices[/\\]template[/\\]put[/\\]MetaDataIndexTemplateServiceTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkProcessorIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]BulkRequestTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]action[/\\]bulk[/\\]RetryTests.java" checks="LineLength" />
|
||||
|
@ -723,7 +696,6 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]DiscoveryWithServiceDisruptionsIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]ZenUnicastDiscoveryIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]ZenDiscoveryUnitTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]discovery[/\\]zen[/\\]publish[/\\]PublishClusterStateActionTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]document[/\\]DocumentActionsIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]EnvironmentTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]env[/\\]NodeEnvironmentTests.java" checks="LineLength" />
|
||||
|
@ -742,7 +714,6 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReplicaShardAllocatorTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReusePeerRecoverySharedTest.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]get[/\\]GetActionIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexModuleTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexServiceTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexWithShadowReplicasIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexingSlowLogTests.java" checks="LineLength" />
|
||||
|
@ -791,7 +762,6 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]ExternalFieldMapperTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]GeoEncodingTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]GeoPointFieldMapperTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]GeoPointFieldTypeTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]GeoShapeFieldMapperTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]GeohashMappingGeoPointTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]mapper[/\\]IdFieldMapperTests.java" checks="LineLength" />
|
||||
|
@ -819,14 +789,11 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoolQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]BoostingQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]CommonTermsQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]FieldMaskingSpanQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]GeoDistanceQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]HasChildQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]HasParentQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MoreLikeThisQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]MultiMatchQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RandomQueryBuilder.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]RangeQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanMultiTermQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]SpanNotQueryBuilderTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]query[/\\]support[/\\]QueryInnerHitsTests.java" checks="LineLength" />
|
||||
|
@ -853,7 +820,6 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indexlifecycle[/\\]IndexLifecycleActionIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndexingMemoryControllerTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesLifecycleListenerSingleNodeTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]IndicesOptionsIntegrationIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]AnalyzeActionIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]analyze[/\\]HunspellServiceIT.java" checks="LineLength" />
|
||||
|
@ -881,8 +847,6 @@
|
|||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]stats[/\\]IndexStatsIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreIntegrationIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]store[/\\]IndicesStoreTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]indices[/\\]template[/\\]SimpleIndexTemplateIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]mget[/\\]SimpleMgetIT.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]monitor[/\\]jvm[/\\]JvmGcMonitorServiceSettingsTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginInfoTests.java" checks="LineLength" />
|
||||
<suppress files="core[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]plugins[/\\]PluginsServiceTests.java" checks="LineLength" />
|
||||
|
@ -981,7 +945,6 @@
|
|||
<suppress files="modules[/\\]lang-expression[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]expression[/\\]IndexedExpressionTests.java" checks="LineLength" />
|
||||
<suppress files="modules[/\\]lang-expression[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]expression[/\\]MoreExpressionTests.java" checks="LineLength" />
|
||||
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovyScriptEngineService.java" checks="LineLength" />
|
||||
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovyScriptTests.java" checks="LineLength" />
|
||||
<suppress files="modules[/\\]lang-groovy[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]script[/\\]groovy[/\\]GroovySecurityTests.java" checks="LineLength" />
|
||||
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequest.java" checks="LineLength" />
|
||||
<suppress files="modules[/\\]percolator[/\\]src[/\\]main[/\\]java[/\\]org[/\\]elasticsearch[/\\]percolator[/\\]MultiPercolateRequestBuilder.java" checks="LineLength" />
|
||||
|
|
|
@@ -1,17 +1,17 @@
elasticsearch = 6.0.0-alpha1
lucene = 6.2.0
lucene = 6.3.0-snapshot-a66a445

# optional dependencies
spatial4j = 0.6
jts = 1.13
jackson = 2.8.1
snakeyaml = 1.15
log4j = 2.6.2
log4j = 2.7
slf4j = 1.6.2
jna = 4.2.2

# test dependencies
randomizedrunner = 2.3.2
randomizedrunner = 2.4.0
junit = 4.11
httpclient = 4.5.2
httpcore = 4.4.5

@@ -20,4 +20,4 @@ commonscodec = 1.10
hamcrest = 1.3
securemock = 1.2
# benchmark dependencies
jmh = 1.14
jmh = 1.15
@ -27,7 +27,7 @@ import org.elasticsearch.client.benchmark.ops.bulk.BulkRequestExecutor;
|
|||
import org.elasticsearch.client.benchmark.ops.search.SearchRequestExecutor;
|
||||
import org.elasticsearch.client.transport.TransportClient;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.transport.InetSocketTransportAddress;
|
||||
import org.elasticsearch.common.transport.TransportAddress;
|
||||
import org.elasticsearch.index.query.QueryBuilders;
|
||||
import org.elasticsearch.plugin.noop.NoopPlugin;
|
||||
import org.elasticsearch.plugin.noop.action.bulk.NoopBulkAction;
|
||||
|
@ -51,7 +51,7 @@ public final class TransportClientBenchmark extends AbstractBenchmark<TransportC
|
|||
@Override
|
||||
protected TransportClient client(String benchmarkTargetHost) throws Exception {
|
||||
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY, NoopPlugin.class);
|
||||
client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(benchmarkTargetHost), 9300));
|
||||
client.addTransportAddress(new TransportAddress(InetAddress.getByName(benchmarkTargetHost), 9300));
|
||||
return client;
|
||||
}
|
||||
|
||||
|
|
|
@ -19,6 +19,7 @@
|
|||
package org.elasticsearch.plugin.noop.action.bulk;
|
||||
|
||||
import org.elasticsearch.action.DocWriteResponse;
|
||||
import org.elasticsearch.action.DocWriteRequest;
|
||||
import org.elasticsearch.action.bulk.BulkItemResponse;
|
||||
import org.elasticsearch.action.bulk.BulkRequest;
|
||||
import org.elasticsearch.action.bulk.BulkShardRequest;
|
||||
|
@ -39,6 +40,8 @@ import org.elasticsearch.rest.RestRequest;
|
|||
import org.elasticsearch.rest.RestResponse;
|
||||
import org.elasticsearch.rest.action.RestBuilderListener;
|
||||
|
||||
import java.io.IOException;
|
||||
|
||||
import static org.elasticsearch.rest.RestRequest.Method.POST;
|
||||
import static org.elasticsearch.rest.RestRequest.Method.PUT;
|
||||
import static org.elasticsearch.rest.RestStatus.OK;
|
||||
|
@@ -57,7 +60,7 @@ public class RestNoopBulkAction extends BaseRestHandler {
     }

     @Override
-    public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws Exception {
+    public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
         BulkRequest bulkRequest = Requests.bulkRequest();
         String defaultIndex = request.param("index");
         String defaultType = request.param("type");
@@ -75,13 +78,14 @@ public class RestNoopBulkAction extends BaseRestHandler {
         bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, null, defaultPipeline, null, true);

         // short circuit the call to the transport layer
-        BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request);
-        listener.onResponse(bulkRequest);
+        return channel -> {
+            BulkRestBuilderListener listener = new BulkRestBuilderListener(channel, request);
+            listener.onResponse(bulkRequest);
+        };
     }

     private static class BulkRestBuilderListener extends RestBuilderListener<BulkRequest> {
-        private final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, "update",
+        private final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, DocWriteRequest.OpType.UPDATE,
             new UpdateResponse(new ShardId("mock", "", 1), "mock_type", "1", 1L, DocWriteResponse.Result.CREATED));

         private final RestRequest request;
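The hunks above move the noop bulk handler from the old handleRequest(request, channel, client) signature to prepareRequest(request, client), which parses the request eagerly and returns a RestChannelConsumer that only runs once a channel is handed to it. A minimal, hedged sketch of that contract follows; the handler name, endpoint, and response body are invented for illustration and are not part of this change.

// Hypothetical handler sketching the prepareRequest/RestChannelConsumer pattern shown above.
package org.elasticsearch.plugin.noop.action.example;

import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;

import java.io.IOException;

import static org.elasticsearch.rest.RestRequest.Method.GET;

public class RestNoopPingAction extends BaseRestHandler {

    @Inject
    public RestNoopPingAction(Settings settings, RestController controller) {
        super(settings);
        // Hypothetical endpoint, registered the same way as the noop actions above.
        controller.registerHandler(GET, "/_noop/ping", this);
    }

    @Override
    public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
        // Read request parameters up front, before any channel exists.
        final String name = request.param("name", "anonymous");
        // The returned consumer runs later, when the framework supplies the channel.
        return channel -> channel.sendResponse(new BytesRestResponse(RestStatus.OK, "pong " + name));
    }
}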
|
||||
|
|
|
@ -20,6 +20,7 @@ package org.elasticsearch.plugin.noop.action.bulk;
|
|||
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.DocWriteResponse;
|
||||
import org.elasticsearch.action.DocWriteRequest;
|
||||
import org.elasticsearch.action.bulk.BulkItemResponse;
|
||||
import org.elasticsearch.action.bulk.BulkRequest;
|
||||
import org.elasticsearch.action.bulk.BulkResponse;
|
||||
|
@ -34,7 +35,7 @@ import org.elasticsearch.threadpool.ThreadPool;
|
|||
import org.elasticsearch.transport.TransportService;
|
||||
|
||||
public class TransportNoopBulkAction extends HandledTransportAction<BulkRequest, BulkResponse> {
|
||||
private static final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, "update",
|
||||
private static final BulkItemResponse ITEM_RESPONSE = new BulkItemResponse(1, DocWriteRequest.OpType.UPDATE,
|
||||
new UpdateResponse(new ShardId("mock", "", 1), "mock_type", "1", 1L, DocWriteResponse.Result.CREATED));
|
||||
|
||||
@Inject
|
||||
|
|
|
@ -23,7 +23,6 @@ import org.elasticsearch.client.node.NodeClient;
|
|||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.rest.BaseRestHandler;
|
||||
import org.elasticsearch.rest.RestChannel;
|
||||
import org.elasticsearch.rest.RestController;
|
||||
import org.elasticsearch.rest.RestRequest;
|
||||
import org.elasticsearch.rest.action.RestStatusToXContentListener;
|
||||
|
@@ -47,8 +46,8 @@ public class RestNoopSearchAction extends BaseRestHandler {
     }

     @Override
-    public void handleRequest(final RestRequest request, final RestChannel channel, final NodeClient client) throws IOException {
+    public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
         SearchRequest searchRequest = new SearchRequest();
-        client.execute(NoopSearchAction.INSTANCE, searchRequest, new RestStatusToXContentListener<>(channel));
+        return channel -> client.execute(NoopSearchAction.INSTANCE, searchRequest, new RestStatusToXContentListener<>(channel));
     }
 }
|
||||
|
|
|
@ -38,25 +38,15 @@ import java.io.IOException;
|
|||
/**
|
||||
* Default implementation of {@link org.apache.http.nio.protocol.HttpAsyncResponseConsumer}. Buffers the whole
|
||||
* response content in heap memory, meaning that the size of the buffer is equal to the content-length of the response.
|
||||
* Limits the size of responses that can be read to {@link #DEFAULT_BUFFER_LIMIT} by default, configurable value.
|
||||
* Throws an exception in case the entity is longer than the configured buffer limit.
|
||||
* Limits the size of responses that can be read based on a configurable argument. Throws an exception in case the entity is longer
|
||||
* than the configured buffer limit.
|
||||
*/
|
||||
public class HeapBufferedAsyncResponseConsumer extends AbstractAsyncResponseConsumer<HttpResponse> {
|
||||
|
||||
//default buffer limit is 10MB
|
||||
public static final int DEFAULT_BUFFER_LIMIT = 10 * 1024 * 1024;
|
||||
|
||||
private final int bufferLimit;
|
||||
private final int bufferLimitBytes;
|
||||
private volatile HttpResponse response;
|
||||
private volatile SimpleInputBuffer buf;
|
||||
|
||||
/**
|
||||
* Creates a new instance of this consumer with a buffer limit of {@link #DEFAULT_BUFFER_LIMIT}
|
||||
*/
|
||||
public HeapBufferedAsyncResponseConsumer() {
|
||||
this.bufferLimit = DEFAULT_BUFFER_LIMIT;
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a new instance of this consumer with the provided buffer limit
|
||||
*/
|
||||
|
@ -64,7 +54,14 @@ public class HeapBufferedAsyncResponseConsumer extends AbstractAsyncResponseCons
|
|||
if (bufferLimit <= 0) {
|
||||
throw new IllegalArgumentException("bufferLimit must be greater than 0");
|
||||
}
|
||||
this.bufferLimit = bufferLimit;
|
||||
this.bufferLimitBytes = bufferLimit;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the limit of the buffer.
|
||||
*/
|
||||
public int getBufferLimit() {
|
||||
return bufferLimitBytes;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@@ -75,9 +72,9 @@ public class HeapBufferedAsyncResponseConsumer extends AbstractAsyncResponseCons
     @Override
    protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) throws IOException {
        long len = entity.getContentLength();
-        if (len > bufferLimit) {
+        if (len > bufferLimitBytes) {
            throw new ContentTooLongException("entity content is too long [" + len +
-                "] for the configured buffer limit [" + bufferLimit + "]");
+                "] for the configured buffer limit [" + bufferLimitBytes + "]");
        }
        if (len < 0) {
            len = 4096;
|
||||
|
|
|
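With the no-arg constructor removed above, HeapBufferedAsyncResponseConsumer must always be given an explicit limit in bytes, and a response whose content-length exceeds it fails with ContentTooLongException. A small hedged sketch of that behaviour, using an arbitrary 1MB limit and a hypothetical example class name:

// Hypothetical example; only the constructor and getBufferLimit() shown in the hunks above are used.
import org.elasticsearch.client.HeapBufferedAsyncResponseConsumer;

public final class HeapBufferLimitExample {
    public static void main(String[] args) {
        int limitBytes = 1024 * 1024; // arbitrary 1MB limit for this example
        HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer(limitBytes);
        // The consumer reports the limit it was built with; responses whose content-length
        // exceeds it are rejected with a ContentTooLongException while being consumed.
        System.out.println("buffer limit: " + consumer.getBufferLimit());
    }
}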
@ -0,0 +1,65 @@
|
|||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.client;
|
||||
|
||||
import org.apache.http.HttpResponse;
|
||||
import org.apache.http.nio.protocol.HttpAsyncResponseConsumer;
|
||||
|
||||
import static org.elasticsearch.client.HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.DEFAULT_BUFFER_LIMIT;
|
||||
|
||||
/**
|
||||
* Factory used to create instances of {@link HttpAsyncResponseConsumer}. Each request retry needs its own instance of the
|
||||
* consumer object. Users can implement this interface and pass their own instance to the specialized
|
||||
* performRequest methods that accept an {@link HttpAsyncResponseConsumerFactory} instance as argument.
|
||||
*/
|
||||
interface HttpAsyncResponseConsumerFactory {
|
||||
|
||||
/**
|
||||
* Creates the default type of {@link HttpAsyncResponseConsumer}, based on heap buffering with a buffer limit of 100MB.
|
||||
*/
|
||||
HttpAsyncResponseConsumerFactory DEFAULT = new HeapBufferedResponseConsumerFactory(DEFAULT_BUFFER_LIMIT);
|
||||
|
||||
/**
|
||||
* Creates the {@link HttpAsyncResponseConsumer}, called once per request attempt.
|
||||
*/
|
||||
HttpAsyncResponseConsumer<HttpResponse> createHttpAsyncResponseConsumer();
|
||||
|
||||
/**
|
||||
* Default factory used to create instances of {@link HttpAsyncResponseConsumer}.
|
||||
* Creates one instance of {@link HeapBufferedAsyncResponseConsumer} for each request attempt, with a configurable
|
||||
* buffer limit which defaults to 100MB.
|
||||
*/
|
||||
class HeapBufferedResponseConsumerFactory implements HttpAsyncResponseConsumerFactory {
|
||||
|
||||
//default buffer limit is 100MB
|
||||
static final int DEFAULT_BUFFER_LIMIT = 100 * 1024 * 1024;
|
||||
|
||||
private final int bufferLimit;
|
||||
|
||||
public HeapBufferedResponseConsumerFactory(int bufferLimitBytes) {
|
||||
this.bufferLimit = bufferLimitBytes;
|
||||
}
|
||||
|
||||
@Override
|
||||
public HttpAsyncResponseConsumer<HttpResponse> createHttpAsyncResponseConsumer() {
|
||||
return new HeapBufferedAsyncResponseConsumer(bufferLimit);
|
||||
}
|
||||
}
|
||||
}
|
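The factory above replaces passing a single HttpAsyncResponseConsumer to the RestClient: every retry of a request now gets its own consumer instance. A hedged usage sketch follows; the host, endpoint, and 30MB limit are placeholder values, and the example lives in the org.elasticsearch.client package because the interface is package-private at this point.

// Hypothetical example class; the performRequest overload and factory constructor are taken from the hunks in this diff.
package org.elasticsearch.client;

import org.apache.http.HttpHost;

import java.io.IOException;
import java.util.Collections;

public final class ConsumerFactoryUsageExample {
    public static void main(String[] args) throws IOException {
        // Placeholder host; point this at a real Elasticsearch node.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            // Allow responses of up to 30MB for this request; a fresh consumer is created per retry.
            HttpAsyncResponseConsumerFactory consumerFactory =
                new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(30 * 1024 * 1024);
            Response response = client.performRequest(
                "GET", "/_cluster/health", Collections.<String, String>emptyMap(), null, consumerFactory);
            System.out.println(response.getStatusLine());
        }
    }
}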
|
@ -143,7 +143,7 @@ public class RestClient implements Closeable {
|
|||
* @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
|
||||
*/
|
||||
public Response performRequest(String method, String endpoint, Header... headers) throws IOException {
|
||||
return performRequest(method, endpoint, Collections.<String, String>emptyMap(), (HttpEntity)null, headers);
|
||||
return performRequest(method, endpoint, Collections.<String, String>emptyMap(), null, headers);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -165,9 +165,9 @@ public class RestClient implements Closeable {
|
|||
|
||||
/**
|
||||
* Sends a request to the Elasticsearch cluster that the client points to and waits for the corresponding response
|
||||
* to be returned. Shortcut to {@link #performRequest(String, String, Map, HttpEntity, HttpAsyncResponseConsumer, Header...)}
|
||||
* which doesn't require specifying an {@link HttpAsyncResponseConsumer} instance, {@link HeapBufferedAsyncResponseConsumer}
|
||||
* will be used to consume the response body.
|
||||
* to be returned. Shortcut to {@link #performRequest(String, String, Map, HttpEntity, HttpAsyncResponseConsumerFactory, Header...)}
|
||||
* which doesn't require specifying an {@link HttpAsyncResponseConsumerFactory} instance,
|
||||
* {@link HttpAsyncResponseConsumerFactory} will be used to create the needed instances of {@link HttpAsyncResponseConsumer}.
|
||||
*
|
||||
* @param method the http method
|
||||
* @param endpoint the path of the request (without host and port)
|
||||
|
@ -181,8 +181,7 @@ public class RestClient implements Closeable {
|
|||
*/
|
||||
public Response performRequest(String method, String endpoint, Map<String, String> params,
|
||||
HttpEntity entity, Header... headers) throws IOException {
|
||||
HttpAsyncResponseConsumer<HttpResponse> responseConsumer = new HeapBufferedAsyncResponseConsumer();
|
||||
return performRequest(method, endpoint, params, entity, responseConsumer, headers);
|
||||
return performRequest(method, endpoint, params, entity, HttpAsyncResponseConsumerFactory.DEFAULT, headers);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -196,8 +195,9 @@ public class RestClient implements Closeable {
|
|||
* @param endpoint the path of the request (without host and port)
|
||||
* @param params the query_string parameters
|
||||
* @param entity the body of the request, null if not applicable
|
||||
* @param responseConsumer the {@link HttpAsyncResponseConsumer} callback. Controls how the response
|
||||
* body gets streamed from a non-blocking HTTP connection on the client side.
|
||||
* @param httpAsyncResponseConsumerFactory the {@link HttpAsyncResponseConsumerFactory} used to create one
|
||||
* {@link HttpAsyncResponseConsumer} callback per retry. Controls how the response body gets streamed from a non-blocking HTTP
|
||||
* connection on the client side.
|
||||
* @param headers the optional request headers
|
||||
* @return the response returned by Elasticsearch
|
||||
* @throws IOException in case of a problem or the connection was aborted
|
||||
|
@ -205,10 +205,10 @@ public class RestClient implements Closeable {
|
|||
* @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
|
||||
*/
|
||||
public Response performRequest(String method, String endpoint, Map<String, String> params,
|
||||
HttpEntity entity, HttpAsyncResponseConsumer<HttpResponse> responseConsumer,
|
||||
HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
|
||||
Header... headers) throws IOException {
|
||||
SyncResponseListener listener = new SyncResponseListener(maxRetryTimeoutMillis);
|
||||
performRequestAsync(method, endpoint, params, entity, responseConsumer, listener, headers);
|
||||
performRequestAsync(method, endpoint, params, entity, httpAsyncResponseConsumerFactory, listener, headers);
|
||||
return listener.get();
|
||||
}
|
||||
|
||||
|
@ -245,9 +245,9 @@ public class RestClient implements Closeable {
|
|||
/**
|
||||
* Sends a request to the Elasticsearch cluster that the client points to. Doesn't wait for the response, instead
|
||||
* the provided {@link ResponseListener} will be notified upon completion or failure.
|
||||
* Shortcut to {@link #performRequestAsync(String, String, Map, HttpEntity, HttpAsyncResponseConsumer, ResponseListener, Header...)}
|
||||
* which doesn't require specifying an {@link HttpAsyncResponseConsumer} instance, {@link HeapBufferedAsyncResponseConsumer}
|
||||
* will be used to consume the response body.
|
||||
* Shortcut to {@link #performRequestAsync(String, String, Map, HttpEntity, HttpAsyncResponseConsumerFactory, ResponseListener,
|
||||
* Header...)} which doesn't require specifying an {@link HttpAsyncResponseConsumerFactory} instance,
|
||||
* {@link HttpAsyncResponseConsumerFactory} will be used to create the needed instances of {@link HttpAsyncResponseConsumer}.
|
||||
*
|
||||
* @param method the http method
|
||||
* @param endpoint the path of the request (without host and port)
|
||||
|
@ -258,8 +258,7 @@ public class RestClient implements Closeable {
|
|||
*/
|
||||
public void performRequestAsync(String method, String endpoint, Map<String, String> params,
|
||||
HttpEntity entity, ResponseListener responseListener, Header... headers) {
|
||||
HttpAsyncResponseConsumer<HttpResponse> responseConsumer = new HeapBufferedAsyncResponseConsumer();
|
||||
performRequestAsync(method, endpoint, params, entity, responseConsumer, responseListener, headers);
|
||||
performRequestAsync(method, endpoint, params, entity, HttpAsyncResponseConsumerFactory.DEFAULT, responseListener, headers);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -274,29 +273,31 @@ public class RestClient implements Closeable {
|
|||
* @param endpoint the path of the request (without host and port)
|
||||
* @param params the query_string parameters
|
||||
* @param entity the body of the request, null if not applicable
|
||||
* @param responseConsumer the {@link HttpAsyncResponseConsumer} callback. Controls how the response
|
||||
* body gets streamed from a non-blocking HTTP connection on the client side.
|
||||
* @param httpAsyncResponseConsumerFactory the {@link HttpAsyncResponseConsumerFactory} used to create one
|
||||
* {@link HttpAsyncResponseConsumer} callback per retry. Controls how the response body gets streamed from a non-blocking HTTP
|
||||
* connection on the client side.
|
||||
* @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
|
||||
* @param headers the optional request headers
|
||||
*/
|
||||
public void performRequestAsync(String method, String endpoint, Map<String, String> params,
|
||||
HttpEntity entity, HttpAsyncResponseConsumer<HttpResponse> responseConsumer,
|
||||
HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
|
||||
ResponseListener responseListener, Header... headers) {
|
||||
URI uri = buildUri(pathPrefix, endpoint, params);
|
||||
HttpRequestBase request = createHttpRequest(method, uri, entity);
|
||||
setHeaders(request, headers);
|
||||
FailureTrackingResponseListener failureTrackingResponseListener = new FailureTrackingResponseListener(responseListener);
|
||||
long startTime = System.nanoTime();
|
||||
performRequestAsync(startTime, nextHost().iterator(), request, responseConsumer, failureTrackingResponseListener);
|
||||
performRequestAsync(startTime, nextHost().iterator(), request, httpAsyncResponseConsumerFactory, failureTrackingResponseListener);
|
||||
}
|
||||
|
||||
private void performRequestAsync(final long startTime, final Iterator<HttpHost> hosts, final HttpRequestBase request,
|
||||
final HttpAsyncResponseConsumer<HttpResponse> responseConsumer,
|
||||
final HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
|
||||
final FailureTrackingResponseListener listener) {
|
||||
final HttpHost host = hosts.next();
|
||||
//we stream the request body if the entity allows for it
|
||||
HttpAsyncRequestProducer requestProducer = HttpAsyncMethods.create(host, request);
|
||||
client.execute(requestProducer, responseConsumer, new FutureCallback<HttpResponse>() {
|
||||
HttpAsyncResponseConsumer<HttpResponse> asyncResponseConsumer = httpAsyncResponseConsumerFactory.createHttpAsyncResponseConsumer();
|
||||
client.execute(requestProducer, asyncResponseConsumer, new FutureCallback<HttpResponse>() {
|
||||
@Override
|
||||
public void completed(HttpResponse httpResponse) {
|
||||
try {
|
||||
|
@ -346,7 +347,7 @@ public class RestClient implements Closeable {
|
|||
} else {
|
||||
listener.trackFailure(exception);
|
||||
request.reset();
|
||||
performRequestAsync(startTime, hosts, request, responseConsumer, listener);
|
||||
performRequestAsync(startTime, hosts, request, httpAsyncResponseConsumerFactory, listener);
|
||||
}
|
||||
} else {
|
||||
listener.onDefinitiveFailure(exception);
|
||||
|
@ -510,6 +511,7 @@ public class RestClient implements Closeable {
|
|||
|
||||
private static URI buildUri(String pathPrefix, String path, Map<String, String> params) {
|
||||
Objects.requireNonNull(params, "params must not be null");
|
||||
Objects.requireNonNull(path, "path must not be null");
|
||||
try {
|
||||
String fullPath;
|
||||
if (pathPrefix != null) {
|
||||
|
|
|
@ -32,7 +32,6 @@ import org.apache.http.nio.ContentDecoder;
|
|||
import org.apache.http.nio.IOControl;
|
||||
import org.apache.http.protocol.HttpContext;
|
||||
|
||||
import static org.elasticsearch.client.HeapBufferedAsyncResponseConsumer.DEFAULT_BUFFER_LIMIT;
|
||||
import static org.junit.Assert.assertEquals;
|
||||
import static org.junit.Assert.assertSame;
|
||||
import static org.junit.Assert.assertTrue;
|
||||
|
@ -45,13 +44,14 @@ public class HeapBufferedAsyncResponseConsumerTests extends RestClientTestCase {
|
|||
|
||||
//maximum buffer that this test ends up allocating is 50MB
|
||||
private static final int MAX_TEST_BUFFER_SIZE = 50 * 1024 * 1024;
|
||||
private static final int TEST_BUFFER_LIMIT = 10 * 1024 * 1024;
|
||||
|
||||
public void testResponseProcessing() throws Exception {
|
||||
ContentDecoder contentDecoder = mock(ContentDecoder.class);
|
||||
IOControl ioControl = mock(IOControl.class);
|
||||
HttpContext httpContext = mock(HttpContext.class);
|
||||
|
||||
HeapBufferedAsyncResponseConsumer consumer = spy(new HeapBufferedAsyncResponseConsumer());
|
||||
HeapBufferedAsyncResponseConsumer consumer = spy(new HeapBufferedAsyncResponseConsumer(TEST_BUFFER_LIMIT));
|
||||
|
||||
ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1);
|
||||
StatusLine statusLine = new BasicStatusLine(protocolVersion, 200, "OK");
|
||||
|
@ -74,8 +74,8 @@ public class HeapBufferedAsyncResponseConsumerTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
public void testDefaultBufferLimit() throws Exception {
|
||||
HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer();
|
||||
bufferLimitTest(consumer, DEFAULT_BUFFER_LIMIT);
|
||||
HeapBufferedAsyncResponseConsumer consumer = new HeapBufferedAsyncResponseConsumer(TEST_BUFFER_LIMIT);
|
||||
bufferLimitTest(consumer, TEST_BUFFER_LIMIT);
|
||||
}
|
||||
|
||||
public void testConfiguredBufferLimit() throws Exception {
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
|
||||
package org.elasticsearch.client;
|
||||
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomInts;
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomNumbers;
|
||||
import org.apache.http.HttpEntity;
|
||||
import org.apache.http.HttpEntityEnclosingRequest;
|
||||
import org.apache.http.HttpHost;
|
||||
|
@ -62,7 +62,7 @@ public class RequestLoggerTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
HttpRequestBase request;
|
||||
int requestType = RandomInts.randomIntBetween(getRandom(), 0, 7);
|
||||
int requestType = RandomNumbers.randomIntBetween(getRandom(), 0, 7);
|
||||
switch(requestType) {
|
||||
case 0:
|
||||
request = new HttpGetWithEntity(uri);
|
||||
|
@ -99,7 +99,7 @@ public class RequestLoggerTests extends RestClientTestCase {
|
|||
expected += " -d '" + requestBody + "'";
|
||||
HttpEntityEnclosingRequest enclosingRequest = (HttpEntityEnclosingRequest) request;
|
||||
HttpEntity entity;
|
||||
switch(RandomInts.randomIntBetween(getRandom(), 0, 3)) {
|
||||
switch(RandomNumbers.randomIntBetween(getRandom(), 0, 3)) {
|
||||
case 0:
|
||||
entity = new StringEntity(requestBody, StandardCharsets.UTF_8);
|
||||
break;
|
||||
|
@ -128,12 +128,12 @@ public class RequestLoggerTests extends RestClientTestCase {
|
|||
|
||||
public void testTraceResponse() throws IOException {
|
||||
ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1);
|
||||
int statusCode = RandomInts.randomIntBetween(getRandom(), 200, 599);
|
||||
int statusCode = RandomNumbers.randomIntBetween(getRandom(), 200, 599);
|
||||
String reasonPhrase = "REASON";
|
||||
BasicStatusLine statusLine = new BasicStatusLine(protocolVersion, statusCode, reasonPhrase);
|
||||
String expected = "# " + statusLine.toString();
|
||||
BasicHttpResponse httpResponse = new BasicHttpResponse(statusLine);
|
||||
int numHeaders = RandomInts.randomIntBetween(getRandom(), 0, 3);
|
||||
int numHeaders = RandomNumbers.randomIntBetween(getRandom(), 0, 3);
|
||||
for (int i = 0; i < numHeaders; i++) {
|
||||
httpResponse.setHeader("header" + i, "value");
|
||||
expected += "\n# header" + i + ": value";
|
||||
|
|
|
@ -0,0 +1,210 @@
|
|||
/*
|
||||
* Licensed to Elasticsearch under one or more contributor
|
||||
* license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright
|
||||
* ownership. Elasticsearch licenses this file to you under
|
||||
* the Apache License, Version 2.0 (the "License"); you may
|
||||
* not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing,
|
||||
* software distributed under the License is distributed on an
|
||||
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
|
||||
* KIND, either express or implied. See the License for the
|
||||
* specific language governing permissions and limitations
|
||||
* under the License.
|
||||
*/
|
||||
|
||||
package org.elasticsearch.client;
|
||||
|
||||
import com.sun.net.httpserver.HttpExchange;
|
||||
import com.sun.net.httpserver.HttpHandler;
|
||||
import com.sun.net.httpserver.HttpServer;
|
||||
import org.apache.http.HttpHost;
|
||||
import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement;
|
||||
import org.junit.AfterClass;
|
||||
import org.junit.Before;
|
||||
import org.junit.BeforeClass;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.net.InetAddress;
|
||||
import java.net.InetSocketAddress;
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.util.concurrent.CopyOnWriteArrayList;
|
||||
import java.util.concurrent.CountDownLatch;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes;
|
||||
import static org.elasticsearch.client.RestClientTestUtil.randomErrorNoRetryStatusCode;
|
||||
import static org.elasticsearch.client.RestClientTestUtil.randomOkStatusCode;
|
||||
import static org.junit.Assert.assertEquals;
|
||||
import static org.junit.Assert.assertTrue;
|
||||
|
||||
/**
|
||||
* Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}.
|
||||
* Works against real http servers, multiple hosts. Also tests failover by randomly shutting down hosts.
|
||||
*/
|
||||
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
|
||||
@IgnoreJRERequirement
|
||||
public class RestClientMultipleHostsIntegTests extends RestClientTestCase {
|
||||
|
||||
private static HttpServer[] httpServers;
|
||||
private static RestClient restClient;
|
||||
private static String pathPrefix;
|
||||
|
||||
@BeforeClass
|
||||
public static void startHttpServer() throws Exception {
|
||||
String pathPrefixWithoutLeadingSlash;
|
||||
if (randomBoolean()) {
|
||||
pathPrefixWithoutLeadingSlash = "testPathPrefix/" + randomAsciiOfLengthBetween(1, 5);
|
||||
pathPrefix = "/" + pathPrefixWithoutLeadingSlash;
|
||||
} else {
|
||||
pathPrefix = pathPrefixWithoutLeadingSlash = "";
|
||||
}
|
||||
int numHttpServers = randomIntBetween(2, 4);
|
||||
httpServers = new HttpServer[numHttpServers];
|
||||
HttpHost[] httpHosts = new HttpHost[numHttpServers];
|
||||
for (int i = 0; i < numHttpServers; i++) {
|
||||
HttpServer httpServer = createHttpServer();
|
||||
httpServers[i] = httpServer;
|
||||
httpHosts[i] = new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort());
|
||||
}
|
||||
RestClientBuilder restClientBuilder = RestClient.builder(httpHosts);
|
||||
if (pathPrefix.length() > 0) {
|
||||
restClientBuilder.setPathPrefix((randomBoolean() ? "/" : "") + pathPrefixWithoutLeadingSlash);
|
||||
}
|
||||
restClient = restClientBuilder.build();
|
||||
}
|
||||
|
||||
private static HttpServer createHttpServer() throws Exception {
|
||||
HttpServer httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0);
|
||||
httpServer.start();
|
||||
//returns a different status code depending on the path
|
||||
for (int statusCode : getAllStatusCodes()) {
|
||||
httpServer.createContext(pathPrefix + "/" + statusCode, new ResponseHandler(statusCode));
|
||||
}
|
||||
return httpServer;
|
||||
}
|
||||
|
||||
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
|
||||
@IgnoreJRERequirement
|
||||
private static class ResponseHandler implements HttpHandler {
|
||||
private final int statusCode;
|
||||
|
||||
ResponseHandler(int statusCode) {
|
||||
this.statusCode = statusCode;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void handle(HttpExchange httpExchange) throws IOException {
|
||||
httpExchange.getRequestBody().close();
|
||||
httpExchange.sendResponseHeaders(statusCode, -1);
|
||||
httpExchange.close();
|
||||
}
|
||||
}
|
||||
|
||||
@AfterClass
|
||||
public static void stopHttpServers() throws IOException {
|
||||
restClient.close();
|
||||
restClient = null;
|
||||
for (HttpServer httpServer : httpServers) {
|
||||
httpServer.stop(0);
|
||||
}
|
||||
httpServers = null;
|
||||
}
|
||||
|
||||
@Before
|
||||
public void stopRandomHost() {
|
||||
//verify that shutting down some hosts doesn't matter as long as one working host is left behind
|
||||
if (httpServers.length > 1 && randomBoolean()) {
|
||||
List<HttpServer> updatedHttpServers = new ArrayList<>(httpServers.length - 1);
|
||||
int nodeIndex = randomInt(httpServers.length - 1);
|
||||
for (int i = 0; i < httpServers.length; i++) {
|
||||
HttpServer httpServer = httpServers[i];
|
||||
if (i == nodeIndex) {
|
||||
httpServer.stop(0);
|
||||
} else {
|
||||
updatedHttpServers.add(httpServer);
|
||||
}
|
||||
}
|
||||
httpServers = updatedHttpServers.toArray(new HttpServer[updatedHttpServers.size()]);
|
||||
}
|
||||
}
|
||||
|
||||
public void testSyncRequests() throws IOException {
|
||||
int numRequests = randomIntBetween(5, 20);
|
||||
for (int i = 0; i < numRequests; i++) {
|
||||
final String method = RestClientTestUtil.randomHttpMethod(getRandom());
|
||||
//we don't test status codes that are subject to retries as they interfere with hosts being stopped
|
||||
final int statusCode = randomBoolean() ? randomOkStatusCode(getRandom()) : randomErrorNoRetryStatusCode(getRandom());
|
||||
Response response;
|
||||
try {
|
||||
response = restClient.performRequest(method, "/" + statusCode);
|
||||
} catch(ResponseException responseException) {
|
||||
response = responseException.getResponse();
|
||||
}
|
||||
assertEquals(method, response.getRequestLine().getMethod());
|
||||
assertEquals(statusCode, response.getStatusLine().getStatusCode());
|
||||
assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + statusCode, response.getRequestLine().getUri());
|
||||
}
|
||||
}
|
||||
|
||||
public void testAsyncRequests() throws Exception {
|
||||
int numRequests = randomIntBetween(5, 20);
|
||||
final CountDownLatch latch = new CountDownLatch(numRequests);
|
||||
final List<TestResponse> responses = new CopyOnWriteArrayList<>();
|
||||
for (int i = 0; i < numRequests; i++) {
|
||||
final String method = RestClientTestUtil.randomHttpMethod(getRandom());
|
||||
//we don't test status codes that are subject to retries as they interfere with hosts being stopped
|
||||
final int statusCode = randomBoolean() ? randomOkStatusCode(getRandom()) : randomErrorNoRetryStatusCode(getRandom());
|
||||
restClient.performRequestAsync(method, "/" + statusCode, new ResponseListener() {
|
||||
@Override
|
||||
public void onSuccess(Response response) {
|
||||
responses.add(new TestResponse(method, statusCode, response));
|
||||
latch.countDown();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception exception) {
|
||||
responses.add(new TestResponse(method, statusCode, exception));
|
||||
latch.countDown();
|
||||
}
|
||||
});
|
||||
}
|
||||
assertTrue(latch.await(5, TimeUnit.SECONDS));
|
||||
|
||||
assertEquals(numRequests, responses.size());
|
||||
for (TestResponse testResponse : responses) {
|
||||
Response response = testResponse.getResponse();
|
||||
assertEquals(testResponse.method, response.getRequestLine().getMethod());
|
||||
assertEquals(testResponse.statusCode, response.getStatusLine().getStatusCode());
|
||||
assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + testResponse.statusCode,
|
||||
response.getRequestLine().getUri());
|
||||
}
|
||||
}
|
||||
|
||||
private static class TestResponse {
|
||||
private final String method;
|
||||
private final int statusCode;
|
||||
private final Object response;
|
||||
|
||||
TestResponse(String method, int statusCode, Object response) {
|
||||
this.method = method;
|
||||
this.statusCode = statusCode;
|
||||
this.response = response;
|
||||
}
|
||||
|
||||
Response getResponse() {
|
||||
if (response instanceof Response) {
|
||||
return (Response) response;
|
||||
}
|
||||
if (response instanceof ResponseException) {
|
||||
return ((ResponseException) response).getResponse();
|
||||
}
|
||||
throw new AssertionError("unexpected response " + response.getClass());
|
||||
}
|
||||
}
|
||||
}
|
|
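The integration test above exercises the multi-host builder against randomly started servers; condensed into ordinary usage it looks roughly like the sketch below, where the host names, header, and path prefix are placeholders rather than anything prescribed by this change.

// Hypothetical example; builder(), setDefaultHeaders(), setPathPrefix(), build() and performRequest() are the calls used by the test above.
import org.apache.http.Header;
import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.io.IOException;

public final class MultiHostClientExample {
    public static void main(String[] args) throws IOException {
        // Placeholder hosts; the client round-robins across them and retries on another host after a failure.
        try (RestClient client = RestClient.builder(
                new HttpHost("es1.example.com", 9200),
                new HttpHost("es2.example.com", 9200))
            .setDefaultHeaders(new Header[]{new BasicHeader("X-App", "demo")})
            .setPathPrefix("/proxy/es") // optional, as exercised by the tests above
            .build()) {
            Response response = client.performRequest("GET", "/");
            System.out.println(response.getStatusLine());
        }
    }
}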
@ -19,7 +19,7 @@
|
|||
|
||||
package org.elasticsearch.client;
|
||||
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomInts;
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomNumbers;
|
||||
import org.apache.http.Header;
|
||||
import org.apache.http.HttpHost;
|
||||
import org.apache.http.HttpResponse;
|
||||
|
@ -95,7 +95,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
|
|||
return null;
|
||||
}
|
||||
});
|
||||
int numHosts = RandomInts.randomIntBetween(getRandom(), 2, 5);
|
||||
int numHosts = RandomNumbers.randomIntBetween(getRandom(), 2, 5);
|
||||
httpHosts = new HttpHost[numHosts];
|
||||
for (int i = 0; i < numHosts; i++) {
|
||||
httpHosts[i] = new HttpHost("localhost", 9200 + i);
|
||||
|
@ -105,7 +105,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
public void testRoundRobinOkStatusCodes() throws IOException {
|
||||
int numIters = RandomInts.randomIntBetween(getRandom(), 1, 5);
|
||||
int numIters = RandomNumbers.randomIntBetween(getRandom(), 1, 5);
|
||||
for (int i = 0; i < numIters; i++) {
|
||||
Set<HttpHost> hostsSet = new HashSet<>();
|
||||
Collections.addAll(hostsSet, httpHosts);
|
||||
|
@ -121,7 +121,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
public void testRoundRobinNoRetryErrors() throws IOException {
|
||||
int numIters = RandomInts.randomIntBetween(getRandom(), 1, 5);
|
||||
int numIters = RandomNumbers.randomIntBetween(getRandom(), 1, 5);
|
||||
for (int i = 0; i < numIters; i++) {
|
||||
Set<HttpHost> hostsSet = new HashSet<>();
|
||||
Collections.addAll(hostsSet, httpHosts);
|
||||
|
@ -198,7 +198,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
|
|||
assertEquals("every host should have been used but some weren't: " + hostsSet, 0, hostsSet.size());
|
||||
}
|
||||
|
||||
int numIters = RandomInts.randomIntBetween(getRandom(), 2, 5);
|
||||
int numIters = RandomNumbers.randomIntBetween(getRandom(), 2, 5);
|
||||
for (int i = 1; i <= numIters; i++) {
|
||||
//check that one different host is resurrected at each new attempt
|
||||
Set<HttpHost> hostsSet = new HashSet<>();
|
||||
|
@ -228,7 +228,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
|
|||
if (getRandom().nextBoolean()) {
|
||||
//mark one host back alive through a successful request and check that all requests after that are sent to it
|
||||
HttpHost selectedHost = null;
|
||||
int iters = RandomInts.randomIntBetween(getRandom(), 2, 10);
|
||||
int iters = RandomNumbers.randomIntBetween(getRandom(), 2, 10);
|
||||
for (int y = 0; y < iters; y++) {
|
||||
int statusCode = randomErrorNoRetryStatusCode(getRandom());
|
||||
Response response;
|
||||
|
@ -269,7 +269,7 @@ public class RestClientMultipleHostsTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
private static String randomErrorRetryEndpoint() {
|
||||
switch(RandomInts.randomIntBetween(getRandom(), 0, 3)) {
|
||||
switch(RandomNumbers.randomIntBetween(getRandom(), 0, 3)) {
|
||||
case 0:
|
||||
return "/" + randomErrorRetryStatusCode(getRandom());
|
||||
case 1:
|
||||
|
|
|
@ -20,7 +20,6 @@
|
|||
package org.elasticsearch.client;
|
||||
|
||||
import com.sun.net.httpserver.Headers;
|
||||
import com.sun.net.httpserver.HttpContext;
|
||||
import com.sun.net.httpserver.HttpExchange;
|
||||
import com.sun.net.httpserver.HttpHandler;
|
||||
import com.sun.net.httpserver.HttpServer;
|
||||
|
@ -45,19 +44,13 @@ import java.util.HashSet;
|
|||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.CopyOnWriteArrayList;
|
||||
import java.util.concurrent.CountDownLatch;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes;
|
||||
import static org.elasticsearch.client.RestClientTestUtil.getHttpMethods;
|
||||
import static org.elasticsearch.client.RestClientTestUtil.randomStatusCode;
|
||||
import static org.hamcrest.CoreMatchers.equalTo;
|
||||
import static org.junit.Assert.assertEquals;
|
||||
import static org.junit.Assert.assertNotNull;
|
||||
import static org.junit.Assert.assertThat;
|
||||
import static org.junit.Assert.assertTrue;
|
||||
import static org.junit.Assert.fail;
|
||||
|
||||
/**
|
||||
* Integration test to check interaction between {@link RestClient} and {@link org.apache.http.client.HttpClient}.
|
||||
|
@ -65,28 +58,42 @@ import static org.junit.Assert.fail;
|
|||
*/
|
||||
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
|
||||
@IgnoreJRERequirement
|
||||
public class RestClientIntegTests extends RestClientTestCase {
|
||||
public class RestClientSingleHostIntegTests extends RestClientTestCase {
|
||||
|
||||
private static HttpServer httpServer;
|
||||
private static RestClient restClient;
|
||||
private static String pathPrefix;
|
||||
private static Header[] defaultHeaders;
|
||||
|
||||
@BeforeClass
|
||||
public static void startHttpServer() throws Exception {
|
||||
httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0);
|
||||
String pathPrefixWithoutLeadingSlash;
|
||||
if (randomBoolean()) {
|
||||
pathPrefixWithoutLeadingSlash = "testPathPrefix/" + randomAsciiOfLengthBetween(1, 5);
|
||||
pathPrefix = "/" + pathPrefixWithoutLeadingSlash;
|
||||
} else {
|
||||
pathPrefix = pathPrefixWithoutLeadingSlash = "";
|
||||
}
|
||||
|
||||
httpServer = createHttpServer();
|
||||
int numHeaders = randomIntBetween(0, 5);
|
||||
defaultHeaders = generateHeaders("Header-default", "Header-array", numHeaders);
|
||||
RestClientBuilder restClientBuilder = RestClient.builder(
|
||||
new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort())).setDefaultHeaders(defaultHeaders);
|
||||
if (pathPrefix.length() > 0) {
|
||||
restClientBuilder.setPathPrefix((randomBoolean() ? "/" : "") + pathPrefixWithoutLeadingSlash);
|
||||
}
|
||||
restClient = restClientBuilder.build();
|
||||
}
|
||||
|
||||
private static HttpServer createHttpServer() throws Exception {
|
||||
HttpServer httpServer = HttpServer.create(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0), 0);
|
||||
httpServer.start();
|
||||
//returns a different status code depending on the path
|
||||
for (int statusCode : getAllStatusCodes()) {
|
||||
createStatusCodeContext(httpServer, statusCode);
|
||||
httpServer.createContext(pathPrefix + "/" + statusCode, new ResponseHandler(statusCode));
|
||||
}
|
||||
int numHeaders = randomIntBetween(0, 5);
|
||||
defaultHeaders = generateHeaders("Header-default", "Header-array", numHeaders);
|
||||
restClient = RestClient.builder(new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort()))
|
||||
.setDefaultHeaders(defaultHeaders).build();
|
||||
}
|
||||
|
||||
private static void createStatusCodeContext(HttpServer httpServer, final int statusCode) {
|
||||
httpServer.createContext("/" + statusCode, new ResponseHandler(statusCode));
|
||||
return httpServer;
|
||||
}
|
||||
|
||||
//animal-sniffer doesn't like our usage of com.sun.net.httpserver.* classes
|
||||
|
@ -157,7 +164,11 @@ public class RestClientIntegTests extends RestClientTestCase {
|
|||
} catch(ResponseException e) {
|
||||
esResponse = e.getResponse();
|
||||
}
|
||||
assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode));
|
||||
|
||||
assertEquals(method, esResponse.getRequestLine().getMethod());
|
||||
assertEquals(statusCode, esResponse.getStatusLine().getStatusCode());
|
||||
assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + statusCode, esResponse.getRequestLine().getUri());
|
||||
|
||||
for (final Header responseHeader : esResponse.getHeaders()) {
|
||||
final String name = responseHeader.getName();
|
||||
final String value = responseHeader.getValue();
|
||||
|
@ -197,38 +208,6 @@ public class RestClientIntegTests extends RestClientTestCase {
|
|||
bodyTest("GET");
|
||||
}
|
||||
|
||||
/**
|
||||
* Ensure that pathPrefix works as expected.
|
||||
*/
|
||||
public void testPathPrefix() throws IOException {
|
||||
// guarantee no other test setup collides with this one and lets it sneak through
|
||||
final String uniqueContextSuffix = "/testPathPrefix";
|
||||
final String pathPrefix = "base/" + randomAsciiOfLengthBetween(1, 5) + "/";
|
||||
final int statusCode = randomStatusCode(getRandom());
|
||||
|
||||
final HttpContext context =
|
||||
httpServer.createContext("/" + pathPrefix + statusCode + uniqueContextSuffix, new ResponseHandler(statusCode));
|
||||
|
||||
try (final RestClient client =
|
||||
RestClient.builder(new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort()))
|
||||
.setPathPrefix((randomBoolean() ? "/" : "") + pathPrefix).build()) {
|
||||
|
||||
for (final String method : getHttpMethods()) {
|
||||
Response esResponse;
|
||||
try {
|
||||
esResponse = client.performRequest(method, "/" + statusCode + uniqueContextSuffix);
|
||||
} catch(ResponseException e) {
|
||||
esResponse = e.getResponse();
|
||||
}
|
||||
|
||||
assertThat(esResponse.getRequestLine().getUri(), equalTo("/" + pathPrefix + statusCode + uniqueContextSuffix));
|
||||
assertThat(esResponse.getStatusLine().getStatusCode(), equalTo(statusCode));
|
||||
}
|
||||
} finally {
|
||||
httpServer.removeContext(context);
|
||||
}
|
||||
}
|
||||
|
||||
private void bodyTest(String method) throws IOException {
|
||||
String requestBody = "{ \"field\": \"value\" }";
|
||||
StringEntity entity = new StringEntity(requestBody);
|
||||
|
@ -239,60 +218,9 @@ public class RestClientIntegTests extends RestClientTestCase {
|
|||
} catch(ResponseException e) {
|
||||
esResponse = e.getResponse();
|
||||
}
|
||||
assertEquals(method, esResponse.getRequestLine().getMethod());
|
||||
assertEquals(statusCode, esResponse.getStatusLine().getStatusCode());
|
||||
assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + statusCode, esResponse.getRequestLine().getUri());
|
||||
assertEquals(requestBody, EntityUtils.toString(esResponse.getEntity()));
|
||||
}
|
||||
|
||||
public void testAsyncRequests() throws Exception {
|
||||
int numRequests = randomIntBetween(5, 20);
|
||||
final CountDownLatch latch = new CountDownLatch(numRequests);
|
||||
final List<TestResponse> responses = new CopyOnWriteArrayList<>();
|
||||
for (int i = 0; i < numRequests; i++) {
|
||||
final String method = RestClientTestUtil.randomHttpMethod(getRandom());
|
||||
final int statusCode = randomStatusCode(getRandom());
|
||||
restClient.performRequestAsync(method, "/" + statusCode, new ResponseListener() {
|
||||
@Override
|
||||
public void onSuccess(Response response) {
|
||||
responses.add(new TestResponse(method, statusCode, response));
|
||||
latch.countDown();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception exception) {
|
||||
responses.add(new TestResponse(method, statusCode, exception));
|
||||
latch.countDown();
|
||||
}
|
||||
});
|
||||
}
|
||||
assertTrue(latch.await(5, TimeUnit.SECONDS));
|
||||
|
||||
assertEquals(numRequests, responses.size());
|
||||
for (TestResponse response : responses) {
|
||||
assertEquals(response.method, response.getResponse().getRequestLine().getMethod());
|
||||
assertEquals(response.statusCode, response.getResponse().getStatusLine().getStatusCode());
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
private static class TestResponse {
|
||||
private final String method;
|
||||
private final int statusCode;
|
||||
private final Object response;
|
||||
|
||||
TestResponse(String method, int statusCode, Object response) {
|
||||
this.method = method;
|
||||
this.statusCode = statusCode;
|
||||
this.response = response;
|
||||
}
|
||||
|
||||
Response getResponse() {
|
||||
if (response instanceof Response) {
|
||||
return (Response) response;
|
||||
}
|
||||
if (response instanceof ResponseException) {
|
||||
return ((ResponseException) response).getResponse();
|
||||
}
|
||||
throw new AssertionError("unexpected response " + response.getClass());
|
||||
}
|
||||
}
|
||||
}
|
|
@ -139,6 +139,17 @@ public class RestClientSingleHostTests extends RestClientTestCase {
|
|||
restClient = new RestClient(httpClient, 10000, defaultHeaders, new HttpHost[]{httpHost}, null, failureListener);
|
||||
}
|
||||
|
||||
public void testNullPath() throws IOException {
|
||||
for (String method : getHttpMethods()) {
|
||||
try {
|
||||
restClient.performRequest(method, null);
|
||||
fail("path set to null should fail!");
|
||||
} catch (NullPointerException e) {
|
||||
assertEquals("path must not be null", e.getMessage());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Verifies the content of the {@link HttpRequest} that's internally created and passed through to the http client
|
||||
*/
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
|
||||
package org.elasticsearch.client.sniff;
|
||||
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomInts;
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomNumbers;
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomPicks;
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomStrings;
|
||||
import com.fasterxml.jackson.core.JsonFactory;
|
||||
|
@ -69,7 +69,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
|
||||
@Before
|
||||
public void startHttpServer() throws IOException {
|
||||
this.sniffRequestTimeout = RandomInts.randomIntBetween(getRandom(), 1000, 10000);
|
||||
this.sniffRequestTimeout = RandomNumbers.randomIntBetween(getRandom(), 1000, 10000);
|
||||
this.scheme = RandomPicks.randomFrom(getRandom(), ElasticsearchHostsSniffer.Scheme.values());
|
||||
if (rarely()) {
|
||||
this.sniffResponse = SniffResponse.buildFailure();
|
||||
|
@ -101,7 +101,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
assertEquals(e.getMessage(), "scheme cannot be null");
|
||||
}
|
||||
try {
|
||||
new ElasticsearchHostsSniffer(restClient, RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0),
|
||||
new ElasticsearchHostsSniffer(restClient, RandomNumbers.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0),
|
||||
ElasticsearchHostsSniffer.Scheme.HTTP);
|
||||
fail("should have failed");
|
||||
} catch (IllegalArgumentException e) {
|
||||
|
@ -175,7 +175,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
private static SniffResponse buildSniffResponse(ElasticsearchHostsSniffer.Scheme scheme) throws IOException {
|
||||
int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5);
|
||||
int numNodes = RandomNumbers.randomIntBetween(getRandom(), 1, 5);
|
||||
List<HttpHost> hosts = new ArrayList<>(numNodes);
|
||||
JsonFactory jsonFactory = new JsonFactory();
|
||||
StringWriter writer = new StringWriter();
|
||||
|
@ -205,7 +205,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
boolean isHttpEnabled = rarely() == false;
|
||||
if (isHttpEnabled) {
|
||||
String host = "host" + i;
|
||||
int port = RandomInts.randomIntBetween(getRandom(), 9200, 9299);
|
||||
int port = RandomNumbers.randomIntBetween(getRandom(), 9200, 9299);
|
||||
HttpHost httpHost = new HttpHost(host, port, scheme.toString());
|
||||
hosts.add(httpHost);
|
||||
generator.writeObjectFieldStart("http");
|
||||
|
@ -228,7 +228,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
}
|
||||
if (getRandom().nextBoolean()) {
|
||||
String[] roles = {"master", "data", "ingest"};
|
||||
int numRoles = RandomInts.randomIntBetween(getRandom(), 0, 3);
|
||||
int numRoles = RandomNumbers.randomIntBetween(getRandom(), 0, 3);
|
||||
Set<String> nodeRoles = new HashSet<>(numRoles);
|
||||
for (int j = 0; j < numRoles; j++) {
|
||||
String role;
|
||||
|
@ -242,7 +242,7 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
}
|
||||
generator.writeEndArray();
|
||||
}
|
||||
int numAttributes = RandomInts.randomIntBetween(getRandom(), 0, 3);
|
||||
int numAttributes = RandomNumbers.randomIntBetween(getRandom(), 0, 3);
|
||||
Map<String, String> attributes = new HashMap<>(numAttributes);
|
||||
for (int j = 0; j < numAttributes; j++) {
|
||||
attributes.put("attr" + j, "value" + j);
|
||||
|
@ -291,6 +291,6 @@ public class ElasticsearchHostsSnifferTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
private static int randomErrorResponseCode() {
|
||||
return RandomInts.randomIntBetween(getRandom(), 400, 599);
|
||||
return RandomNumbers.randomIntBetween(getRandom(), 400, 599);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
|
||||
package org.elasticsearch.client.sniff;
|
||||
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomInts;
|
||||
import com.carrotsearch.randomizedtesting.generators.RandomNumbers;
|
||||
import org.apache.http.HttpHost;
|
||||
import org.elasticsearch.client.RestClient;
|
||||
import org.elasticsearch.client.RestClientTestCase;
|
||||
|
@ -31,7 +31,7 @@ import static org.junit.Assert.fail;
|
|||
public class SnifferBuilderTests extends RestClientTestCase {
|
||||
|
||||
public void testBuild() throws Exception {
|
||||
int numNodes = RandomInts.randomIntBetween(getRandom(), 1, 5);
|
||||
int numNodes = RandomNumbers.randomIntBetween(getRandom(), 1, 5);
|
||||
HttpHost[] hosts = new HttpHost[numNodes];
|
||||
for (int i = 0; i < numNodes; i++) {
|
||||
hosts[i] = new HttpHost("localhost", 9200 + i);
|
||||
|
@ -46,14 +46,14 @@ public class SnifferBuilderTests extends RestClientTestCase {
|
|||
}
|
||||
|
||||
try {
|
||||
Sniffer.builder(client).setSniffIntervalMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
|
||||
Sniffer.builder(client).setSniffIntervalMillis(RandomNumbers.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
|
||||
fail("should have failed");
|
||||
} catch(IllegalArgumentException e) {
|
||||
assertEquals("sniffIntervalMillis must be greater than 0", e.getMessage());
|
||||
}
|
||||
|
||||
try {
|
||||
Sniffer.builder(client).setSniffAfterFailureDelayMillis(RandomInts.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
|
||||
Sniffer.builder(client).setSniffAfterFailureDelayMillis(RandomNumbers.randomIntBetween(getRandom(), Integer.MIN_VALUE, 0));
|
||||
fail("should have failed");
|
||||
} catch(IllegalArgumentException e) {
|
||||
assertEquals("sniffAfterFailureDelayMillis must be greater than 0", e.getMessage());
|
||||
|
@ -74,10 +74,10 @@ public class SnifferBuilderTests extends RestClientTestCase {
|
|||
|
||||
SnifferBuilder builder = Sniffer.builder(client);
|
||||
if (getRandom().nextBoolean()) {
|
||||
builder.setSniffIntervalMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
|
||||
builder.setSniffIntervalMillis(RandomNumbers.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
|
||||
}
|
||||
if (getRandom().nextBoolean()) {
|
||||
builder.setSniffAfterFailureDelayMillis(RandomInts.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
|
||||
builder.setSniffAfterFailureDelayMillis(RandomNumbers.randomIntBetween(getRandom(), 1, Integer.MAX_VALUE));
|
||||
}
|
||||
if (getRandom().nextBoolean()) {
|
||||
builder.setHostsSniffer(new MockHostsSniffer());
|
||||
|
|
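The builder validation tested above (both intervals must be greater than 0) translates into straightforward wiring of a Sniffer onto a RestClient. A hedged sketch with arbitrary interval values; it assumes, as in the released low-level client, that SnifferBuilder.build() returns a Sniffer and that the sniffer is closed alongside the client.

// Hypothetical example; Sniffer.builder(), setSniffIntervalMillis() and setSniffAfterFailureDelayMillis() appear in the tests above.
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.sniff.Sniffer;
import org.elasticsearch.client.sniff.SnifferBuilder;

public final class SnifferUsageExample {
    public static void main(String[] args) throws Exception {
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200)).build();
        SnifferBuilder builder = Sniffer.builder(client);
        builder.setSniffIntervalMillis(60_000);           // arbitrary: re-sniff the cluster every minute
        builder.setSniffAfterFailureDelayMillis(30_000);  // arbitrary: sniff sooner after a failure
        Sniffer sniffer = builder.build();
        // ... use the client; the sniffer refreshes its host list in the background ...
        sniffer.close();
        client.close();
    }
}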
|
@@ -62,10 +62,7 @@ dependencies {
   compile 'com.carrotsearch:hppc:0.7.1'

   // time handling, remove with java 8 time
-  compile 'joda-time:joda-time:2.9.4'
-  // joda 2.0 moved to using volatile fields for datetime
-  // When updating to a new version, make sure to update our copy of BaseDateTime
-  compile 'org.joda:joda-convert:1.2'
+  compile 'joda-time:joda-time:2.9.5'

   // json and yaml
   compile "org.yaml:snakeyaml:${versions.snakeyaml}"
|
||||
|
@ -158,6 +155,10 @@ thirdPartyAudit.excludes = [
|
|||
'com.fasterxml.jackson.databind.ObjectMapper',
|
||||
|
||||
// from log4j
|
||||
'com.beust.jcommander.IStringConverter',
|
||||
'com.beust.jcommander.JCommander',
|
||||
'com.conversantmedia.util.concurrent.DisruptorBlockingQueue',
|
||||
'com.conversantmedia.util.concurrent.SpinPolicy',
|
||||
'com.fasterxml.jackson.annotation.JsonInclude$Include',
|
||||
'com.fasterxml.jackson.databind.DeserializationContext',
|
||||
'com.fasterxml.jackson.databind.JsonMappingException',
|
||||
|
@ -176,6 +177,10 @@ thirdPartyAudit.excludes = [
|
|||
'com.fasterxml.jackson.dataformat.xml.JacksonXmlModule',
|
||||
'com.fasterxml.jackson.dataformat.xml.XmlMapper',
|
||||
'com.fasterxml.jackson.dataformat.xml.util.DefaultXmlPrettyPrinter',
|
||||
'com.fasterxml.jackson.databind.node.JsonNodeFactory',
|
||||
'com.fasterxml.jackson.databind.node.ObjectNode',
|
||||
'org.fusesource.jansi.Ansi',
|
||||
'org.fusesource.jansi.AnsiRenderer$Code',
|
||||
'com.lmax.disruptor.BlockingWaitStrategy',
|
||||
'com.lmax.disruptor.BusySpinWaitStrategy',
|
||||
'com.lmax.disruptor.EventFactory',
|
||||
|
@ -228,6 +233,8 @@ thirdPartyAudit.excludes = [
|
|||
'org.apache.kafka.clients.producer.Producer',
|
||||
'org.apache.kafka.clients.producer.ProducerRecord',
|
||||
'org.codehaus.stax2.XMLStreamWriter2',
|
||||
'org.jctools.queues.MessagePassingQueue$Consumer',
|
||||
'org.jctools.queues.MpscArrayQueue',
|
||||
'org.osgi.framework.AdaptPermission',
|
||||
'org.osgi.framework.AdminPermission',
|
||||
'org.osgi.framework.Bundle',
|
||||
|
@@ -247,8 +254,10 @@ thirdPartyAudit.excludes = [
   'org.noggit.JSONParser',
 ]

-// dependency license are currently checked in distribution
-dependencyLicenses.enabled = false
+dependencyLicenses {
+  mapping from: /lucene-.*/, to: 'lucene'
+  mapping from: /jackson-.*/, to: 'jackson'
+}

 if (isEclipse == false || project.path == ":core-tests") {
   task integTest(type: RandomizedTestingTask,
|
||||
|
|
|
@ -0,0 +1 @@
|
|||
5f01da7306363fad2028b916f3eab926262de928
|
|
@ -0,0 +1 @@
|
|||
39f4e6c2d68d4ef8fd4b0883d165682dedd5be52
|
|
@ -0,0 +1 @@
|
|||
8de00e382a817981b737be84cb8def687d392963
|
|
@ -0,0 +1 @@
|
|||
a3f2b4e64c61a7fc1ed8f1e5ba371933404ed98a
|
|
@ -0,0 +1 @@
|
|||
61aacb657e44a9beabf95834e106bbb96373a703
|
|
@ -0,0 +1 @@
|
|||
600de75a81e259cab0384e546d9a1d527ddba6d6
|
|
@ -0,0 +1 @@
|
|||
188774468a56a8731ca639527d721060d26ffebd
|
|
@ -0,0 +1 @@
|
|||
5afd9271e3d8f645440f48ff2487545ae5573e7e
|
|
@ -0,0 +1 @@
|
|||
0f575175e26d4d3b1095f6300cbefbbb3ee994cd
|
|
@ -0,0 +1 @@
|
|||
ee898c3d318681c9f29c56e6d9b52876be96d814
|
|
@ -0,0 +1 @@
|
|||
ea6defd322456711394b4dabcda70a217e3caacd
|
|
@ -0,0 +1 @@
|
|||
ea2de7f9753a8e19a1ec9f25a3ea65d7ce909a0e
|
|
@ -0,0 +1 @@
|
|||
0b15c6f29bfb9ec14a4615013a94bfa43a63793d
|
|
@ -0,0 +1 @@
|
|||
d89d9fa1036c38144e0b8db079ae959353847c86
|
|
@ -0,0 +1 @@
|
|||
c003c1ab0a19a02b30156ce13372cff1001d6a7d
|
|
@ -0,0 +1 @@
|
|||
a3c570bf588d7c9ca43d074db9ce9c9b8408b930
|
|
@ -0,0 +1 @@
|
|||
de54ca61f5892cf2c88ac083b3332a827beca7ff
|
|
@ -0,0 +1 @@
|
|||
cacdf81b324acd335be63798d5a3dd16e7dff9a3
|
|
@ -0,0 +1 @@
|
|||
a5cb3723bc8e0db185fc43e57b648145de27fde8
|
|
@ -1,665 +0,0 @@
|
|||
/*
|
||||
* Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
* contributor license agreements. See the NOTICE file distributed with
|
||||
* this work for additional information regarding copyright ownership.
|
||||
* The ASF licenses this file to You under the Apache license, Version 2.0
|
||||
* (the "License"); you may not use this file except in compliance with
|
||||
* the License. You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the license for the specific language governing permissions and
|
||||
* limitations under the license.
|
||||
*/
|
||||
|
||||
package org.apache.logging.log4j.core.impl;
|
||||
|
||||
import java.io.Serializable;
|
||||
import java.net.URL;
|
||||
import java.security.CodeSource;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Arrays;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.Stack;
|
||||
|
||||
import org.apache.logging.log4j.core.util.Loader;
|
||||
import org.apache.logging.log4j.status.StatusLogger;
|
||||
import org.apache.logging.log4j.util.ReflectionUtil;
|
||||
import org.apache.logging.log4j.util.Strings;
|
||||
|
||||
/**
|
||||
* Wraps a Throwable to add packaging information about each stack trace element.
|
||||
*
|
||||
* <p>
|
||||
* A proxy is used to represent a throwable that may not exist in a different class loader or JVM. When an application
|
||||
* deserializes a ThrowableProxy, the throwable may not be set, but the throwable's information is preserved in other
|
||||
* fields of the proxy like the message and stack trace.
|
||||
* </p>
|
||||
*
|
||||
* <p>
|
||||
* TODO: Move this class to org.apache.logging.log4j.core because it is used from LogEvent.
|
||||
* </p>
|
||||
* <p>
|
||||
* TODO: Deserialize: Try to rebuild Throwable if the target exception is in this class loader?
|
||||
* </p>
|
||||
*/
|
||||
public class ThrowableProxy implements Serializable {
|
||||
|
||||
private static final String CAUSED_BY_LABEL = "Caused by: ";
|
||||
private static final String SUPPRESSED_LABEL = "Suppressed: ";
|
||||
private static final String WRAPPED_BY_LABEL = "Wrapped by: ";
|
||||
|
||||
/**
|
||||
* Cached StackTracePackageElement and ClassLoader.
|
||||
* <p>
|
||||
* Consider this class private.
|
||||
* </p>
|
||||
*/
|
||||
static class CacheEntry {
|
||||
private final ExtendedClassInfo element;
|
||||
private final ClassLoader loader;
|
||||
|
||||
public CacheEntry(final ExtendedClassInfo element, final ClassLoader loader) {
|
||||
this.element = element;
|
||||
this.loader = loader;
|
||||
}
|
||||
}
|
||||
|
||||
private static final ThrowableProxy[] EMPTY_THROWABLE_PROXY_ARRAY = new ThrowableProxy[0];
|
||||
|
||||
private static final char EOL = '\n';
|
||||
|
||||
private static final long serialVersionUID = -2752771578252251910L;
|
||||
|
||||
private final ThrowableProxy causeProxy;
|
||||
|
||||
private int commonElementCount;
|
||||
|
||||
private final ExtendedStackTraceElement[] extendedStackTrace;
|
||||
|
||||
private final String localizedMessage;
|
||||
|
||||
private final String message;
|
||||
|
||||
private final String name;
|
||||
|
||||
private final ThrowableProxy[] suppressedProxies;
|
||||
|
||||
private final transient Throwable throwable;
|
||||
|
||||
/**
|
||||
* For JSON and XML IO via Jackson.
|
||||
*/
|
||||
@SuppressWarnings("unused")
|
||||
private ThrowableProxy() {
|
||||
this.throwable = null;
|
||||
this.name = null;
|
||||
this.extendedStackTrace = null;
|
||||
this.causeProxy = null;
|
||||
this.message = null;
|
||||
this.localizedMessage = null;
|
||||
this.suppressedProxies = EMPTY_THROWABLE_PROXY_ARRAY;
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructs the wrapper for the Throwable that includes packaging data.
|
||||
*
|
||||
* @param throwable
|
||||
* The Throwable to wrap, must not be null.
|
||||
*/
|
||||
public ThrowableProxy(final Throwable throwable) {
|
||||
this(throwable, null);
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructs the wrapper for the Throwable that includes packaging data.
|
||||
*
|
||||
* @param throwable
|
||||
* The Throwable to wrap, must not be null.
|
||||
* @param visited
|
||||
* The set of visited suppressed exceptions.
|
||||
*/
|
||||
private ThrowableProxy(final Throwable throwable, final Set<Throwable> visited) {
|
||||
this.throwable = throwable;
|
||||
this.name = throwable.getClass().getName();
|
||||
this.message = throwable.getMessage();
|
||||
this.localizedMessage = throwable.getLocalizedMessage();
|
||||
final Map<String, CacheEntry> map = new HashMap<>();
|
||||
final Stack<Class<?>> stack = ReflectionUtil.getCurrentStackTrace();
|
||||
this.extendedStackTrace = this.toExtendedStackTrace(stack, map, null, throwable.getStackTrace());
|
||||
final Throwable throwableCause = throwable.getCause();
|
||||
final Set<Throwable> causeVisited = new HashSet<>(1);
|
||||
this.causeProxy = throwableCause == null ? null : new ThrowableProxy(throwable, stack, map, throwableCause, visited, causeVisited);
|
||||
this.suppressedProxies = this.toSuppressedProxies(throwable, visited);
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructs the wrapper for a Throwable that is referenced as the cause by another Throwable.
|
||||
*
|
||||
* @param parent
|
||||
* The Throwable referencing this Throwable.
|
||||
* @param stack
|
||||
* The Class stack.
|
||||
* @param map
|
||||
* The cache containing the packaging data.
|
||||
* @param cause
|
||||
* The Throwable to wrap.
|
||||
* @param suppressedVisited TODO
|
||||
* @param causeVisited TODO
|
||||
*/
|
||||
private ThrowableProxy(final Throwable parent, final Stack<Class<?>> stack, final Map<String, CacheEntry> map,
|
||||
final Throwable cause, final Set<Throwable> suppressedVisited, final Set<Throwable> causeVisited) {
|
||||
causeVisited.add(cause);
|
||||
this.throwable = cause;
|
||||
this.name = cause.getClass().getName();
|
||||
this.message = this.throwable.getMessage();
|
||||
this.localizedMessage = this.throwable.getLocalizedMessage();
|
||||
this.extendedStackTrace = this.toExtendedStackTrace(stack, map, parent.getStackTrace(), cause.getStackTrace());
|
||||
final Throwable causeCause = cause.getCause();
|
||||
this.causeProxy = causeCause == null || causeVisited.contains(causeCause) ? null : new ThrowableProxy(parent,
|
||||
stack, map, causeCause, suppressedVisited, causeVisited);
|
||||
this.suppressedProxies = this.toSuppressedProxies(cause, suppressedVisited);
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(final Object obj) {
|
||||
if (this == obj) {
|
||||
return true;
|
||||
}
|
||||
if (obj == null) {
|
||||
return false;
|
||||
}
|
||||
if (this.getClass() != obj.getClass()) {
|
||||
return false;
|
||||
}
|
||||
final ThrowableProxy other = (ThrowableProxy) obj;
|
||||
if (this.causeProxy == null) {
|
||||
if (other.causeProxy != null) {
|
||||
return false;
|
||||
}
|
||||
} else if (!this.causeProxy.equals(other.causeProxy)) {
|
||||
return false;
|
||||
}
|
||||
if (this.commonElementCount != other.commonElementCount) {
|
||||
return false;
|
||||
}
|
||||
if (this.name == null) {
|
||||
if (other.name != null) {
|
||||
return false;
|
||||
}
|
||||
} else if (!this.name.equals(other.name)) {
|
||||
return false;
|
||||
}
|
||||
if (!Arrays.equals(this.extendedStackTrace, other.extendedStackTrace)) {
|
||||
return false;
|
||||
}
|
||||
if (!Arrays.equals(this.suppressedProxies, other.suppressedProxies)) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
private void formatCause(final StringBuilder sb, final String prefix, final ThrowableProxy cause, final List<String> ignorePackages) {
|
||||
formatThrowableProxy(sb, prefix, CAUSED_BY_LABEL, cause, ignorePackages);
|
||||
}
|
||||
|
||||
private void formatThrowableProxy(final StringBuilder sb, final String prefix, final String causeLabel,
|
||||
final ThrowableProxy throwableProxy, final List<String> ignorePackages) {
|
||||
if (throwableProxy == null) {
|
||||
return;
|
||||
}
|
||||
sb.append(prefix).append(causeLabel).append(throwableProxy).append(EOL);
|
||||
this.formatElements(sb, prefix, throwableProxy.commonElementCount,
|
||||
throwableProxy.getStackTrace(), throwableProxy.extendedStackTrace, ignorePackages);
|
||||
this.formatSuppressed(sb, prefix + "\t", throwableProxy.suppressedProxies, ignorePackages);
|
||||
this.formatCause(sb, prefix, throwableProxy.causeProxy, ignorePackages);
|
||||
}
|
||||
|
||||
private void formatSuppressed(final StringBuilder sb, final String prefix, final ThrowableProxy[] suppressedProxies,
|
||||
final List<String> ignorePackages) {
|
||||
if (suppressedProxies == null) {
|
||||
return;
|
||||
}
|
||||
for (final ThrowableProxy suppressedProxy : suppressedProxies) {
|
||||
final ThrowableProxy cause = suppressedProxy;
|
||||
formatThrowableProxy(sb, prefix, SUPPRESSED_LABEL, cause, ignorePackages);
|
||||
}
|
||||
}
|
||||
|
||||
private void formatElements(final StringBuilder sb, final String prefix, final int commonCount,
|
||||
final StackTraceElement[] causedTrace, final ExtendedStackTraceElement[] extStackTrace,
|
||||
final List<String> ignorePackages) {
|
||||
if (ignorePackages == null || ignorePackages.isEmpty()) {
|
||||
for (final ExtendedStackTraceElement element : extStackTrace) {
|
||||
this.formatEntry(element, sb, prefix);
|
||||
}
|
||||
} else {
|
||||
int count = 0;
|
||||
for (int i = 0; i < extStackTrace.length; ++i) {
|
||||
if (!this.ignoreElement(causedTrace[i], ignorePackages)) {
|
||||
if (count > 0) {
|
||||
appendSuppressedCount(sb, prefix, count);
|
||||
count = 0;
|
||||
}
|
||||
this.formatEntry(extStackTrace[i], sb, prefix);
|
||||
} else {
|
||||
++count;
|
||||
}
|
||||
}
|
||||
if (count > 0) {
|
||||
appendSuppressedCount(sb, prefix, count);
|
||||
}
|
||||
}
|
||||
if (commonCount != 0) {
|
||||
sb.append(prefix).append("\t... ").append(commonCount).append(" more").append(EOL);
|
||||
}
|
||||
}
|
||||
|
||||
private void appendSuppressedCount(final StringBuilder sb, final String prefix, final int count) {
|
||||
sb.append(prefix);
|
||||
if (count == 1) {
|
||||
sb.append("\t....").append(EOL);
|
||||
} else {
|
||||
sb.append("\t... suppressed ").append(count).append(" lines").append(EOL);
|
||||
}
|
||||
}
|
||||
|
||||
private void formatEntry(final ExtendedStackTraceElement extStackTraceElement, final StringBuilder sb, final String prefix) {
|
||||
sb.append(prefix);
|
||||
sb.append("\tat ");
|
||||
sb.append(extStackTraceElement);
|
||||
sb.append(EOL);
|
||||
}
|
||||
|
||||
    /**
     * Formats the specified Throwable.
     *
     * @param sb
     *            StringBuilder to contain the formatted Throwable.
     * @param cause
     *            The Throwable to format.
     */
    public void formatWrapper(final StringBuilder sb, final ThrowableProxy cause) {
        this.formatWrapper(sb, cause, null);
    }

    /**
     * Formats the specified Throwable.
     *
     * @param sb
     *            StringBuilder to contain the formatted Throwable.
     * @param cause
     *            The Throwable to format.
     * @param packages
     *            The List of packages to be suppressed from the trace.
     */
    @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
    public void formatWrapper(final StringBuilder sb, final ThrowableProxy cause, final List<String> packages) {
        final Throwable caused = cause.getCauseProxy() != null ? cause.getCauseProxy().getThrowable() : null;
        if (caused != null) {
            this.formatWrapper(sb, cause.causeProxy);
            sb.append(WRAPPED_BY_LABEL);
        }
        sb.append(cause).append(EOL);
        this.formatElements(sb, "", cause.commonElementCount,
                cause.getThrowable().getStackTrace(), cause.extendedStackTrace, packages);
    }

    public ThrowableProxy getCauseProxy() {
        return this.causeProxy;
    }

    /**
     * Format the Throwable that is the cause of this Throwable.
     *
     * @return The formatted Throwable that caused this Throwable.
     */
    public String getCauseStackTraceAsString() {
        return this.getCauseStackTraceAsString(null);
    }

    /**
     * Format the Throwable that is the cause of this Throwable.
     *
     * @param packages
     *            The List of packages to be suppressed from the trace.
     * @return The formatted Throwable that caused this Throwable.
     */
    public String getCauseStackTraceAsString(final List<String> packages) {
        final StringBuilder sb = new StringBuilder();
        if (this.causeProxy != null) {
            this.formatWrapper(sb, this.causeProxy);
            sb.append(WRAPPED_BY_LABEL);
        }
        sb.append(this.toString());
        sb.append(EOL);
        this.formatElements(sb, "", 0, this.throwable.getStackTrace(), this.extendedStackTrace, packages);
        return sb.toString();
    }

    /**
     * Return the number of elements that are being omitted because they are common with the parent Throwable's stack
     * trace.
     *
     * @return The number of elements omitted from the stack trace.
     */
    public int getCommonElementCount() {
        return this.commonElementCount;
    }

    /**
     * Gets the stack trace including packaging information.
     *
     * @return The stack trace including packaging information.
     */
    public ExtendedStackTraceElement[] getExtendedStackTrace() {
        return this.extendedStackTrace;
    }

    /**
     * Format the stack trace including packaging information.
     *
     * @return The formatted stack trace including packaging information.
     */
    public String getExtendedStackTraceAsString() {
        return this.getExtendedStackTraceAsString(null);
    }

    /**
     * Format the stack trace including packaging information.
     *
     * @param ignorePackages
     *            List of packages to be ignored in the trace.
     * @return The formatted stack trace including packaging information.
     */
    public String getExtendedStackTraceAsString(final List<String> ignorePackages) {
        final StringBuilder sb = new StringBuilder(this.name);
        final String msg = this.message;
        if (msg != null) {
            sb.append(": ").append(msg);
        }
        sb.append(EOL);
        final StackTraceElement[] causedTrace = this.throwable != null ? this.throwable.getStackTrace() : null;
        this.formatElements(sb, "", 0, causedTrace, this.extendedStackTrace, ignorePackages);
        this.formatSuppressed(sb, "\t", this.suppressedProxies, ignorePackages);
        this.formatCause(sb, "", this.causeProxy, ignorePackages);
        return sb.toString();
    }

    public String getLocalizedMessage() {
        return this.localizedMessage;
    }

    public String getMessage() {
        return this.message;
    }

    /**
     * Return the FQCN of the Throwable.
     *
     * @return The FQCN of the Throwable.
     */
    public String getName() {
        return this.name;
    }

    public StackTraceElement[] getStackTrace() {
        return this.throwable == null ? null : this.throwable.getStackTrace();
    }

    /**
     * Gets proxies for suppressed exceptions.
     *
     * @return proxies for suppressed exceptions.
     */
    public ThrowableProxy[] getSuppressedProxies() {
        return this.suppressedProxies;
    }

    /**
     * Format the suppressed Throwables.
     *
     * @return The formatted suppressed Throwables.
     */
    public String getSuppressedStackTrace() {
        final ThrowableProxy[] suppressed = this.getSuppressedProxies();
        if (suppressed == null || suppressed.length == 0) {
            return Strings.EMPTY;
        }
        final StringBuilder sb = new StringBuilder("Suppressed Stack Trace Elements:").append(EOL);
        for (final ThrowableProxy proxy : suppressed) {
            sb.append(proxy.getExtendedStackTraceAsString());
        }
        return sb.toString();
    }

    /**
     * The throwable or null if this object is deserialized from XML or JSON.
     *
     * @return The throwable or null if this object is deserialized from XML or JSON.
     */
    public Throwable getThrowable() {
        return this.throwable;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + (this.causeProxy == null ? 0 : this.causeProxy.hashCode());
        result = prime * result + this.commonElementCount;
        result = prime * result + (this.extendedStackTrace == null ? 0 : Arrays.hashCode(this.extendedStackTrace));
        result = prime * result + (this.suppressedProxies == null ? 0 : Arrays.hashCode(this.suppressedProxies));
        result = prime * result + (this.name == null ? 0 : this.name.hashCode());
        return result;
    }

    private boolean ignoreElement(final StackTraceElement element, final List<String> ignorePackages) {
        final String className = element.getClassName();
        for (final String pkg : ignorePackages) {
            if (className.startsWith(pkg)) {
                return true;
            }
        }
        return false;
    }

    /**
     * Loads classes not located via Reflection.getCallerClass.
     *
     * @param lastLoader
     *            The ClassLoader that loaded the Class that called this Class.
     * @param className
     *            The name of the Class.
     * @return The Class object for the Class or null if it could not be located.
     */
    private Class<?> loadClass(final ClassLoader lastLoader, final String className) {
        // XXX: this is overly complicated
        Class<?> clazz;
        if (lastLoader != null) {
            try {
                clazz = Loader.initializeClass(className, lastLoader);
                if (clazz != null) {
                    return clazz;
                }
            } catch (final Throwable ignore) {
                // Ignore exception.
            }
        }
        try {
            clazz = Loader.loadClass(className);
        } catch (final ClassNotFoundException ignored) {
            return initializeClass(className);
        } catch (final NoClassDefFoundError ignored) {
            return initializeClass(className);
        } catch (final SecurityException ignored) {
            return initializeClass(className);
        }
        return clazz;
    }

    private Class<?> initializeClass(final String className) {
        try {
            return Loader.initializeClass(className, this.getClass().getClassLoader());
        } catch (final ClassNotFoundException ignore) {
            return null;
        } catch (final NoClassDefFoundError ignore) {
            return null;
        } catch (final SecurityException ignore) {
            return null;
        }
    }

    /**
     * Construct the CacheEntry from the Class's information.
     *
     * @param stackTraceElement
     *            The stack trace element
     * @param callerClass
     *            The Class.
     * @param exact
     *            True if the class was obtained via Reflection.getCallerClass.
     *
     * @return The CacheEntry.
     */
    private CacheEntry toCacheEntry(final StackTraceElement stackTraceElement, final Class<?> callerClass,
            final boolean exact) {
        String location = "?";
        String version = "?";
        ClassLoader lastLoader = null;
        if (callerClass != null) {
            try {
                final CodeSource source = callerClass.getProtectionDomain().getCodeSource();
                if (source != null) {
                    final URL locationURL = source.getLocation();
                    if (locationURL != null) {
                        final String str = locationURL.toString().replace('\\', '/');
                        int index = str.lastIndexOf("/");
                        if (index >= 0 && index == str.length() - 1) {
                            index = str.lastIndexOf("/", index - 1);
                            location = str.substring(index + 1);
                        } else {
                            location = str.substring(index + 1);
                        }
                    }
                }
            } catch (final Exception ex) {
                // Ignore the exception.
            }
            final Package pkg = callerClass.getPackage();
            if (pkg != null) {
                final String ver = pkg.getImplementationVersion();
                if (ver != null) {
                    version = ver;
                }
            }
            lastLoader = callerClass.getClassLoader();
        }
        return new CacheEntry(new ExtendedClassInfo(exact, location, version), lastLoader);
    }

    /**
     * Resolve all the stack entries in this stack trace that are not common with the parent.
     *
     * @param stack
     *            The callers Class stack.
     * @param map
     *            The cache of CacheEntry objects.
     * @param rootTrace
     *            The first stack trace resolve or null.
     * @param stackTrace
     *            The stack trace being resolved.
     * @return The StackTracePackageElement array.
     */
    ExtendedStackTraceElement[] toExtendedStackTrace(final Stack<Class<?>> stack, final Map<String, CacheEntry> map,
            final StackTraceElement[] rootTrace, final StackTraceElement[] stackTrace) {
        int stackLength;
        if (rootTrace != null) {
            int rootIndex = rootTrace.length - 1;
            int stackIndex = stackTrace.length - 1;
            while (rootIndex >= 0 && stackIndex >= 0 && rootTrace[rootIndex].equals(stackTrace[stackIndex])) {
                --rootIndex;
                --stackIndex;
            }
            this.commonElementCount = stackTrace.length - 1 - stackIndex;
            stackLength = stackIndex + 1;
        } else {
            this.commonElementCount = 0;
            stackLength = stackTrace.length;
        }
        final ExtendedStackTraceElement[] extStackTrace = new ExtendedStackTraceElement[stackLength];
        Class<?> clazz = stack.isEmpty() ? null : stack.peek();
        ClassLoader lastLoader = null;
        for (int i = stackLength - 1; i >= 0; --i) {
            final StackTraceElement stackTraceElement = stackTrace[i];
            final String className = stackTraceElement.getClassName();
            // The stack returned from getCurrentStack may be missing entries for java.lang.reflect.Method.invoke()
            // and its implementation. The Throwable might also contain stack entries that are no longer
            // present as those methods have returned.
            ExtendedClassInfo extClassInfo;
            if (clazz != null && className.equals(clazz.getName())) {
                final CacheEntry entry = this.toCacheEntry(stackTraceElement, clazz, true);
                extClassInfo = entry.element;
                lastLoader = entry.loader;
                stack.pop();
                clazz = stack.isEmpty() ? null : stack.peek();
            } else {
                final CacheEntry cacheEntry = map.get(className);
                if (cacheEntry != null) {
                    final CacheEntry entry = cacheEntry;
                    extClassInfo = entry.element;
                    if (entry.loader != null) {
                        lastLoader = entry.loader;
                    }
                } else {
                    final CacheEntry entry = this.toCacheEntry(stackTraceElement,
                            this.loadClass(lastLoader, className), false);
                    extClassInfo = entry.element;
                    map.put(stackTraceElement.toString(), entry);
                    if (entry.loader != null) {
                        lastLoader = entry.loader;
                    }
                }
            }
            extStackTrace[i] = new ExtendedStackTraceElement(stackTraceElement, extClassInfo);
        }
        return extStackTrace;
    }

    @Override
    public String toString() {
        final String msg = this.message;
        return msg != null ? this.name + ": " + msg : this.name;
    }

    private ThrowableProxy[] toSuppressedProxies(final Throwable thrown, Set<Throwable> suppressedVisited) {
        try {
            final Throwable[] suppressed = thrown.getSuppressed();
            if (suppressed == null) {
                return EMPTY_THROWABLE_PROXY_ARRAY;
            }
            final List<ThrowableProxy> proxies = new ArrayList<>(suppressed.length);
            if (suppressedVisited == null) {
                suppressedVisited = new HashSet<>(proxies.size());
            }
            for (int i = 0; i < suppressed.length; i++) {
                final Throwable candidate = suppressed[i];
                if (!suppressedVisited.contains(candidate)) {
                    suppressedVisited.add(candidate);
                    proxies.add(new ThrowableProxy(candidate, suppressedVisited));
                }
            }
            return proxies.toArray(new ThrowableProxy[proxies.size()]);
        } catch (final Exception e) {
            StatusLogger.getLogger().error(e);
        }
        return null;
    }
}
@@ -1,392 +0,0 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache license, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the license for the specific language governing permissions and
 * limitations under the license.
 */
package org.apache.logging.log4j.core.jmx;

import java.lang.management.ManagementFactory;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.AsyncAppender;
import org.apache.logging.log4j.core.async.AsyncLoggerConfig;
import org.apache.logging.log4j.core.async.AsyncLoggerContext;
import org.apache.logging.log4j.core.async.DaemonThreadFactory;
import org.apache.logging.log4j.core.config.LoggerConfig;
import org.apache.logging.log4j.core.impl.Log4jContextFactory;
import org.apache.logging.log4j.core.selector.ContextSelector;
import org.apache.logging.log4j.core.util.Constants;
import org.apache.logging.log4j.spi.LoggerContextFactory;
import org.apache.logging.log4j.status.StatusLogger;
import org.apache.logging.log4j.util.PropertiesUtil;
import org.elasticsearch.common.SuppressForbidden;

/**
 * Creates MBeans to instrument various classes in the log4j class hierarchy.
 * <p>
 * All instrumentation for Log4j 2 classes can be disabled by setting system property {@code -Dlog4j2.disable.jmx=true}.
 * </p>
 */
@SuppressForbidden(reason = "copied class to hack around Log4j bug")
public final class Server {

    /**
     * The domain part, or prefix ({@value}) of the {@code ObjectName} of all MBeans that instrument Log4J2 components.
     */
    public static final String DOMAIN = "org.apache.logging.log4j2";
    private static final String PROPERTY_DISABLE_JMX = "log4j2.disable.jmx";
    private static final String PROPERTY_ASYNC_NOTIF = "log4j2.jmx.notify.async";
    private static final String THREAD_NAME_PREFIX = "log4j2.jmx.notif";
    private static final StatusLogger LOGGER = StatusLogger.getLogger();
    static final Executor executor = isJmxDisabled() ? null : createExecutor();

    private Server() {
    }

    /**
     * Returns either a {@code null} Executor (causing JMX notifications to be sent from the caller thread) or a daemon
     * background thread Executor, depending on the value of system property "log4j2.jmx.notify.async". If this
     * property is not set, use a {@code null} Executor for web apps to avoid memory leaks and other issues when the
     * web app is restarted.
     * @see <a href="https://issues.apache.org/jira/browse/LOG4J2-938">LOG4J2-938</a>
     */
    private static ExecutorService createExecutor() {
        final boolean defaultAsync = !Constants.IS_WEB_APP;
        final boolean async = PropertiesUtil.getProperties().getBooleanProperty(PROPERTY_ASYNC_NOTIF, defaultAsync);
        return async ? Executors.newFixedThreadPool(1, new DaemonThreadFactory(THREAD_NAME_PREFIX)) : null;
    }

    /**
     * Either returns the specified name as is, or returns a quoted value containing the specified name with the special
     * characters (comma, equals, colon, quote, asterisk, or question mark) preceded with a backslash.
     *
     * @param name the name to escape so it can be used as a value in an {@link ObjectName}.
     * @return the escaped name
     */
    public static String escape(final String name) {
        final StringBuilder sb = new StringBuilder(name.length() * 2);
        boolean needsQuotes = false;
        for (int i = 0; i < name.length(); i++) {
            final char c = name.charAt(i);
            switch (c) {
            case '\\':
            case '*':
            case '?':
            case '\"':
                // quote, star, question & backslash must be escaped
                sb.append('\\');
                needsQuotes = true; // ... and can only appear in quoted value
                break;
            case ',':
            case '=':
            case ':':
                // no need to escape these, but value must be quoted
                needsQuotes = true;
                break;
            case '\r':
                // drop \r characters: \\r gives "invalid escape sequence"
                continue;
            case '\n':
                // replace \n characters with \\n sequence
                sb.append("\\n");
                needsQuotes = true;
                continue;
            }
            sb.append(c);
        }
        if (needsQuotes) {
            sb.insert(0, '\"');
            sb.append('\"');
        }
        return sb.toString();
    }

    private static boolean isJmxDisabled() {
        return PropertiesUtil.getProperties().getBooleanProperty(PROPERTY_DISABLE_JMX);
    }

    public static void reregisterMBeansAfterReconfigure() {
        // avoid creating Platform MBean Server if JMX disabled
        if (isJmxDisabled()) {
            LOGGER.debug("JMX disabled for log4j2. Not registering MBeans.");
            return;
        }
        final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        reregisterMBeansAfterReconfigure(mbs);
    }

    public static void reregisterMBeansAfterReconfigure(final MBeanServer mbs) {
        if (isJmxDisabled()) {
            LOGGER.debug("JMX disabled for log4j2. Not registering MBeans.");
            return;
        }

        // now provide instrumentation for the newly configured
        // LoggerConfigs and Appenders
        try {
            final ContextSelector selector = getContextSelector();
            if (selector == null) {
                LOGGER.debug("Could not register MBeans: no ContextSelector found.");
                return;
            }
            LOGGER.trace("Reregistering MBeans after reconfigure. Selector={}", selector);
            final List<LoggerContext> contexts = selector.getLoggerContexts();
            int i = 0;
            for (final LoggerContext ctx : contexts) {
                LOGGER.trace("Reregistering context ({}/{}): '{}' {}", ++i, contexts.size(), ctx.getName(), ctx);
                // first unregister the context and all nested loggers,
                // appenders, statusLogger, contextSelector, ringbuffers...
                unregisterLoggerContext(ctx.getName(), mbs);

                final LoggerContextAdmin mbean = new LoggerContextAdmin(ctx, executor);
                register(mbs, mbean, mbean.getObjectName());

                if (ctx instanceof AsyncLoggerContext) {
                    final RingBufferAdmin rbmbean = ((AsyncLoggerContext) ctx).createRingBufferAdmin();
                    if (rbmbean.getBufferSize() > 0) {
                        // don't register if Disruptor not started (DefaultConfiguration: config not found)
                        register(mbs, rbmbean, rbmbean.getObjectName());
                    }
                }

                // register the status logger and the context selector
                // repeatedly
                // for each known context: if one context is unregistered,
                // these MBeans should still be available for the other
                // contexts.
                registerStatusLogger(ctx.getName(), mbs, executor);
                registerContextSelector(ctx.getName(), selector, mbs, executor);

                registerLoggerConfigs(ctx, mbs, executor);
                registerAppenders(ctx, mbs, executor);
            }
        } catch (final Exception ex) {
            LOGGER.error("Could not register mbeans", ex);
        }
    }

    /**
     * Unregister all log4j MBeans from the platform MBean server.
     */
    public static void unregisterMBeans() {
        if (isJmxDisabled()) {
            LOGGER.debug("JMX disabled for Log4j2. Not unregistering MBeans.");
            return;
        }
        final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        unregisterMBeans(mbs);
    }

    /**
     * Unregister all log4j MBeans from the specified MBean server.
     *
     * @param mbs the MBean server to unregister from.
     */
    public static void unregisterMBeans(final MBeanServer mbs) {
        unregisterStatusLogger("*", mbs);
        unregisterContextSelector("*", mbs);
        unregisterContexts(mbs);
        unregisterLoggerConfigs("*", mbs);
        unregisterAsyncLoggerRingBufferAdmins("*", mbs);
        unregisterAsyncLoggerConfigRingBufferAdmins("*", mbs);
        unregisterAppenders("*", mbs);
        unregisterAsyncAppenders("*", mbs);
    }

    /**
     * Returns the {@code ContextSelector} of the current {@code Log4jContextFactory}.
     *
     * @return the {@code ContextSelector} of the current {@code Log4jContextFactory}
     */
    private static ContextSelector getContextSelector() {
        final LoggerContextFactory factory = LogManager.getFactory();
        if (factory instanceof Log4jContextFactory) {
            final ContextSelector selector = ((Log4jContextFactory) factory).getSelector();
            return selector;
        }
        return null;
    }

    /**
     * Unregisters all MBeans associated with the specified logger context (including MBeans for {@code LoggerConfig}s
     * and {@code Appender}s from the platform MBean server.
     *
     * @param loggerContextName name of the logger context to unregister
     */
    public static void unregisterLoggerContext(final String loggerContextName) {
        if (isJmxDisabled()) {
            LOGGER.debug("JMX disabled for Log4j2. Not unregistering MBeans.");
            return;
        }
        final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        unregisterLoggerContext(loggerContextName, mbs);
    }

    /**
     * Unregisters all MBeans associated with the specified logger context (including MBeans for {@code LoggerConfig}s
     * and {@code Appender}s from the platform MBean server.
     *
     * @param contextName name of the logger context to unregister
     * @param mbs the MBean Server to unregister the instrumented objects from
     */
    public static void unregisterLoggerContext(final String contextName, final MBeanServer mbs) {
        final String pattern = LoggerContextAdminMBean.PATTERN;
        final String search = String.format(pattern, escape(contextName), "*");
        unregisterAllMatching(search, mbs); // unregister context mbean

        // now unregister all MBeans associated with this logger context
        unregisterStatusLogger(contextName, mbs);
        unregisterContextSelector(contextName, mbs);
        unregisterLoggerConfigs(contextName, mbs);
        unregisterAppenders(contextName, mbs);
        unregisterAsyncAppenders(contextName, mbs);
        unregisterAsyncLoggerRingBufferAdmins(contextName, mbs);
        unregisterAsyncLoggerConfigRingBufferAdmins(contextName, mbs);
    }

    private static void registerStatusLogger(final String contextName, final MBeanServer mbs, final Executor executor)
            throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {

        final StatusLoggerAdmin mbean = new StatusLoggerAdmin(contextName, executor);
        register(mbs, mbean, mbean.getObjectName());
    }

    private static void registerContextSelector(final String contextName, final ContextSelector selector,
            final MBeanServer mbs, final Executor executor) throws InstanceAlreadyExistsException,
            MBeanRegistrationException, NotCompliantMBeanException {

        final ContextSelectorAdmin mbean = new ContextSelectorAdmin(contextName, selector);
        register(mbs, mbean, mbean.getObjectName());
    }

    private static void unregisterStatusLogger(final String contextName, final MBeanServer mbs) {
        final String pattern = StatusLoggerAdminMBean.PATTERN;
        final String search = String.format(pattern, escape(contextName), "*");
        unregisterAllMatching(search, mbs);
    }

    private static void unregisterContextSelector(final String contextName, final MBeanServer mbs) {
        final String pattern = ContextSelectorAdminMBean.PATTERN;
        final String search = String.format(pattern, escape(contextName), "*");
        unregisterAllMatching(search, mbs);
    }

    private static void unregisterLoggerConfigs(final String contextName, final MBeanServer mbs) {
        final String pattern = LoggerConfigAdminMBean.PATTERN;
        final String search = String.format(pattern, escape(contextName), "*");
        unregisterAllMatching(search, mbs);
    }

    private static void unregisterContexts(final MBeanServer mbs) {
        final String pattern = LoggerContextAdminMBean.PATTERN;
        final String search = String.format(pattern, "*");
        unregisterAllMatching(search, mbs);
    }

    private static void unregisterAppenders(final String contextName, final MBeanServer mbs) {
        final String pattern = AppenderAdminMBean.PATTERN;
        final String search = String.format(pattern, escape(contextName), "*");
        unregisterAllMatching(search, mbs);
    }

    private static void unregisterAsyncAppenders(final String contextName, final MBeanServer mbs) {
        final String pattern = AsyncAppenderAdminMBean.PATTERN;
        final String search = String.format(pattern, escape(contextName), "*");
        unregisterAllMatching(search, mbs);
    }

    private static void unregisterAsyncLoggerRingBufferAdmins(final String contextName, final MBeanServer mbs) {
        final String pattern1 = RingBufferAdminMBean.PATTERN_ASYNC_LOGGER;
        final String search1 = String.format(pattern1, escape(contextName));
        unregisterAllMatching(search1, mbs);
    }

    private static void unregisterAsyncLoggerConfigRingBufferAdmins(final String contextName, final MBeanServer mbs) {
        final String pattern2 = RingBufferAdminMBean.PATTERN_ASYNC_LOGGER_CONFIG;
        final String search2 = String.format(pattern2, escape(contextName), "*");
        unregisterAllMatching(search2, mbs);
    }

    private static void unregisterAllMatching(final String search, final MBeanServer mbs) {
        try {
            final ObjectName pattern = new ObjectName(search);
            final Set<ObjectName> found = mbs.queryNames(pattern, null);
            if (found.isEmpty()) {
                LOGGER.trace("Unregistering but no MBeans found matching '{}'", search);
            } else {
                LOGGER.trace("Unregistering {} MBeans: {}", found.size(), found);
            }
            for (final ObjectName objectName : found) {
                mbs.unregisterMBean(objectName);
            }
        } catch (final Exception ex) {
            LOGGER.error("Could not unregister MBeans for " + search, ex);
        }
    }

    private static void registerLoggerConfigs(final LoggerContext ctx, final MBeanServer mbs, final Executor executor)
            throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {

        final Map<String, LoggerConfig> map = ctx.getConfiguration().getLoggers();
        for (final String name : map.keySet()) {
            final LoggerConfig cfg = map.get(name);
            final LoggerConfigAdmin mbean = new LoggerConfigAdmin(ctx, cfg);
            register(mbs, mbean, mbean.getObjectName());

            if (cfg instanceof AsyncLoggerConfig) {
                final AsyncLoggerConfig async = (AsyncLoggerConfig) cfg;
                final RingBufferAdmin rbmbean = async.createRingBufferAdmin(ctx.getName());
                register(mbs, rbmbean, rbmbean.getObjectName());
            }
        }
    }

    private static void registerAppenders(final LoggerContext ctx, final MBeanServer mbs, final Executor executor)
            throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {

        final Map<String, Appender> map = ctx.getConfiguration().getAppenders();
        for (final String name : map.keySet()) {
            final Appender appender = map.get(name);

            if (appender instanceof AsyncAppender) {
                final AsyncAppender async = ((AsyncAppender) appender);
                final AsyncAppenderAdmin mbean = new AsyncAppenderAdmin(ctx.getName(), async);
                register(mbs, mbean, mbean.getObjectName());
            } else {
                final AppenderAdmin mbean = new AppenderAdmin(ctx.getName(), appender);
                register(mbs, mbean, mbean.getObjectName());
            }
        }
    }

    private static void register(final MBeanServer mbs, final Object mbean, final ObjectName objectName)
            throws InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {
        LOGGER.debug("Registering MBean {}", objectName);
        mbs.registerMBean(mbean, objectName);
    }
}
Some files were not shown because too many files have changed in this diff.