diff --git a/.gitignore b/.gitignore index 82ed94bd318..d8798cd9694 100644 --- a/.gitignore +++ b/.gitignore @@ -15,9 +15,11 @@ docs/build.log /tmp/ backwards/ html_docs +.vagrant/ + ## eclipse ignores (use 'mvn eclipse:eclipse' to build eclipse projects) ## All files (.project, .classpath, .settings/*) should be generated through Maven which -## will correctly set the classpath based on the declared dependencies and write settings +## will correctly set the classpath based on the declared dependencies and write settings ## files to ensure common coding style across Eclipse and IDEA. .project .classpath diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 03f8ac46332..0a57d0fa678 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -6,7 +6,7 @@ Elasticsearch is an open source project and we love to receive contributions fro Bug reports ----------- -If you think you have found a bug in Elasticsearch, first make sure that you are testing against the [latest version of Elasticsearch](https://www.elastic.co/downloads/elasticsearch) - your issue may already have been fixed. If not, search our [issues list](https://github.com/elasticsearch/elasticsearch/issues) on GitHub in case a similar issue has already been opened. +If you think you have found a bug in Elasticsearch, first make sure that you are testing against the [latest version of Elasticsearch](https://www.elastic.co/downloads/elasticsearch) - your issue may already have been fixed. If not, search our [issues list](https://github.com/elastic/elasticsearch/issues) on GitHub in case a similar issue has already been opened. It is very helpful if you can prepare a reproduction of the bug. In other words, provide a small test case which we can run to confirm your bug. It makes it easier to find the problem and to fix it. 
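A reproduction of the kind described above might look like the following (the index name, document, and query are illustrative stand-ins, not taken from a real report, and assume a local node on the default port 9200):

```shell
# Hypothetical reproduction: index a document, then run the query that misbehaves.
curl -XPUT 'http://localhost:9200/test/doc/1' -d '{ "title": "test document" }'
curl -XGET 'http://localhost:9200/test/_search?q=title:test'
```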
Test cases should be provided as `curl` commands which we can copy and paste into a terminal to run it locally, for example: @@ -29,7 +29,7 @@ Feature requests ---------------- If you find yourself wishing for a feature that doesn't exist in Elasticsearch, you are probably not alone. There are bound to be others out there with similar needs. Many of the features that Elasticsearch has today have been added because our users saw the need. -Open an issue on our [issues list](https://github.com/elasticsearch/elasticsearch/issues) on GitHub which describes the feature you would like to see, why you need it, and how it should work. +Open an issue on our [issues list](https://github.com/elastic/elasticsearch/issues) on GitHub which describes the feature you would like to see, why you need it, and how it should work. Contributing code and documentation changes ------------------------------------------- @@ -38,7 +38,7 @@ If you have a bugfix or new feature that you would like to contribute to Elastic We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code. -The process for contributing to any of the [Elasticsearch repositories](https://github.com/elasticsearch/) is similar. Details for individual projects can be found below. +The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below. ### Fork and clone the repository @@ -58,7 +58,7 @@ Once your changes and tests are ready to submit for review: 2. Sign the Contributor License Agreement - Please make sure you have signed our [Contributor License Agreement](http://www.elasticsearch.org/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. 
We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once. + Please make sure you have signed our [Contributor License Agreement](https://www.elastic.co/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once. 3. Rebase your changes @@ -74,7 +74,7 @@ Then sit back and wait. There will probably be discussion about the pull request Contributing to the Elasticsearch codebase ------------------------------------------ -**Repository:** [https://github.com/elasticsearch/elasticsearch](https://github.com/elasticsearch/elasticsearch) +**Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch) Make sure you have [Maven](http://maven.apache.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE by running `mvn eclipse:eclipse` and then importing the project into their workspace: `File > Import > Existing project into workspace`. 
@@ -103,4 +103,4 @@ Before submitting your changes, run the test suite to make sure that nothing is mvn clean test -Dtests.slow=true ``` -Source: [Contributing to elasticsearch](http://www.elasticsearch.org/contributing-to-elasticsearch/) +Source: [Contributing to elasticsearch](https://www.elastic.co/contributing-to-elasticsearch/) diff --git a/TESTING.asciidoc b/TESTING.asciidoc index dc7ada692f5..5cd8d9e8c53 100644 --- a/TESTING.asciidoc +++ b/TESTING.asciidoc @@ -81,7 +81,7 @@ You can also filter tests by certain annotations ie: Those annotation names can be combined into a filter expression like: ------------------------------------------------ -mvn test -Dtests.filter="@nightly and not @backwards" +mvn test -Dtests.filter="@nightly and not @backwards" ------------------------------------------------ to run all nightly test but not the ones that are backwards tests. `tests.filter` supports @@ -89,7 +89,7 @@ the boolean operators `and, or, not` and grouping ie: --------------------------------------------------------------- -mvn test -Dtests.filter="@nightly and not(@badapple or @backwards)" +mvn test -Dtests.filter="@nightly and not(@badapple or @backwards)" --------------------------------------------------------------- === Seed and repetitions. @@ -102,7 +102,7 @@ mvn test -Dtests.seed=DEADBEEF === Repeats _all_ tests of ClassName N times. -Every test repetition will have a different method seed +Every test repetition will have a different method seed (derived from a single random master seed). -------------------------------------------------- @@ -149,7 +149,7 @@ mvn test -Dtests.awaitsfix=[false] - known issue (@AwaitsFix) === Load balancing and caches. -By default, the tests run sequentially on a single forked JVM. +By default, the tests run sequentially on a single forked JVM. 
To run with more forked JVMs than the default use: @@ -158,7 +158,7 @@ mvn test -Dtests.jvms=8 test ---------------------------- Don't count hypercores for CPU-intense tests and leave some slack -for JVM-internal threads (like the garbage collector). Make sure there is +for JVM-internal threads (like the garbage collector). Make sure there is enough RAM to handle child JVMs. === Test compatibility. @@ -208,7 +208,7 @@ mvn test -Dtests.output=always Configure the heap size. ------------------------------ -mvn test -Dtests.heap.size=512m +mvn test -Dtests.heap.size=512m ------------------------------ Pass arbitrary jvm arguments. @@ -231,7 +231,7 @@ mvn test -Dtests.filter="@backwards" -Dtests.bwc.version=x.y.z -Dtests.bwc.path= Note that backwards tests must be run with security manager disabled. If the elasticsearch release is placed under `./backwards/elasticsearch-x.y.z` the path can be omitted: - + --------------------------------------------------------------------------- mvn test -Dtests.filter="@backwards" -Dtests.bwc.version=x.y.z -Dtests.security.manager=false --------------------------------------------------------------------------- @@ -242,7 +242,7 @@ already in your elasticsearch clone): --------------------------------------------------------------------------- $ mkdir backwards && cd backwards $ curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.2.1.tar.gz -$ tar -xzf elasticsearch-1.2.1.tar.gz +$ tar -xzf elasticsearch-1.2.1.tar.gz --------------------------------------------------------------------------- == Running integration tests @@ -314,25 +314,180 @@ mvn test -Pdev == Testing scripts -Shell scripts can be tested with the Bash Automate Testing System tool available -at https://github.com/sstephenson/bats. Once the tool is installed, you can -execute a .bats test file with the following command: +The simplest way to test scripts and the packaged distributions is to use +Vagrant. 
You can get started by following these five easy steps: ---------------------------------------------------------------------------- -bats test_file.bats ---------------------------------------------------------------------------- +. Install VirtualBox and Vagrant. -When executing the test files located in the `/packaging/scripts` folder, -it's possible to add the flag `ES_CLEAN_BEFORE_TEST=true` to clean the test -environment before the tests are executed: +. (Optional) Install vagrant-cachier to squeeze a bit more performance out of +the process: +-------------------------------------- +vagrant plugin install vagrant-cachier +-------------------------------------- ---------------------------------------------------------------------------- -ES_CLEAN_BEFORE_TEST=true bats 30_deb_package.bats ---------------------------------------------------------------------------- +. Validate your installed dependencies: +------------------------------------- +mvn -Pvagrant -pl qa/vagrant validate +------------------------------------- -The current mode of execution is to copy all the packages that should be tested -into one directory, then copy the bats files into the same directory and run -those. +. Download the VMs. Since Maven or ant or something eats the progress reports +from Vagrant when you run it inside mvn it's probably best if you run this one +time to set up all the VMs one at a time. 
Run this to download and set up the VMs +we use for testing by default: +-------------------------------------------------------- +vagrant up --provision trusty && vagrant halt trusty +vagrant up --provision centos-7 && vagrant halt centos-7 +-------------------------------------------------------- +or run this to download and set up all the VMs: +------------------------------------------------------------------------------- +vagrant halt +for box in $(vagrant status | grep 'poweroff\|not created' | cut -f1 -d' '); do + vagrant up --provision $box + vagrant halt $box +done +------------------------------------------------------------------------------- + +. Smoke test that the maven/ant dance we use to get vagrant involved in +integration testing is working: +--------------------------------------------- +mvn -Pvagrant,smoke-vms -pl qa/vagrant verify +--------------------------------------------- +or this to validate all the VMs: +------------------------------------------------- +mvn -Pvagrant,smoke-vms,all -pl qa/vagrant verify +------------------------------------------------- +That will start up the VMs and then immediately quit. + +. Finally run the tests. The fastest way to get this started is to run: +----------------------------------- +mvn clean install -DskipTests +mvn -Pvagrant -pl qa/vagrant verify +----------------------------------- +You could just run: +-------------------- +mvn -Pvagrant verify +-------------------- +but that will run all the tests, which is probably a good thing, but not always +what you want. + +Whichever snippet you run, mvn will build the tar, zip and deb packages. If you +have rpmbuild installed it'll build the rpm package as well. Then mvn will +spin up trusty and verify the tar, zip, and deb package. If you have rpmbuild +installed it'll spin up centos-7 and verify the tar, zip and rpm packages. We +chose those two distributions as the default because they cover deb and rpm +packaging and SysVinit and systemd. 
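The `for box in $(vagrant status | ...)` loop above selects boxes by grepping the `vagrant status` output. You can preview what it would pick by feeding the same pipeline some canned output (the status lines below are made up for illustration):

```shell
# Canned `vagrant status`-style output; only the grep/cut pipeline is real.
status='trusty                    poweroff (virtualbox)
centos-7                  running (virtualbox)
fedora-22                 not created (virtualbox)'
# Same filter as the loop: keep stopped or uncreated boxes, take the first column.
echo "$status" | grep 'poweroff\|not created' | cut -f1 -d' '
# prints trusty and fedora-22, the boxes the loop would provision
```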
+ +You can control the boxes that are used for testing like so. Run just +fedora-22 with: +-------------------------------------------- +mvn -Pvagrant -pl qa/vagrant verify -DboxesToTest=fedora-22 +-------------------------------------------- +or run wheezy and trusty: +------------------------------------------------------------------ +mvn -Pvagrant -pl qa/vagrant verify -DboxesToTest='wheezy, trusty' +------------------------------------------------------------------ +or run all the boxes: +--------------------------------------- +mvn -Pvagrant,all -pl qa/vagrant verify +--------------------------------------- + +It's important to know that if you ctrl-c any of these `mvn` runs you'll +probably leave a VM up. You can terminate it by running: +------------ +vagrant halt +------------ + +This is just regular vagrant so you can run normal multi-box vagrant commands +to test things manually. Just run: +--------------------------------------- +vagrant up trusty && vagrant ssh trusty +--------------------------------------- +to get an Ubuntu or +------------------------------------------- +vagrant up centos-7 && vagrant ssh centos-7 +------------------------------------------- +to get a CentOS. 
Once you are done with them you should halt them: +------------------- +vagrant halt trusty +------------------- + +These are the linux flavors the Vagrantfile currently supports: +* precise aka Ubuntu 12.04 +* trusty aka Ubuntu 14.04 +* vivid aka Ubuntu 15.04 +* wheezy aka Debian 7, the current debian oldstable distribution +* jessie aka Debian 8, the current debian stable distribution +* centos-6 +* centos-7 +* fedora-22 +* oel-7 aka Oracle Enterprise Linux 7 + +We're missing the following from the support matrix because there aren't high +quality boxes available in vagrant atlas: +* sles-11 +* sles-12 +* opensuse-13 +* oel-6 + +We're missing the following because our tests are very linux/bash centric: +* Windows Server 2012 + +It's important to think of VMs like cattle: if they become lame you just shoot +them and let vagrant reprovision them. Say you've hosed your precise VM: +---------------------------------------------------- +vagrant ssh precise -c 'sudo rm -rf /bin'; echo oops +---------------------------------------------------- +All you've got to do to get another one is +------------------------------------------------ +vagrant destroy -f precise && vagrant up precise +------------------------------------------------ +The whole process takes a minute and a half on a modern laptop, two and a half +without vagrant-cachier. + +It's possible that some downloads will fail and it'll be impossible to restart +them. This is a bug in vagrant. See the instructions here for how to work +around it: +https://github.com/mitchellh/vagrant/issues/4479 + +Some vagrant commands will work on all VMs at once: +------------------ +vagrant halt +vagrant destroy -f +------------------ + +---------- +vagrant up +---------- +would normally start all the VMs but we've prevented that because that'd +consume a ton of RAM. + +== Testing scripts more directly + +In general it's best to stick to testing in vagrant because the bats scripts are +destructive. 
When working with a single package it's generally faster to run its +tests in a tighter loop than maven provides. In one window: +-------------------------------- +mvn -pl distribution/rpm package +-------------------------------- +and in another window: +---------------------------------------------------- +vagrant up centos-7 && vagrant ssh centos-7 +cd $RPM +sudo ES_CLEAN_BEFORE_TEST=true bats $BATS/*rpm*.bats +---------------------------------------------------- + +At this point `ES_CLEAN_BEFORE_TEST=true` is required or tests fail spuriously. + +If you wanted to retest all the release artifacts on a single VM you could: +------------------------------------------------- +# Build all the distributions fresh but skip recompiling elasticsearch: +mvn -amd -pl distribution install -DskipTests +# Copy them all to the testroot +mvn -Pvagrant -pl qa/vagrant pre-integration-test +vagrant up trusty && vagrant ssh trusty +cd $TESTROOT +sudo ES_CLEAN_BEFORE_TEST=true bats $BATS/*.bats +------------------------------------------------- == Coverage analysis @@ -342,3 +497,8 @@ To run tests instrumented with jacoco and produce a coverage report in --------------------------------------------------------------------------- mvn -Dtests.coverage test jacoco:report --------------------------------------------------------------------------- + +== Debugging from an IDE + +If you want to run elasticsearch from your IDE, you should execute `./run.sh`. +It opens a remote debugging port that you can connect to with your IDE. diff --git a/Vagrantfile b/Vagrantfile new file mode 100644 index 00000000000..c6698725c2a --- /dev/null +++ b/Vagrantfile @@ -0,0 +1,156 @@ +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# This Vagrantfile exists to test packaging. Read more about its use in the +# vagrant section in TESTING.asciidoc. + +# Licensed to Elasticsearch under one or more contributor +# license agreements. 
See the NOTICE file distributed with +# this work for additional information regarding copyright +# ownership. Elasticsearch licenses this file to you under +# the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +Vagrant.configure(2) do |config| + config.vm.define "precise", autostart: false do |config| + config.vm.box = "ubuntu/precise64" + ubuntu_common config + end + config.vm.define "trusty", autostart: false do |config| + config.vm.box = "ubuntu/trusty64" + ubuntu_common config + end + config.vm.define "vivid", autostart: false do |config| + config.vm.box = "ubuntu/vivid64" + ubuntu_common config + end + config.vm.define "wheezy", autostart: false do |config| + config.vm.box = "debian/wheezy64" + deb_common(config) + end + config.vm.define "jessie", autostart: false do |config| + config.vm.box = "debian/jessie64" + deb_common(config) + end + config.vm.define "centos-6", autostart: false do |config| + # TODO switch from chef to boxcutter to provide? + config.vm.box = "chef/centos-6.6" + rpm_common(config) + end + config.vm.define "centos-7", autostart: false do |config| + # There is a centos/7 box but it doesn't have rsync or virtualbox guest + # stuff on there so its slow to use. So chef it is.... + # TODO switch from chef to boxcutter to provide? + config.vm.box = "chef/centos-7.0" + rpm_common(config) + end + # This box hangs _forever_ on ```yum check-update```. I have no idea why. 
+ # config.vm.define "oel-6", autostart: false do |config| + # config.vm.box = "boxcutter/oel66" + # rpm_common(config) + # end + config.vm.define "oel-7", autostart: false do |config| + config.vm.box = "boxcutter/oel70" + rpm_common(config) + end + config.vm.define "fedora-22", autostart: false do |config| + # Fedora hosts their own 'cloud' images that aren't in Vagrant's Atlas + # and are missing required stuff like rsync. It'd be nice if we could use + # them but they're much slower to get up and running than the boxcutter image. + config.vm.box = "boxcutter/fedora22" + dnf_common(config) + end + # Switch the default share for the project root from /vagrant to + # /elasticsearch because /vagrant is confusing when there is a project inside + # the elasticsearch project called vagrant.... + config.vm.synced_folder ".", "/vagrant", disabled: true + config.vm.synced_folder ".", "/elasticsearch" +end + +def ubuntu_common(config) + # Ubuntu is noisy + # http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html + config.vm.provision "fix-no-tty", type: "shell" do |s| + s.privileged = false + s.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile" + end + deb_common(config) +end + +def deb_common(config) + provision(config, "apt-get update", "/var/cache/apt/archives/last_update", + "apt-get install -y", "openjdk-7-jdk") + if Vagrant.has_plugin?("vagrant-cachier") + config.cache.scope = :box + end +end + +def rpm_common(config) + provision(config, "yum check-update", "/var/cache/yum/last_update", + "yum install -y", "java-1.7.0-openjdk-devel") + if Vagrant.has_plugin?("vagrant-cachier") + config.cache.scope = :box + end +end + +def dnf_common(config) + provision(config, "dnf check-update", "/var/cache/dnf/last_update", + "dnf install -y", "java-1.8.0-openjdk-devel") + if Vagrant.has_plugin?("vagrant-cachier") + config.cache.scope = :box + # Autodetect doesn't work.... 
+ config.cache.auto_detect = false + config.cache.enable :generic, { :cache_dir => "/var/cache/dnf" } + end +end + + +def provision(config, update_command, update_tracking_file, install_command, java_package) + config.vm.provision "elasticsearch bats dependencies", type: "shell", inline: <<-SHELL + set -e + installed() { + command -v $1 2>&1 >/dev/null + } + install() { + # Only apt-get update if we haven't in the last day + if [ ! -f #{update_tracking_file} ] || [ "x$(find #{update_tracking_file} -mtime +0)" == "x#{update_tracking_file}" ]; then + sudo #{update_command} || true + touch #{update_tracking_file} + fi + sudo #{install_command} $1 + } + ensure() { + installed $1 || install $1 + } + installed java || install #{java_package} + ensure curl + ensure unzip + + installed bats || { + # Bats lives in a git repository.... + ensure git + git clone https://github.com/sstephenson/bats /tmp/bats + # Centos doesn't add /usr/local/bin to the path.... + sudo /tmp/bats/install.sh /usr + sudo rm -rf /tmp/bats + } + cat \<\<VARS > /etc/profile.d/elasticsearch_vars.sh +export ZIP=/elasticsearch/distribution/zip/target/releases +export TAR=/elasticsearch/distribution/tar/target/releases +export RPM=/elasticsearch/distribution/rpm/target/releases +export DEB=/elasticsearch/distribution/deb/target/releases +export TESTROOT=/elasticsearch/qa/vagrant/target/testroot +export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts/ +VARS + SHELL +end diff --git a/core/pom.xml b/core/pom.xml index d4fb7f95342..449ce50350f 100644 --- a/core/pom.xml +++ b/core/pom.xml @@ -320,6 +320,17 @@ + +<plugin> + <groupId>org.apache.maven.plugins</groupId> + <artifactId>maven-antrun-plugin</artifactId> + <executions> + <execution> + <id>check-license</id> + <phase>none</phase> + </execution> + </executions> +</plugin> diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java index ea8c3bc106f..c437a4455b6 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java +++ 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodeStats.java @@ -33,6 +33,7 @@ import org.elasticsearch.monitor.fs.FsInfo; import org.elasticsearch.monitor.jvm.JvmStats; import org.elasticsearch.monitor.os.OsStats; import org.elasticsearch.monitor.process.ProcessStats; +import org.elasticsearch.script.ScriptStats; import org.elasticsearch.threadpool.ThreadPoolStats; import org.elasticsearch.transport.TransportStats; @@ -73,13 +74,17 @@ public class NodeStats extends BaseNodeResponse implements ToXContent { @Nullable private AllCircuitBreakerStats breaker; + @Nullable + private ScriptStats scriptStats; + NodeStats() { } public NodeStats(DiscoveryNode node, long timestamp, @Nullable NodeIndicesStats indices, @Nullable OsStats os, @Nullable ProcessStats process, @Nullable JvmStats jvm, @Nullable ThreadPoolStats threadPool, @Nullable FsInfo fs, @Nullable TransportStats transport, @Nullable HttpStats http, - @Nullable AllCircuitBreakerStats breaker) { + @Nullable AllCircuitBreakerStats breaker, + @Nullable ScriptStats scriptStats) { super(node); this.timestamp = timestamp; this.indices = indices; @@ -91,6 +96,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent { this.transport = transport; this.http = http; this.breaker = breaker; + this.scriptStats = scriptStats; } public long getTimestamp() { @@ -165,6 +171,11 @@ public class NodeStats extends BaseNodeResponse implements ToXContent { return this.breaker; } + @Nullable + public ScriptStats getScriptStats() { + return this.scriptStats; + } + public static NodeStats readNodeStats(StreamInput in) throws IOException { NodeStats nodeInfo = new NodeStats(); nodeInfo.readFrom(in); @@ -200,6 +211,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent { http = HttpStats.readHttpStats(in); } breaker = AllCircuitBreakerStats.readOptionalAllCircuitBreakerStats(in); + scriptStats = in.readOptionalStreamable(new ScriptStats()); } @@ -256,6 +268,7 @@ public class 
NodeStats extends BaseNodeResponse implements ToXContent { http.writeTo(out); } out.writeOptionalStreamable(breaker); + out.writeOptionalStreamable(scriptStats); } @Override @@ -303,6 +316,9 @@ public class NodeStats extends BaseNodeResponse implements ToXContent { if (getBreaker() != null) { getBreaker().toXContent(builder, params); } + if (getScriptStats() != null) { + getScriptStats().toXContent(builder, params); + } return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java index 4fa654e7259..bd4f08fc6cd 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java @@ -41,6 +41,7 @@ public class NodesStatsRequest extends BaseNodesRequest { private boolean transport; private boolean http; private boolean breaker; + private boolean script; protected NodesStatsRequest() { } @@ -67,6 +68,7 @@ public class NodesStatsRequest extends BaseNodesRequest { this.transport = true; this.http = true; this.breaker = true; + this.script = true; return this; } @@ -84,6 +86,7 @@ public class NodesStatsRequest extends BaseNodesRequest { this.transport = false; this.http = false; this.breaker = false; + this.script = false; return this; } @@ -240,6 +243,15 @@ public class NodesStatsRequest extends BaseNodesRequest { return this; } + public boolean script() { + return script; + } + + public NodesStatsRequest script(boolean script) { + this.script = script; + return this; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -253,6 +265,7 @@ public class NodesStatsRequest extends BaseNodesRequest { transport = in.readBoolean(); http = in.readBoolean(); breaker = in.readBoolean(); + script = in.readBoolean(); } @Override @@ -268,6 +281,7 @@ public 
class NodesStatsRequest extends BaseNodesRequest { out.writeBoolean(transport); out.writeBoolean(http); out.writeBoolean(breaker); + out.writeBoolean(script); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java index 992505ef16a..db8d774b39d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java @@ -62,6 +62,11 @@ public class NodesStatsRequestBuilder extends NodesOperationRequestBuilder shardsStats = new ArrayList<>(); for (IndexService indexService : indicesService) { for (IndexShard indexShard : indexService) { diff --git a/core/src/main/java/org/elasticsearch/node/service/NodeService.java b/core/src/main/java/org/elasticsearch/node/service/NodeService.java index e50fd3e7ee2..5be6bf11e56 100644 --- a/core/src/main/java/org/elasticsearch/node/service/NodeService.java +++ b/core/src/main/java/org/elasticsearch/node/service/NodeService.java @@ -36,6 +36,7 @@ import org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.monitor.MonitorService; import org.elasticsearch.plugins.PluginsService; +import org.elasticsearch.script.ScriptService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -51,6 +52,8 @@ public class NodeService extends AbstractComponent { private final IndicesService indicesService; private final PluginsService pluginService; private final CircuitBreakerService circuitBreakerService; + private ScriptService scriptService; + @Nullable private HttpServer httpServer; @@ -63,7 +66,8 @@ public class NodeService extends AbstractComponent { @Inject public NodeService(Settings settings, ThreadPool threadPool, 
MonitorService monitorService, Discovery discovery, TransportService transportService, IndicesService indicesService, - PluginsService pluginService, CircuitBreakerService circuitBreakerService, Version version) { + PluginsService pluginService, CircuitBreakerService circuitBreakerService, + Version version) { super(settings); this.threadPool = threadPool; this.monitorService = monitorService; @@ -76,6 +80,12 @@ public class NodeService extends AbstractComponent { this.circuitBreakerService = circuitBreakerService; } + // can not use constructor injection or there will be a circular dependency + @Inject(optional = true) + public void setScriptService(ScriptService scriptService) { + this.scriptService = scriptService; + } + public void setHttpServer(@Nullable HttpServer httpServer) { this.httpServer = httpServer; } @@ -134,12 +144,14 @@ public class NodeService extends AbstractComponent { monitorService.fsService().stats(), transportService.stats(), httpServer == null ? null : httpServer.stats(), - circuitBreakerService.stats() + circuitBreakerService.stats(), + scriptService.stats() ); } public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool, boolean network, - boolean fs, boolean transport, boolean http, boolean circuitBreaker) { + boolean fs, boolean transport, boolean http, boolean circuitBreaker, + boolean script) { // for indices stats we want to include previous allocated shards stats as well (it will // only be applied to the sensible ones to use, like refresh/merge/flush/indexing stats) return new NodeStats(discovery.localNode(), System.currentTimeMillis(), @@ -151,7 +163,8 @@ public class NodeService extends AbstractComponent { fs ? monitorService.fsService().stats() : null, transport ? transportService.stats() : null, http ? (httpServer == null ? null : httpServer.stats()) : null, - circuitBreaker ? circuitBreakerService.stats() : null + circuitBreaker ? 
circuitBreakerService.stats() : null, + script ? scriptService.stats() : null ); } } diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java index d5bb383c33a..3c231de534e 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java @@ -76,6 +76,7 @@ public class RestNodesStatsAction extends BaseRestHandler { nodesStatsRequest.indices(metrics.contains("indices")); nodesStatsRequest.process(metrics.contains("process")); nodesStatsRequest.breaker(metrics.contains("breaker")); + nodesStatsRequest.script(metrics.contains("script")); // check for index specific metrics if (metrics.contains("indices")) { diff --git a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java index 91e0235f54b..fe447b2d9d8 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java @@ -57,6 +57,7 @@ import org.elasticsearch.rest.*; import org.elasticsearch.rest.action.support.RestActionListener; import org.elasticsearch.rest.action.support.RestResponseListener; import org.elasticsearch.rest.action.support.RestTable; +import org.elasticsearch.script.ScriptStats; import org.elasticsearch.search.suggest.completion.CompletionStats; import java.util.Locale; @@ -92,7 +93,7 @@ public class RestNodesAction extends AbstractCatAction { @Override public void processResponse(final NodesInfoResponse nodesInfoResponse) { NodesStatsRequest nodesStatsRequest = new NodesStatsRequest(); - nodesStatsRequest.clear().jvm(true).os(true).fs(true).indices(true).process(true); + 
nodesStatsRequest.clear().jvm(true).os(true).fs(true).indices(true).process(true).script(true); client.admin().cluster().nodesStats(nodesStatsRequest, new RestResponseListener(channel) { @Override public RestResponse buildResponse(NodesStatsResponse nodesStatsResponse) throws Exception { @@ -183,6 +184,9 @@ public class RestNodesAction extends AbstractCatAction { table.addCell("refresh.total", "alias:rto,refreshTotal;default:false;text-align:right;desc:total refreshes"); table.addCell("refresh.time", "alias:rti,refreshTime;default:false;text-align:right;desc:time spent in refreshes"); + table.addCell("script.compilations", "alias:scrcc,scriptCompilations;default:false;text-align:right;desc:script compilations"); + table.addCell("script.cache_evictions", "alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions"); + table.addCell("search.fetch_current", "alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops"); table.addCell("search.fetch_time", "alias:sfti,searchFetchTime;default:false;text-align:right;desc:time spent in fetch phase"); table.addCell("search.fetch_total", "alias:sfto,searchFetchTotal;default:false;text-align:right;desc:total fetch ops"); @@ -317,6 +321,10 @@ public class RestNodesAction extends AbstractCatAction { table.addCell(refreshStats == null ? null : refreshStats.getTotal()); table.addCell(refreshStats == null ? null : refreshStats.getTotalTime()); + ScriptStats scriptStats = stats == null ? null : stats.getScriptStats(); + table.addCell(scriptStats == null ? null : scriptStats.getCompilations()); + table.addCell(scriptStats == null ? null : scriptStats.getCacheEvictions()); + SearchStats searchStats = indicesStats == null ? null : indicesStats.getSearch(); table.addCell(searchStats == null ? null : searchStats.getTotal().getFetchCurrent()); table.addCell(searchStats == null ? 
null : searchStats.getTotal().getFetchTime()); diff --git a/core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchF.java b/core/src/main/java/org/elasticsearch/script/ScriptMetrics.java similarity index 60% rename from core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchF.java rename to core/src/main/java/org/elasticsearch/script/ScriptMetrics.java index 9190b6eca6f..c779c196290 100644 --- a/core/src/main/java/org/elasticsearch/bootstrap/ElasticsearchF.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptMetrics.java @@ -17,16 +17,23 @@ * under the License. */ -package org.elasticsearch.bootstrap; +package org.elasticsearch.script; -/** - * Same as {@link Elasticsearch} just runs it in the foreground by default (does not close - * sout and serr). - */ -public class ElasticsearchF { +import org.elasticsearch.common.metrics.CounterMetric; - public static void main(String[] args) throws Throwable { - System.setProperty("es.foreground", "yes"); - Bootstrap.main(args); +public class ScriptMetrics { + final CounterMetric compilationsMetric = new CounterMetric(); + final CounterMetric cacheEvictionsMetric = new CounterMetric(); + + public ScriptStats stats() { + return new ScriptStats(compilationsMetric.count(), cacheEvictionsMetric.count()); + } + + public void onCompilation() { + compilationsMetric.inc(); + } + + public void onCacheEviction() { + cacheEvictionsMetric.inc(); } } diff --git a/core/src/main/java/org/elasticsearch/script/ScriptService.java b/core/src/main/java/org/elasticsearch/script/ScriptService.java index f6d8132d6c6..c63cf47f4ae 100644 --- a/core/src/main/java/org/elasticsearch/script/ScriptService.java +++ b/core/src/main/java/org/elasticsearch/script/ScriptService.java @@ -84,6 +84,7 @@ public class ScriptService extends AbstractComponent implements Closeable { public static final String DEFAULT_SCRIPTING_LANGUAGE_SETTING = "script.default_lang"; public static final String SCRIPT_CACHE_SIZE_SETTING = "script.cache.max_size"; + 
public static final int SCRIPT_CACHE_SIZE_DEFAULT = 100; public static final String SCRIPT_CACHE_EXPIRE_SETTING = "script.cache.expire"; public static final String SCRIPT_INDEX = ".scripts"; public static final String DEFAULT_LANG = GroovyScriptEngineService.NAME; @@ -107,6 +108,8 @@ public class ScriptService extends AbstractComponent implements Closeable { private Client client = null; + private final ScriptMetrics scriptMetrics = new ScriptMetrics(); + /** * @deprecated Use {@link org.elasticsearch.script.Script.ScriptField} instead. This should be removed in * 2.0 @@ -140,7 +143,7 @@ public class ScriptService extends AbstractComponent implements Closeable { this.scriptEngines = scriptEngines; this.scriptContextRegistry = scriptContextRegistry; - int cacheMaxSize = settings.getAsInt(SCRIPT_CACHE_SIZE_SETTING, 100); + int cacheMaxSize = settings.getAsInt(SCRIPT_CACHE_SIZE_SETTING, SCRIPT_CACHE_SIZE_DEFAULT); TimeValue cacheExpire = settings.getAsTime(SCRIPT_CACHE_EXPIRE_SETTING, null); logger.debug("using script cache with max_size [{}], expire [{}]", cacheMaxSize, cacheExpire); @@ -306,6 +309,7 @@ public class ScriptService extends AbstractComponent implements Closeable { //Since the cache key is the script content itself we don't need to //invalidate/check the cache if an indexed script changes. 
+ scriptMetrics.onCompilation(); cache.put(cacheKey, compiledScript); } @@ -474,6 +478,10 @@ public class ScriptService extends AbstractComponent implements Closeable { } } + public ScriptStats stats() { + return scriptMetrics.stats(); + } + /** * A small listener for the script cache that calls each * {@code ScriptEngineService}'s {@code scriptRemoved} method when the @@ -486,6 +494,7 @@ public class ScriptService extends AbstractComponent implements Closeable { if (logger.isDebugEnabled()) { logger.debug("notifying script services of script removal due to: [{}]", notification.getCause()); } + scriptMetrics.onCacheEviction(); for (ScriptEngineService service : scriptEngines) { try { service.scriptRemoved(notification.getValue()); @@ -532,6 +541,7 @@ public class ScriptService extends AbstractComponent implements Closeable { String script = Streams.copyToString(reader); String cacheKey = getCacheKey(engineService, scriptNameExt.v1(), null); staticCache.put(cacheKey, new CompiledScript(ScriptType.FILE, scriptNameExt.v1(), engineService.types()[0], engineService.compile(script))); + scriptMetrics.onCompilation(); } } else { logger.warn("skipping compile of script file [{}] as all scripted operations are disabled for file scripts", file.toAbsolutePath()); diff --git a/core/src/main/java/org/elasticsearch/script/ScriptStats.java b/core/src/main/java/org/elasticsearch/script/ScriptStats.java new file mode 100644 index 00000000000..0bad4b27de0 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/script/ScriptStats.java @@ -0,0 +1,82 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.script; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Streamable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentBuilderString; + +import java.io.IOException; + +public class ScriptStats implements Streamable, ToXContent { + private long compilations; + private long cacheEvictions; + + public ScriptStats() { + } + + public ScriptStats(long compilations, long cacheEvictions) { + this.compilations = compilations; + this.cacheEvictions = cacheEvictions; + } + + public void add(ScriptStats stats) { + this.compilations += stats.compilations; + this.cacheEvictions += stats.cacheEvictions; + } + + public long getCompilations() { + return compilations; + } + + public long getCacheEvictions() { + return cacheEvictions; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + compilations = in.readVLong(); + cacheEvictions = in.readVLong(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVLong(compilations); + out.writeVLong(cacheEvictions); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(Fields.SCRIPT_STATS); + builder.field(Fields.COMPILATIONS, getCompilations()); + builder.field(Fields.CACHE_EVICTIONS, getCacheEvictions()); + 
builder.endObject(); + return builder; + } + + static final class Fields { + static final XContentBuilderString SCRIPT_STATS = new XContentBuilderString("script"); + static final XContentBuilderString COMPILATIONS = new XContentBuilderString("compilations"); + static final XContentBuilderString CACHE_EVICTIONS = new XContentBuilderString("cache_evictions"); + } +} diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java index 863031a3499..a24e980f236 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java @@ -179,7 +179,8 @@ public class MockDiskUsagesIT extends ESIntegTestCase { System.currentTimeMillis(), null, null, null, null, null, fsInfo, - null, null, null); + null, null, null, + null); } /** diff --git a/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java b/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java index 96f2a45b2b9..658a2100886 100644 --- a/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java +++ b/core/src/test/java/org/elasticsearch/script/ScriptServiceTests.java @@ -387,6 +387,73 @@ public class ScriptServiceTests extends ESTestCase { } } + @Test + public void testCompileCountedInCompilationStats() throws IOException { + buildScriptService(Settings.EMPTY); + scriptService.compile(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + assertEquals(1L, scriptService.stats().getCompilations()); + } + + @Test + public void testExecutableCountedInCompilationStats() throws IOException { + buildScriptService(Settings.EMPTY); + scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + assertEquals(1L, 
scriptService.stats().getCompilations()); + } + + @Test + public void testSearchCountedInCompilationStats() throws IOException { + buildScriptService(Settings.EMPTY); + scriptService.search(null, new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + assertEquals(1L, scriptService.stats().getCompilations()); + } + + @Test + public void testMultipleCompilationsCountedInCompilationStats() throws IOException { + buildScriptService(Settings.EMPTY); + int numberOfCompilations = randomIntBetween(1, 1024); + for (int i = 0; i < numberOfCompilations; i++) { + scriptService.compile(new Script(i + " + " + i, ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + } + assertEquals(numberOfCompilations, scriptService.stats().getCompilations()); + } + + @Test + public void testCompilationStatsOnCacheHit() throws IOException { + Settings.Builder builder = Settings.builder(); + builder.put(ScriptService.SCRIPT_CACHE_SIZE_SETTING, 1); + buildScriptService(builder.build()); + scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + assertEquals(1L, scriptService.stats().getCompilations()); + } + + @Test + public void testFileScriptCountedInCompilationStats() throws IOException { + buildScriptService(Settings.EMPTY); + createFileScripts("test"); + scriptService.compile(new Script("file_script", ScriptType.FILE, "test", null), randomFrom(scriptContexts)); + assertEquals(1L, scriptService.stats().getCompilations()); + } + + @Test + public void testIndexedScriptCountedInCompilationStats() throws IOException { + buildScriptService(Settings.EMPTY); + scriptService.compile(new Script("script", ScriptType.INDEXED, "test", null), randomFrom(scriptContexts)); + assertEquals(1L, scriptService.stats().getCompilations()); + } + + @Test + public void testCacheEvictionCountedInCacheEvictionsStats() 
throws IOException { + Settings.Builder builder = Settings.builder(); + builder.put(ScriptService.SCRIPT_CACHE_SIZE_SETTING, 1); + buildScriptService(builder.build()); + scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + scriptService.executable(new Script("2+2", ScriptType.INLINE, "test", null), randomFrom(scriptContexts)); + assertEquals(2L, scriptService.stats().getCompilations()); + assertEquals(1L, scriptService.stats().getCacheEvictions()); + } + private void createFileScripts(String... langs) throws IOException { for (String lang : langs) { Path scriptPath = scriptsFilePath.resolve("file_script." + lang); diff --git a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java index 19fe03ce988..189a813bbf0 100644 --- a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java +++ b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java @@ -1855,7 +1855,7 @@ public final class InternalTestCluster extends TestCluster { } NodeService nodeService = getInstanceFromNode(NodeService.class, nodeAndClient.node); - NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false); + NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false, false); assertThat("Fielddata size must be 0 on node: " + stats.getNode(), stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0l)); assertThat("Query cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getQueryCache().getMemorySizeInBytes(), equalTo(0l)); assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); diff --git a/dev-tools/src/main/resources/ant/integration-tests.xml b/dev-tools/src/main/resources/ant/integration-tests.xml index 
4504daca3d9..3709687448b 100644 --- a/dev-tools/src/main/resources/ant/integration-tests.xml +++ b/dev-tools/src/main/resources/ant/integration-tests.xml [the XML element bodies of these hunks were lost in extraction; only the hunk headers survive] @@ -237,7 +237,7 @@ @@ -252,7 +252,7 @@ @@ -273,13 +273,13 @@ @@ -306,7 +306,7 @@ @@ -315,11 +315,11 @@ @@ -359,5 +359,4 @@ diff --git a/dev-tools/src/main/resources/forbidden/all-signatures.txt b/dev-tools/src/main/resources/forbidden/all-signatures.txt index b03cd14731f..642310519c8 100644 --- a/dev-tools/src/main/resources/forbidden/all-signatures.txt +++ b/dev-tools/src/main/resources/forbidden/all-signatures.txt @@ -46,17 +46,6 @@ java.nio.file.FileSystems#getDefault() @ use PathUtils.getDefault instead. java.nio.file.Files#createTempDirectory(java.lang.String,java.nio.file.attribute.FileAttribute[]) java.nio.file.Files#createTempFile(java.lang.String,java.lang.String,java.nio.file.attribute.FileAttribute[]) -@defaultMessage Constructing a DateTime without a time zone is dangerous -org.joda.time.DateTime#<init>() -org.joda.time.DateTime#<init>(long) -org.joda.time.DateTime#<init>(int, int, int, int, int) -org.joda.time.DateTime#<init>(int, int, int, int, int, int) -org.joda.time.DateTime#<init>(int, int, int, int, int, int, int) -org.joda.time.DateTime#now() -org.joda.time.DateTimeZone#getDefault() - -com.google.common.collect.Iterators#emptyIterator() @ Use Collections.emptyIterator instead - @defaultMessage Don't use java serialization - this can break BWC without noticing it java.io.ObjectOutputStream java.io.ObjectOutput diff --git a/dev-tools/src/main/resources/forbidden/third-party-shaded-signatures.txt b/dev-tools/src/main/resources/forbidden/third-party-shaded-signatures.txt index 798265d8ad9..db1cd6f83be 100644 --- a/dev-tools/src/main/resources/forbidden/third-party-shaded-signatures.txt +++ b/dev-tools/src/main/resources/forbidden/third-party-shaded-signatures.txt @@ -20,3 +20,14 @@ org.elasticsearch.common.primitives.Longs#compare(long,long) @defaultMessage unsafe encoders/decoders 
have problems in the lzf compress library. Use variants of encode/decode functions which take Encoder/Decoder. org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder#<init>() org.elasticsearch.common.compress.lzf.util.ChunkDecoderFactory#optimalInstance() + +@defaultMessage Constructing a DateTime without a time zone is dangerous +org.elasticsearch.joda.time.DateTime#<init>() +org.elasticsearch.joda.time.DateTime#<init>(long) +org.elasticsearch.joda.time.DateTime#<init>(int, int, int, int, int) +org.elasticsearch.joda.time.DateTime#<init>(int, int, int, int, int, int) +org.elasticsearch.joda.time.DateTime#<init>(int, int, int, int, int, int, int) +org.elasticsearch.joda.time.DateTime#now() +org.elasticsearch.joda.time.DateTimeZone#getDefault() + +org.elasticsearch.common.collect.Iterators#emptyIterator() @ Use Collections.emptyIterator instead diff --git a/dev-tools/src/main/resources/forbidden/third-party-unshaded-signatures.txt b/dev-tools/src/main/resources/forbidden/third-party-unshaded-signatures.txt index 379951e8c3a..9979f8c221a 100644 --- a/dev-tools/src/main/resources/forbidden/third-party-unshaded-signatures.txt +++ b/dev-tools/src/main/resources/forbidden/third-party-unshaded-signatures.txt @@ -58,3 +58,14 @@ com.ning.compress.lzf.LZFOutputStream#<init>(java.io.OutputStream) com.ning.compress.lzf.LZFOutputStream#<init>(java.io.OutputStream, com.ning.compress.BufferRecycler) com.ning.compress.lzf.LZFUncompressor#<init>(com.ning.compress.DataHandler) com.ning.compress.lzf.LZFUncompressor#<init>(com.ning.compress.DataHandler, com.ning.compress.BufferRecycler) + +@defaultMessage Constructing a DateTime without a time zone is dangerous +org.joda.time.DateTime#<init>() +org.joda.time.DateTime#<init>(long) +org.joda.time.DateTime#<init>(int, int, int, int, int) +org.joda.time.DateTime#<init>(int, int, int, int, int, int) +org.joda.time.DateTime#<init>(int, int, int, int, int, int, int) +org.joda.time.DateTime#now() +org.joda.time.DateTimeZone#getDefault() + +com.google.common.collect.Iterators#emptyIterator() @ Use Collections.emptyIterator 
instead \ No newline at end of file diff --git a/dev-tools/src/main/resources/license-check/check_license_and_sha.pl b/dev-tools/src/main/resources/license-check/check_license_and_sha.pl index cc5f5b02773..62e70b581c7 100755 --- a/dev-tools/src/main/resources/license-check/check_license_and_sha.pl +++ b/dev-tools/src/main/resources/license-check/check_license_and_sha.pl @@ -2,6 +2,7 @@ use strict; use warnings; +use v5.10; use FindBin qw($RealBin); use lib "$RealBin/lib"; @@ -10,20 +11,9 @@ use File::Temp(); use File::Find(); use File::Basename qw(basename); use Archive::Extract(); +use Digest::SHA(); $Archive::Extract::PREFER_BIN = 1; -our $SHA_CLASS = 'Digest::SHA'; -if ( eval { require Digest::SHA } ) { - $SHA_CLASS = 'Digest::SHA'; -} -else { - - print STDERR "Digest::SHA not available. " - . "Falling back to Digest::SHA::PurePerl\n"; - require Digest::SHA::PurePerl; - $SHA_CLASS = 'Digest::SHA::PurePerl'; -} - my $mode = shift(@ARGV) || ""; die usage() unless $mode =~ /^--(check|update)$/; @@ -32,6 +22,9 @@ my $Source = shift(@ARGV) || die usage(); $License_Dir = File::Spec->rel2abs($License_Dir) . '/'; $Source = File::Spec->rel2abs($Source); +say "LICENSE DIR: $License_Dir"; +say "SOURCE: $Source"; + die "License dir is not a directory: $License_Dir\n" . 
usage() unless -d $License_Dir; @@ -59,15 +52,15 @@ sub check_shas_and_licenses { for my $jar ( sort keys %new ) { my $old_sha = delete $old{$jar}; unless ($old_sha) { - print STDERR "$jar: SHA is missing\n"; + say STDERR "$jar: SHA is missing"; $error++; $sha_error++; next; } unless ( $old_sha eq $new{$jar} ) { - print STDERR - "$jar: SHA has changed, expected $old_sha but found $new{$jar}\n"; + say STDERR + "$jar: SHA has changed, expected $old_sha but found $new{$jar}"; $error++; $sha_error++; next; @@ -95,41 +88,37 @@ sub check_shas_and_licenses { } } unless ($license_found) { - print STDERR "$jar: LICENSE is missing\n"; + say STDERR "$jar: LICENSE is missing"; $error++; $sha_error++; } unless ($notice_found) { - print STDERR "$jar: NOTICE is missing\n"; + say STDERR "$jar: NOTICE is missing"; $error++; } } if ( keys %old ) { - print STDERR "Extra SHA files present for: " . join ", ", - sort keys %old; - print "\n"; + say STDERR "Extra SHA files present for: " . join ", ", sort keys %old; $error++; } my @unused_licenses = grep { !$licenses{$_} } keys %licenses; if (@unused_licenses) { $error++; - print STDERR "Extra LICENCE file present: " . join ", ", + say STDERR "Extra LICENCE file present: " . join ", ", sort @unused_licenses; - print "\n"; } my @unused_notices = grep { !$notices{$_} } keys %notices; if (@unused_notices) { $error++; - print STDERR "Extra NOTICE file present: " . join ", ", + say STDERR "Extra NOTICE file present: " . 
join ", ", sort @unused_notices; - print "\n"; } if ($sha_error) { - print STDERR <<"SHAS" + say STDERR <<"SHAS" You can update the SHA files by running: @@ -137,7 +126,7 @@ $0 --update $License_Dir $Source SHAS } - print("All SHAs and licenses OK\n") unless $error; + say("All SHAs and licenses OK") unless $error; return $error; } @@ -150,13 +139,13 @@ sub write_shas { for my $jar ( sort keys %new ) { if ( $old{$jar} ) { next if $old{$jar} eq $new{$jar}; - print "Updating $jar\n"; + say "Updating $jar"; } else { - print "Adding $jar\n"; + say "Adding $jar"; } open my $fh, '>', $License_Dir . $jar or die $!; - print $fh $new{$jar} . "\n" or die $!; + say $fh $new{$jar} or die $!; close $fh or die $!; } continue { @@ -164,10 +153,10 @@ sub write_shas { } for my $jar ( sort keys %old ) { - print "Deleting $jar\n"; + say "Deleting $jar"; unlink $License_Dir . $jar or die $!; } - print "SHAs updated\n"; + say "SHAs updated"; return 0; } @@ -212,8 +201,6 @@ sub jars_from_zip { $archive->extract( to => $dir_name ) || die $archive->error; my @jars = map { File::Spec->rel2abs( $_, $dir_name ) } grep { /\.jar$/ && !/elasticsearch[^\/]*$/ } @{ $archive->files }; - die "No JARS found in: $source\n" - unless @jars; return calculate_shas(@jars); } @@ -231,8 +218,6 @@ sub jars_from_dir { }, $source ); - die "No JARS found in: $source\n" - unless @jars; return calculate_shas(@jars); } @@ -241,7 +226,7 @@ sub calculate_shas { #=================================== my %shas; while ( my $file = shift() ) { - my $digest = eval { $SHA_CLASS->new(1)->addfile($file) } + my $digest = eval { Digest::SHA->new(1)->addfile($file) } or die "Error calculating SHA1 for <$file>: $!\n"; $shas{ basename($file) . 
".sha1" } = $digest->hexdigest; } diff --git a/distribution/deb/pom.xml b/distribution/deb/pom.xml index 2567fa996ab..46173b033f1 100644 --- a/distribution/deb/pom.xml +++ b/distribution/deb/pom.xml @@ -139,6 +139,12 @@ root + + template + + ${packaging.elasticsearch.conf.dir}/scripts + + ${project.build.directory}/generated-packaging/deb/env/elasticsearch diff --git a/distribution/deb/src/main/packaging/init.d/elasticsearch b/distribution/deb/src/main/packaging/init.d/elasticsearch index 19198c91314..7bbb69d440a 100755 --- a/distribution/deb/src/main/packaging/init.d/elasticsearch +++ b/distribution/deb/src/main/packaging/init.d/elasticsearch @@ -175,8 +175,7 @@ case "$1" in # Start Daemon start-stop-daemon -d $ES_HOME --start -b --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS return=$? - if [ $return -eq 0 ] - then + if [ $return -eq 0 ]; then i=0 timeout=10 # Wait for the process to be properly started before exiting @@ -189,9 +188,9 @@ case "$1" in exit 1 fi done - else - log_end_msg $return fi + log_end_msg $return + exit $return ;; stop) log_daemon_msg "Stopping $DESC" @@ -199,7 +198,8 @@ case "$1" in if [ -f "$PID_FILE" ]; then start-stop-daemon --stop --pidfile "$PID_FILE" \ --user "$ES_USER" \ - --retry=TERM/20/KILL/5 >/dev/null + --quiet \ + --retry forever/TERM/20 > /dev/null if [ $? -eq 1 ]; then log_progress_msg "$DESC is not running but pid file exists, cleaning up" elif [ $? 
-eq 3 ]; then diff --git a/distribution/pom.xml b/distribution/pom.xml index 9a01416ba30..cb3bdfc5275 100644 --- a/distribution/pom.xml +++ b/distribution/pom.xml @@ -29,6 +29,10 @@ /usr/lib/sysctl.d /usr/lib/tmpfiles.d + + ${project.basedir}/../licenses + ${integ.scratch} + /usr/bin/rpmbuild @@ -97,31 +101,6 @@ org.apache.maven.plugins maven-antrun-plugin - - - check-license - verify - - run - - - ${skip.integ.tests} - - - - - Running license check - - - - - - - - - - org.apache.maven.plugins diff --git a/distribution/rpm/pom.xml b/distribution/rpm/pom.xml index f073561cf47..b682ec32201 100644 --- a/distribution/rpm/pom.xml +++ b/distribution/rpm/pom.xml @@ -144,6 +144,10 @@ + + ${packaging.elasticsearch.conf.dir}/scripts + noreplace + /etc/sysconfig/ diff --git a/distribution/rpm/src/main/packaging/init.d/elasticsearch b/distribution/rpm/src/main/packaging/init.d/elasticsearch index 2b247f933bb..9626dfc862b 100644 --- a/distribution/rpm/src/main/packaging/init.d/elasticsearch +++ b/distribution/rpm/src/main/packaging/init.d/elasticsearch @@ -120,7 +120,7 @@ start() { stop() { echo -n $"Stopping $prog: " # stop it here, often "killproc $prog" - killproc -p $pidfile -d 20 $prog + killproc -p $pidfile -d ${packaging.elasticsearch.stopping.timeout} $prog retval=$? 
echo [ $retval -eq 0 ] && rm -f $lockfile diff --git a/distribution/rpm/src/main/packaging/packaging.properties b/distribution/rpm/src/main/packaging/packaging.properties index 7aa3228e381..b5bf28aef52 100644 --- a/distribution/rpm/src/main/packaging/packaging.properties +++ b/distribution/rpm/src/main/packaging/packaging.properties @@ -14,3 +14,6 @@ packaging.type=rpm # Custom header for package scripts packaging.scripts.header= packaging.scripts.footer=# Built for ${project.name}-${project.version} (${packaging.type}) + +# Maximum time to wait for elasticsearch to stop (default to 1 day) +packaging.elasticsearch.stopping.timeout=86400 diff --git a/distribution/src/main/packaging/scripts/postrm b/distribution/src/main/packaging/scripts/postrm index 3dfc52e4474..ee1c49b14ad 100644 --- a/distribution/src/main/packaging/scripts/postrm +++ b/distribution/src/main/packaging/scripts/postrm @@ -8,8 +8,8 @@ ${packaging.scripts.header} # $1=purge : indicates an upgrade # # On RedHat, -# $1=1 : indicates an new install -# $1=2 : indicates an upgrade +# $1=0 : indicates a removal +# $1=1 : indicates an upgrade @@ -39,7 +39,7 @@ case "$1" in REMOVE_SERVICE=true REMOVE_USER_AND_GROUP=true ;; - 2) + 1) # If $1=1 this is an upgrade IS_UPGRADE=true ;; diff --git a/distribution/src/main/packaging/scripts/prerm b/distribution/src/main/packaging/scripts/prerm index e8da0069067..11d51c68637 100644 --- a/distribution/src/main/packaging/scripts/prerm +++ b/distribution/src/main/packaging/scripts/prerm @@ -47,13 +47,13 @@ esac if [ "$STOP_REQUIRED" = "true" ]; then echo -n "Stopping elasticsearch service..." 
if command -v systemctl >/dev/null; then - systemctl --no-reload stop elasticsearch.service > /dev/null 2>&1 || true + systemctl --no-reload stop elasticsearch.service elif [ -x /etc/init.d/elasticsearch ]; then if command -v invoke-rc.d >/dev/null; then - invoke-rc.d elasticsearch stop || true + invoke-rc.d elasticsearch stop else - /etc/init.d/elasticsearch stop || true + /etc/init.d/elasticsearch stop fi # older suse linux distributions do not ship with systemd @@ -61,7 +61,7 @@ if [ "$STOP_REQUIRED" = "true" ]; then # this tries to start the elasticsearch service on these # as well without failing this script elif [ -x /etc/rc.d/init.d/elasticsearch ] ; then - /etc/rc.d/init.d/elasticsearch stop || true + /etc/rc.d/init.d/elasticsearch stop fi echo " OK" fi diff --git a/distribution/src/main/packaging/systemd/elasticsearch.service b/distribution/src/main/packaging/systemd/elasticsearch.service index b857d6136bf..6791992df0c 100644 --- a/distribution/src/main/packaging/systemd/elasticsearch.service +++ b/distribution/src/main/packaging/systemd/elasticsearch.service @@ -32,9 +32,6 @@ StandardOutput=null # Connects standard error to journal StandardError=journal -# When a JVM receives a SIGTERM signal it exits with code 143 -SuccessExitStatus=143 - # Specifies the maximum file descriptor number that can be opened by this process LimitNOFILE=${packaging.os.max.open.files} @@ -43,8 +40,17 @@ LimitNOFILE=${packaging.os.max.open.files} # in elasticsearch.yml and 'MAX_LOCKED_MEMORY=unlimited' in ${packaging.env.file} #LimitMEMLOCK=infinity -# Shutdown delay in seconds, before process is tried to be killed with KILL (if configured) -TimeoutStopSec=20 +# Disable timeout logic and wait until process is stopped +TimeoutStopSec=0 + +# SIGTERM signal is used to stop the Java process +KillSignal=SIGTERM + +# Java process is never killed +SendSIGKILL=no + +# When a JVM receives a SIGTERM signal it exits with code 143 +SuccessExitStatus=143 [Install] WantedBy=multi-user.target 
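The `ScriptMetrics`/`ScriptStats` pair introduced earlier in this patch boils down to two monotonically increasing counters plus a point-in-time snapshot. A minimal standalone sketch of those semantics, using `java.util.concurrent.atomic.LongAdder` in place of Elasticsearch's `CounterMetric` (the class and method names below are illustrative, not from the patch):

```java
import java.util.concurrent.atomic.LongAdder;

// Standalone sketch of the counting semantics added in this patch:
// every compile bumps one counter, every cache eviction bumps another,
// and stats() takes a point-in-time snapshot of both.
public class ScriptMetricsSketch {
    private final LongAdder compilations = new LongAdder();
    private final LongAdder cacheEvictions = new LongAdder();

    public void onCompilation() { compilations.increment(); }

    public void onCacheEviction() { cacheEvictions.increment(); }

    // Snapshot as {compilations, cacheEvictions}.
    public long[] stats() {
        return new long[] { compilations.sum(), cacheEvictions.sum() };
    }

    public static void main(String[] args) {
        ScriptMetricsSketch metrics = new ScriptMetricsSketch();
        // Mirrors testCacheEvictionCountedInCacheEvictionsStats with a
        // cache of size 1: "1+1" compiles, then "2+2" compiles and
        // evicts "1+1" from the cache.
        metrics.onCompilation();   // compile "1+1"
        metrics.onCompilation();   // compile "2+2"
        metrics.onCacheEviction(); // "1+1" evicted
        long[] s = metrics.stats();
        System.out.println("compilations=" + s[0] + " cache_evictions=" + s[1]);
    }
}
```

Note that in the patch itself a cache hit never reaches `scriptMetrics.onCompilation()` — the counter is incremented on the `cache.put` path only — which is exactly what `testCompilationStatsOnCacheHit` asserts.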
diff --git a/docs/reference/aggregations/pipeline.asciidoc b/docs/reference/aggregations/pipeline.asciidoc index c566bb174e0..dfaaa74de21 100644 --- a/docs/reference/aggregations/pipeline.asciidoc +++ b/docs/reference/aggregations/pipeline.asciidoc @@ -8,7 +8,7 @@ experimental[] Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding information to the output tree. There are many different types of pipeline aggregation, each computing different information from -other aggregations, but these types can broken down into two families: +other aggregations, but these types can be broken down into two families: _Parent_:: A family of pipeline aggregations that is provided with the output of its parent aggregation and is able diff --git a/docs/reference/cat/nodes.asciidoc b/docs/reference/cat/nodes.asciidoc index 0eccba76827..ab9ffd9a384 100644 --- a/docs/reference/cat/nodes.asciidoc +++ b/docs/reference/cat/nodes.asciidoc @@ -168,6 +168,8 @@ percolating |0s |`percolate.total` |`pto`, `percolateTotal` |No |Total percolations |0 |`refresh.total` |`rto`, `refreshTotal` |No |Number of refreshes |16 |`refresh.time` |`rti`, `refreshTime` |No |Time spent in refreshes |91ms +|`script.compilations` |`scrcc`, `scriptCompilations` |No |Total script compilations |17 +|`script.cache_evictions` |`scrce`, `scriptCacheEvictions` |No |Total compiled scripts evicted from cache |6 |`search.fetch_current` |`sfc`, `searchFetchCurrent` |No |Current fetch phase operations |0 |`search.fetch_time` |`sfti`, `searchFetchTime` |No |Time spent in fetch diff --git a/docs/reference/index-modules/allocation/delayed.asciidoc b/docs/reference/index-modules/allocation/delayed.asciidoc index cc9e72e3647..8d936383847 100644 --- a/docs/reference/index-modules/allocation/delayed.asciidoc +++ b/docs/reference/index-modules/allocation/delayed.asciidoc @@ -60,6 +60,17 @@ NOTE: This setting will not affect the promotion of replicas to primaries, nor will 
it affect the assignment of replicas that have not been assigned previously. +==== Cancellation of shard relocation + +If delayed allocation times out, the master assigns the missing shards to +another node which will start recovery. If the missing node rejoins the +cluster, and its shards still have the same sync-id as the primary, shard +relocation will be cancelled and the synced shard will be used for recovery +instead. + +For this reason, the default `timeout` is set to just one minute: even if shard +relocation begins, cancelling recovery in favour of the synced shard is cheap. + ==== Monitoring delayed unassigned shards The number of shards whose allocation has been delayed by this timeout setting diff --git a/docs/reference/migration/migrate_2_0.asciidoc b/docs/reference/migration/migrate_2_0.asciidoc index b7237a6c44a..d6f0b0fc94b 100644 --- a/docs/reference/migration/migrate_2_0.asciidoc +++ b/docs/reference/migration/migrate_2_0.asciidoc @@ -827,6 +827,19 @@ infrastructure for the pluginmanager and the elasticsearch start up script, the `-v` parameter now stands for `--verbose`, whereas `-V` or `--version` can be used to show the Elasticsearch version and exit. +=== `/bin/elasticsearch` dynamic parameters must come after static ones + +If you are setting configuration options like cluster name or node name via +the command line, you have to ensure that static options like the pid file +path or daemonizing always come first, for example: + +``` +/bin/elasticsearch -d -p /tmp/foo.pid --http.cors.enabled=true --http.cors.allow-origin='*' + +``` + +For a list of those static parameters, run `/bin/elasticsearch -h`. + === Aliases Fields used in alias filters no longer have to exist in the mapping upon alias creation time. 
Alias filters are now diff --git a/docs/reference/modules/discovery/zen.asciidoc b/docs/reference/modules/discovery/zen.asciidoc index a0d3be9e1f9..7cca0175f3c 100644 --- a/docs/reference/modules/discovery/zen.asciidoc +++ b/docs/reference/modules/discovery/zen.asciidoc @@ -82,6 +82,12 @@ serves as a protection against (partial) network failures where node may unjustly think that the master has failed. In this case the node will simply hear from other nodes about the currently active master. +If `discovery.zen.master_election.filter_client` is `true`, pings from client nodes (nodes where `node.client` is +`true`, or both `node.data` and `node.master` are `false`) are ignored during master election; the default value is +`true`. If `discovery.zen.master_election.filter_data` is `true`, pings from non-master-eligible data nodes (nodes +where `node.data` is `true` and `node.master` is `false`) are ignored during master election; the default value is +`false`. Pings from master-eligible nodes are always observed during master election. + Nodes can be excluded from becoming a master by setting `node.master` to `false`. Note, once a node is a client node (`node.client` set to `true`), it will not be allowed to become a master (`node.master` is diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc index 10388054e22..38812b1e82f 100644 --- a/docs/reference/setup.asciidoc +++ b/docs/reference/setup.asciidoc @@ -59,13 +59,13 @@ There are added features when using the `elasticsearch` shell script. The first, which was explained earlier, is the ability to easily run the process either in the foreground or the background. -Another feature is the ability to pass `-X` and `-D` or getopt long style +Another feature is the ability to pass `-D` or getopt long style configuration parameters directly to the script. When set, all override anything set using either `JAVA_OPTS` or `ES_JAVA_OPTS`.
For example: [source,sh] -------------------------------------------------- -$ bin/elasticsearch -Xmx2g -Xms2g -Des.index.store.type=memory --node.name=my-node +$ bin/elasticsearch -Des.index.refresh_interval=5s --node.name=my-node -------------------------------------------------- ************************************************************************* diff --git a/docs/reference/setup/configuration.asciidoc b/docs/reference/setup/configuration.asciidoc index 0719b1c60d3..45c384bb7bb 100644 --- a/docs/reference/setup/configuration.asciidoc +++ b/docs/reference/setup/configuration.asciidoc @@ -319,9 +319,8 @@ YAML or JSON): -------------------------------------------------- $ curl -XPUT http://localhost:9200/kimchy/ -d \ ' -index : - store: - type: memory +index: + refresh_interval: 5s ' -------------------------------------------------- @@ -331,8 +330,7 @@ within the `elasticsearch.yml` file, the following can be set: [source,yaml] -------------------------------------------------- index : - store: - type: memory + refresh_interval: 5s -------------------------------------------------- This means that every index that gets created on the specific node @@ -343,7 +341,7 @@ above can also be set as a "collapsed" setting, for example: [source,sh] -------------------------------------------------- -$ elasticsearch -Des.index.store.type=memory +$ elasticsearch -Des.index.refresh_interval=5s -------------------------------------------------- All of the index level configuration can be found within each diff --git a/plugins/delete-by-query/licenses/no_deps.txt b/plugins/delete-by-query/licenses/no_deps.txt new file mode 100644 index 00000000000..8cce254d037 --- /dev/null +++ b/plugins/delete-by-query/licenses/no_deps.txt @@ -0,0 +1 @@ +This plugin has no third party dependencies diff --git a/plugins/mapper-size/licenses/no_deps.txt b/plugins/mapper-size/licenses/no_deps.txt new file mode 100644 index 00000000000..8cce254d037 --- /dev/null +++ 
b/plugins/mapper-size/licenses/no_deps.txt @@ -0,0 +1 @@ +This plugin has no third party dependencies diff --git a/plugins/site-example/licenses/no_deps.txt b/plugins/site-example/licenses/no_deps.txt new file mode 100644 index 00000000000..8cce254d037 --- /dev/null +++ b/plugins/site-example/licenses/no_deps.txt @@ -0,0 +1 @@ +This plugin has no third party dependencies diff --git a/pom.xml b/pom.xml index 1e7ca1a6c75..4b3cef39c8d 100644 --- a/pom.xml +++ b/pom.xml @@ -57,6 +57,10 @@ ${elasticsearch.tools.directory}/ant/integration-tests.xml ${elasticsearch.integ.antfile.default} + + ${project.basedir}/licenses + ${basedir}/target/releases/${project.build.finalName}.zip + auto true @@ -1191,6 +1195,28 @@ org.eclipse.jdt.ui.text.custom_code_templates=run + + check-license + verify + + run + + + + true + + Running license check + + + + + + + + + + diff --git a/qa/pom.xml b/qa/pom.xml index a5d68c1beaf..066f897bbe7 100644 --- a/qa/pom.xml +++ b/qa/pom.xml @@ -33,188 +33,7 @@ lucene-test-framework test - - org.elasticsearch - elasticsearch - test-jar - test - - - - org.elasticsearch - elasticsearch - provided - - - org.apache.lucene - lucene-core - provided - - - org.apache.lucene - lucene-backward-codecs - provided - - - org.apache.lucene - lucene-analyzers-common - provided - - - org.apache.lucene - lucene-queries - provided - - - org.apache.lucene - lucene-memory - provided - - - org.apache.lucene - lucene-highlighter - provided - - - org.apache.lucene - lucene-queryparser - provided - - - org.apache.lucene - lucene-suggest - provided - - - org.apache.lucene - lucene-join - provided - - - org.apache.lucene - lucene-spatial - provided - - - org.apache.lucene - lucene-expressions - provided - - - com.spatial4j - spatial4j - provided - - - com.vividsolutions - jts - provided - - - com.github.spullara.mustache.java - compiler - provided - - - com.google.guava - guava - provided - - - com.carrotsearch - hppc - provided - - - joda-time - joda-time - provided - - - 
org.joda - joda-convert - provided - - - com.fasterxml.jackson.core - jackson-core - provided - - - com.fasterxml.jackson.dataformat - jackson-dataformat-smile - provided - - - com.fasterxml.jackson.dataformat - jackson-dataformat-yaml - provided - - - com.fasterxml.jackson.dataformat - jackson-dataformat-cbor - provided - - - io.netty - netty - provided - - - com.ning - compress-lzf - provided - - - com.tdunning - t-digest - provided - - - org.apache.commons - commons-lang3 - provided - - - commons-cli - commons-cli - provided - - - org.codehaus.groovy - groovy-all - indy - provided - - - log4j - log4j - provided - - - log4j - apache-log4j-extras - provided - - - org.slf4j - slf4j-api - provided - - - net.java.dev.jna - jna - provided - - - - - - org.apache.httpcomponents - httpclient - test - @@ -310,11 +129,32 @@ + + org.apache.maven.plugins + maven-antrun-plugin + + + + check-license + none + + + smoke-test-plugins + smoke-test-shaded + + + + vagrant + + vagrant + + + diff --git a/qa/smoke-test-plugins/pom.xml b/qa/smoke-test-plugins/pom.xml index 489608b4ff3..4a02c591306 100644 --- a/qa/smoke-test-plugins/pom.xml +++ b/qa/smoke-test-plugins/pom.xml @@ -31,6 +31,190 @@ smoke_test_plugins false + + + org.elasticsearch + elasticsearch + test-jar + test + + + + + org.elasticsearch + elasticsearch + provided + + + org.apache.lucene + lucene-core + provided + + + org.apache.lucene + lucene-backward-codecs + provided + + + org.apache.lucene + lucene-analyzers-common + provided + + + org.apache.lucene + lucene-queries + provided + + + org.apache.lucene + lucene-memory + provided + + + org.apache.lucene + lucene-highlighter + provided + + + org.apache.lucene + lucene-queryparser + provided + + + org.apache.lucene + lucene-suggest + provided + + + org.apache.lucene + lucene-join + provided + + + org.apache.lucene + lucene-spatial + provided + + + org.apache.lucene + lucene-expressions + provided + + + com.spatial4j + spatial4j + provided + + + com.vividsolutions + jts + 
provided + + + com.github.spullara.mustache.java + compiler + provided + + + com.google.guava + guava + provided + + + com.carrotsearch + hppc + provided + + + joda-time + joda-time + provided + + + org.joda + joda-convert + provided + + + com.fasterxml.jackson.core + jackson-core + provided + + + com.fasterxml.jackson.dataformat + jackson-dataformat-smile + provided + + + com.fasterxml.jackson.dataformat + jackson-dataformat-yaml + provided + + + com.fasterxml.jackson.dataformat + jackson-dataformat-cbor + provided + + + io.netty + netty + provided + + + com.ning + compress-lzf + provided + + + com.tdunning + t-digest + provided + + + org.apache.commons + commons-lang3 + provided + + + commons-cli + commons-cli + provided + + + org.codehaus.groovy + groovy-all + indy + provided + + + log4j + log4j + provided + + + log4j + apache-log4j-extras + provided + + + org.slf4j + slf4j-api + provided + + + net.java.dev.jna + jna + provided + + + + + + org.apache.httpcomponents + httpclient + test + + diff --git a/qa/smoke-test-shaded/pom.xml b/qa/smoke-test-shaded/pom.xml new file mode 100644 index 00000000000..0f701d06034 --- /dev/null +++ b/qa/smoke-test-shaded/pom.xml @@ -0,0 +1,38 @@ + + + 4.0.0 + + + org.elasticsearch.qa + elasticsearch-qa + 2.0.0-beta1-SNAPSHOT + + + smoke-test-shaded + QA: Smoke Test Shaded Jar + Runs a simple + + + shaded + + + + + org.elasticsearch.distribution.shaded + elasticsearch + 2.0.0-beta1-SNAPSHOT + + + org.hamcrest + hamcrest-all + test + + + org.apache.lucene + lucene-test-framework + test + + + diff --git a/qa/smoke-test-shaded/src/test/java/org/elasticsearch/shaded/test/ShadedIT.java b/qa/smoke-test-shaded/src/test/java/org/elasticsearch/shaded/test/ShadedIT.java new file mode 100644 index 00000000000..13a6804ab20 --- /dev/null +++ b/qa/smoke-test-shaded/src/test/java/org/elasticsearch/shaded/test/ShadedIT.java @@ -0,0 +1,54 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.shaded.test; + +import org.apache.lucene.util.LuceneTestCase; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.client.Client; +import org.elasticsearch.common.logging.ESLoggerFactory; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.node.Node; +import org.elasticsearch.node.NodeBuilder; + +import java.nio.file.Path; + +/** + */ +public class ShadedIT extends LuceneTestCase { + + public void testStartShadedNode() { + ESLoggerFactory.getRootLogger().setLevel("ERROR"); + Path data = createTempDir(); + Settings settings = Settings.builder() + .put("path.home", data.toAbsolutePath().toString()) + .put("node.mode", "local") + .put("http.enabled", "false") + .build(); + NodeBuilder builder = NodeBuilder.nodeBuilder().data(true).settings(settings).loadConfigSettings(false).local(true); + try (Node node = builder.node()) { + Client client = node.client(); + client.admin().indices().prepareCreate("test").get(); + client.prepareIndex("test", "foo").setSource("{ \"field\" : \"value\" }").get(); + client.admin().indices().prepareRefresh().get(); + SearchResponse response = client.prepareSearch("test").get(); + assertEquals(1L, response.getHits().getTotalHits()); + } + } +} diff --git
a/qa/smoke-test-shaded/src/test/java/org/elasticsearch/shaded/test/ShadedTest.java b/qa/smoke-test-shaded/src/test/java/org/elasticsearch/shaded/test/ShadedTest.java new file mode 100644 index 00000000000..d2d8c2422be --- /dev/null +++ b/qa/smoke-test-shaded/src/test/java/org/elasticsearch/shaded/test/ShadedTest.java @@ -0,0 +1,49 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ +package org.elasticsearch.shaded.test; + +import org.apache.lucene.util.LuceneTestCase; +import org.junit.Test; + +/** + */ +public class ShadedTest extends LuceneTestCase { + + @Test + public void testLoadShadedClasses() throws ClassNotFoundException { + Class.forName("org.elasticsearch.common.collect.ImmutableList"); + Class.forName("org.elasticsearch.common.joda.time.DateTime"); + Class.forName("org.elasticsearch.common.util.concurrent.jsr166e.LongAdder"); + } + + @Test(expected = ClassNotFoundException.class) + public void testGuavaIsNotOnTheCP() throws ClassNotFoundException { + Class.forName("com.google.common.collect.ImmutableList"); + } + + @Test(expected = ClassNotFoundException.class) + public void testJodaIsNotOnTheCP() throws ClassNotFoundException { + Class.forName("org.joda.time.DateTime"); + } + + @Test(expected = ClassNotFoundException.class) + public void testjsr166eIsNotOnTheCP() throws ClassNotFoundException { + Class.forName("com.twitter.jsr166e.LongAdder"); + } +} diff --git a/qa/vagrant/pom.xml b/qa/vagrant/pom.xml new file mode 100644 index 00000000000..b5b2713aa77 --- /dev/null +++ b/qa/vagrant/pom.xml @@ -0,0 +1,279 @@ + + + 4.0.0 + + org.elasticsearch.qa + elasticsearch-qa + 2.0.0-beta1-SNAPSHOT + + + elasticsearch-distribution-tests + QA: Elasticsearch Vagrant Tests + Tests the Elasticsearch distribution artifacts on virtual + machines using vagrant and bats. 
+ pom + + + + *.bats + sudo ES_CLEAN_BEFORE_TEST=true bats $BATS/${testScripts} + + precise, trusty, vivid, wheezy, jessie + centos-6, centos-7, fedora-22, oel-7 + + trusty + centos-7 + + + ${debBoxes} + + + /usr/bin/rpmbuild + + + + + + + maven-clean-plugin + + + clean-testroot + pre-integration-test + + clean + + + true + + + ${project.build.directory}/testroot + + + + + + + + + maven-dependency-plugin + + + copy-common-to-testroot + pre-integration-test + + copy + + + ${project.build.directory}/testroot + + + org.elasticsearch.distribution.zip + elasticsearch + ${elasticsearch.version} + zip + + + org.elasticsearch.distribution.tar + elasticsearch + ${elasticsearch.version} + tar.gz + + + org.elasticsearch.distribution.deb + elasticsearch + ${elasticsearch.version} + deb + + + + + + + + org.apache.maven.plugins + maven-antrun-plugin + + + ant-contrib + ant-contrib + 1.0b3 + + + ant + ant + + + + + + + check-vagrant-version + validate + + run + + + + + + + + + + test-vms + integration-test + + run + + + + + + + + + + + + + + + + + + + all + + ${allDebBoxes} + ${allRpmBoxes} + + + + + rpm + + + ${packaging.rpm.rpmbuild} + + + + + + maven-dependency-plugin + + + copy-rpm-to-testroot + pre-integration-test + + copy + + + ${project.build.directory}/testroot + + + org.elasticsearch.distribution.rpm + elasticsearch + ${elasticsearch.version} + rpm + + + + + + + + + + + org.elasticsearch.distribution + elasticsearch-rpm + ${elasticsearch.version} + rpm + + + + ${debBoxes}, ${rpmBoxes} + + + + + rpm-via-homebrew + + + /usr/local/bin/rpmbuild + + + + + + maven-dependency-plugin + + + copy-rpm-to-testroot + pre-integration-test + + copy + + + ${project.build.directory}/testroot + + + org.elasticsearch.distribution.rpm + elasticsearch + ${elasticsearch.version} + rpm + + + + + + + + + + ${debBoxes}, ${rpmBoxes} + + + + + set-boxes-to-test + + + !boxesToTest + + + + ${proposedBoxesToTest} + + + + + + + smoke-vms + + echo skipping tests + + + + diff --git 
a/qa/vagrant/src/dev/ant/vagrant-integration-tests.xml b/qa/vagrant/src/dev/ant/vagrant-integration-tests.xml new file mode 100644 index 00000000000..453a7957204 --- /dev/null +++ b/qa/vagrant/src/dev/ant/vagrant-integration-tests.xml @@ -0,0 +1,74 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/distribution/src/test/resources/packaging/scripts/20_tar_package.bats b/qa/vagrant/src/test/resources/packaging/scripts/20_tar_package.bats similarity index 100% rename from distribution/src/test/resources/packaging/scripts/20_tar_package.bats rename to qa/vagrant/src/test/resources/packaging/scripts/20_tar_package.bats diff --git a/distribution/src/test/resources/packaging/scripts/25_tar_plugins.bats b/qa/vagrant/src/test/resources/packaging/scripts/25_tar_plugins.bats similarity index 97% rename from distribution/src/test/resources/packaging/scripts/25_tar_plugins.bats rename to qa/vagrant/src/test/resources/packaging/scripts/25_tar_plugins.bats index e4eccfaf1ec..0ec12eca2e9 100644 --- a/distribution/src/test/resources/packaging/scripts/25_tar_plugins.bats +++ b/qa/vagrant/src/test/resources/packaging/scripts/25_tar_plugins.bats @@ -50,6 +50,7 @@ setup() { # Install plugins with a tar archive ################################## @test "[TAR] install shield plugin" { + skip "awaits public release of shield for 2.0" # Install the archive install_archive @@ -90,6 +91,7 @@ setup() { } @test "[TAR] install shield plugin with a custom path.plugins" { + skip "awaits public release of shield for 2.0" # Install the archive install_archive @@ -143,6 +145,7 @@ setup() { } @test "[TAR] install shield plugin with a custom CONFIG_DIR" { + skip "awaits public release of shield for 2.0" # Install the archive install_archive @@ -199,6 +202,7 @@ setup() { } @test "[TAR] install shield plugin with a custom ES_JAVA_OPTS" { + skip "awaits public release of shield for 2.0" # Install the archive 
install_archive @@ -259,6 +263,8 @@ setup() { } @test "[TAR] install shield plugin to elasticsearch directory with a space" { + skip "awaits public release of shield for 2.0" + export ES_DIR="/tmp/elastic search" # Install the archive @@ -307,6 +313,7 @@ setup() { } @test "[TAR] install shield plugin from a directory with a space" { + skip "awaits public release of shield for 2.0" export SHIELD_ZIP_WITH_SPACE="/tmp/plugins with space/shield.zip" diff --git a/distribution/src/test/resources/packaging/scripts/30_deb_package.bats b/qa/vagrant/src/test/resources/packaging/scripts/30_deb_package.bats similarity index 100% rename from distribution/src/test/resources/packaging/scripts/30_deb_package.bats rename to qa/vagrant/src/test/resources/packaging/scripts/30_deb_package.bats diff --git a/distribution/src/test/resources/packaging/scripts/40_rpm_package.bats b/qa/vagrant/src/test/resources/packaging/scripts/40_rpm_package.bats similarity index 100% rename from distribution/src/test/resources/packaging/scripts/40_rpm_package.bats rename to qa/vagrant/src/test/resources/packaging/scripts/40_rpm_package.bats diff --git a/distribution/src/test/resources/packaging/scripts/50_plugins.bats b/qa/vagrant/src/test/resources/packaging/scripts/50_plugins.bats similarity index 98% rename from distribution/src/test/resources/packaging/scripts/50_plugins.bats rename to qa/vagrant/src/test/resources/packaging/scripts/50_plugins.bats index 5986a528117..99f2be98982 100644 --- a/distribution/src/test/resources/packaging/scripts/50_plugins.bats +++ b/qa/vagrant/src/test/resources/packaging/scripts/50_plugins.bats @@ -63,6 +63,7 @@ install_package() { # Install plugins with DEB/RPM package ################################## @test "[PLUGINS] install shield plugin" { + skip "awaits public release of shield for 2.0" # Install the package install_package @@ -103,6 +104,7 @@ install_package() { } @test "[PLUGINS] install shield plugin with a custom path.plugins" { + skip "awaits public release 
of shield for 2.0" # Install the package install_package @@ -160,6 +162,7 @@ install_package() { } @test "[PLUGINS] install shield plugin with a custom CONFIG_DIR" { + skip "awaits public release of shield for 2.0" # Install the package install_package @@ -227,6 +230,7 @@ install_package() { } @test "[PLUGINS] install shield plugin with a custom ES_JAVA_OPTS" { + skip "awaits public release of shield for 2.0" # Install the package install_package diff --git a/distribution/src/test/resources/packaging/scripts/60_systemd.bats b/qa/vagrant/src/test/resources/packaging/scripts/60_systemd.bats similarity index 100% rename from distribution/src/test/resources/packaging/scripts/60_systemd.bats rename to qa/vagrant/src/test/resources/packaging/scripts/60_systemd.bats diff --git a/distribution/src/test/resources/packaging/scripts/70_sysv_initd.bats b/qa/vagrant/src/test/resources/packaging/scripts/70_sysv_initd.bats similarity index 97% rename from distribution/src/test/resources/packaging/scripts/70_sysv_initd.bats rename to qa/vagrant/src/test/resources/packaging/scripts/70_sysv_initd.bats index 0cd0d652c47..97d1cce918f 100644 --- a/distribution/src/test/resources/packaging/scripts/70_sysv_initd.bats +++ b/qa/vagrant/src/test/resources/packaging/scripts/70_sysv_initd.bats @@ -97,7 +97,8 @@ setup() { skip_not_sysvinit run service elasticsearch status - [ "$status" -eq 3 ] + # precise returns 4, trusty 3 + [ "$status" -eq 3 ] || [ "$status" -eq 4 ] } # Simulates the behavior of a system restart: @@ -120,4 +121,4 @@ setup() { run service elasticsearch stop [ "$status" -eq 0 ] -} \ No newline at end of file +} diff --git a/distribution/src/test/resources/packaging/scripts/packaging_test_utils.bash b/qa/vagrant/src/test/resources/packaging/scripts/packaging_test_utils.bash similarity index 100% rename from distribution/src/test/resources/packaging/scripts/packaging_test_utils.bash rename to qa/vagrant/src/test/resources/packaging/scripts/packaging_test_utils.bash
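The exit-code tolerance added to `70_sysv_initd.bats` above is a common portability pattern: the LSB spec says a stopped service's `status` action returns 3, but some init implementations (Ubuntu precise in this suite) return 4. A minimal standalone sketch of the check, where the `is_stopped` helper and the hard-coded status values are illustrative stand-ins for the real `service elasticsearch status` call:

```shell
# Portable "service is stopped" check, mirroring the 70_sysv_initd.bats fix:
# accept either LSB's exit code 3 (trusty) or precise's 4.
is_stopped() {
    [ "$1" -eq 3 ] || [ "$1" -eq 4 ]
}

status=3                      # trusty-style exit code
is_stopped "$status" && echo "stopped (status $status)"

status=4                      # precise-style exit code
is_stopped "$status" && echo "stopped (status $status)"
```

In the actual bats test, `run service elasticsearch status` populates `$status`, and the assertion is simply `[ "$status" -eq 3 ] || [ "$status" -eq 4 ]`.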