Merge branch 'master' into feature/query-refactoring

commit 4e936d1964
.gitignore

@@ -15,9 +15,11 @@ docs/build.log
/tmp/
+backwards/
html_docs
+.vagrant/

## eclipse ignores (use 'mvn eclipse:eclipse' to build eclipse projects)
## All files (.project, .classpath, .settings/*) should be generated through Maven which
## will correctly set the classpath based on the declared dependencies and write settings
## files to ensure common coding style across Eclipse and IDEA.
.project
.classpath
CONTRIBUTING.md

@@ -6,7 +6,7 @@ Elasticsearch is an open source project and we love to receive contributions fro
Bug reports
-----------

-If you think you have found a bug in Elasticsearch, first make sure that you are testing against the [latest version of Elasticsearch](https://www.elastic.co/downloads/elasticsearch) - your issue may already have been fixed. If not, search our [issues list](https://github.com/elasticsearch/elasticsearch/issues) on GitHub in case a similar issue has already been opened.
+If you think you have found a bug in Elasticsearch, first make sure that you are testing against the [latest version of Elasticsearch](https://www.elastic.co/downloads/elasticsearch) - your issue may already have been fixed. If not, search our [issues list](https://github.com/elastic/elasticsearch/issues) on GitHub in case a similar issue has already been opened.

It is very helpful if you can prepare a reproduction of the bug. In other words, provide a small test case which we can run to confirm your bug. It makes it easier to find the problem and to fix it. Test cases should be provided as `curl` commands which we can copy and paste into a terminal to run it locally, for example:
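The example itself is cut off at this point in the diff; a minimal reproduction in the style the paragraph asks for might look like the sketch below (the index name, document, and query are illustrative, not the commit's actual example):

```sh
# index a document that triggers the problem
curl -XPUT 'localhost:9200/test/test/1' -d '{
  "title": "test document"
}'
# run the request that shows the unexpected behaviour
curl -XGET 'localhost:9200/test/_search?q=title:test'
```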
@@ -29,7 +29,7 @@ Feature requests
----------------

If you find yourself wishing for a feature that doesn't exist in Elasticsearch, you are probably not alone. There are bound to be others out there with similar needs. Many of the features that Elasticsearch has today have been added because our users saw the need.
-Open an issue on our [issues list](https://github.com/elasticsearch/elasticsearch/issues) on GitHub which describes the feature you would like to see, why you need it, and how it should work.
+Open an issue on our [issues list](https://github.com/elastic/elasticsearch/issues) on GitHub which describes the feature you would like to see, why you need it, and how it should work.

Contributing code and documentation changes
-------------------------------------------
@@ -38,7 +38,7 @@ If you have a bugfix or new feature that you would like to contribute to Elastic

We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code.

-The process for contributing to any of the [Elasticsearch repositories](https://github.com/elasticsearch/) is similar. Details for individual projects can be found below.
+The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below.

### Fork and clone the repository
@@ -58,7 +58,7 @@ Once your changes and tests are ready to submit for review:

2. Sign the Contributor License Agreement

-Please make sure you have signed our [Contributor License Agreement](http://www.elasticsearch.org/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.
+Please make sure you have signed our [Contributor License Agreement](https://www.elastic.co/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.

3. Rebase your changes
@@ -74,7 +74,7 @@ Then sit back and wait. There will probably be discussion about the pull request
Contributing to the Elasticsearch codebase
------------------------------------------

-**Repository:** [https://github.com/elasticsearch/elasticsearch](https://github.com/elasticsearch/elasticsearch)
+**Repository:** [https://github.com/elasticsearch/elasticsearch](https://github.com/elastic/elasticsearch)

Make sure you have [Maven](http://maven.apache.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE by running `mvn eclipse:eclipse` and then importing the project into their workspace: `File > Import > Existing project into workspace`.
@@ -103,4 +103,4 @@ Before submitting your changes, run the test suite to make sure that nothing is
mvn clean test -Dtests.slow=true
```

-Source: [Contributing to elasticsearch](http://www.elasticsearch.org/contributing-to-elasticsearch/)
+Source: [Contributing to elasticsearch](https://www.elastic.co/contributing-to-elasticsearch/)
TESTING.asciidoc (206 changed lines)
@@ -81,7 +81,7 @@ You can also filter tests by certain annotations ie:
Those annotation names can be combined into a filter expression like:

------------------------------------------------
mvn test -Dtests.filter="@nightly and not @backwards"
------------------------------------------------

to run all nightly tests but not the ones that are backwards tests. `tests.filter` supports
@@ -89,7 +89,7 @@ the boolean operators `and, or, not` and grouping ie:

---------------------------------------------------------------
mvn test -Dtests.filter="@nightly and not(@badapple or @backwards)"
---------------------------------------------------------------

=== Seed and repetitions.
@@ -102,7 +102,7 @@ mvn test -Dtests.seed=DEADBEEF

=== Repeats _all_ tests of ClassName N times.

Every test repetition will have a different method seed
(derived from a single random master seed).

--------------------------------------------------
@@ -149,7 +149,7 @@ mvn test -Dtests.awaitsfix=[false] - known issue (@AwaitsFix)

=== Load balancing and caches.

By default, the tests run sequentially on a single forked JVM.

To run with more forked JVMs than the default use:
@@ -158,7 +158,7 @@ mvn test -Dtests.jvms=8 test
----------------------------

Don't count hypercores for CPU-intense tests and leave some slack
for JVM-internal threads (like the garbage collector). Make sure there is
enough RAM to handle child JVMs.

=== Test compatibility.
@@ -208,7 +208,7 @@ mvn test -Dtests.output=always
Configure the heap size.

------------------------------
mvn test -Dtests.heap.size=512m
------------------------------

Pass arbitrary jvm arguments.
@@ -231,7 +231,7 @@ mvn test -Dtests.filter="@backwards" -Dtests.bwc.version=x.y.z -Dtests.bwc.path=
Note that backwards tests must be run with security manager disabled.
If the elasticsearch release is placed under `./backwards/elasticsearch-x.y.z` the path
can be omitted:

---------------------------------------------------------------------------
mvn test -Dtests.filter="@backwards" -Dtests.bwc.version=x.y.z -Dtests.security.manager=false
---------------------------------------------------------------------------
@@ -242,7 +242,7 @@ already in your elasticsearch clone):
---------------------------------------------------------------------------
$ mkdir backwards && cd backwards
$ curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.2.1.tar.gz
$ tar -xzf elasticsearch-1.2.1.tar.gz
---------------------------------------------------------------------------

== Running integration tests
@@ -314,25 +314,180 @@ mvn test -Pdev

== Testing scripts

-Shell scripts can be tested with the Bash Automate Testing System tool available
-at https://github.com/sstephenson/bats. Once the tool is installed, you can
-execute a .bats test file with the following command:
-
----------------------------------------------------------------------------
-bats test_file.bats
----------------------------------------------------------------------------
-
-When executing the test files located in the `/packaging/scripts` folder,
-it's possible to add the flag `ES_CLEAN_BEFORE_TEST=true` to clean the test
-environment before the tests are executed:
-
----------------------------------------------------------------------------
-ES_CLEAN_BEFORE_TEST=true bats 30_deb_package.bats
---------------------------------------------------------------------------
-
-The current mode of execution is to copy all the packages that should be tested
-into one directory, then copy the bats files into the same directory and run
-those.
The simplest way to test scripts and the packaged distributions is to use
Vagrant. You can get started by following these five easy steps:

. Install Virtual Box and Vagrant.

. (Optional) Install vagrant-cachier to squeeze a bit more performance out of
the process:

--------------------------------------
vagrant plugin install vagrant-cachier
--------------------------------------

. Validate your installed dependencies:

-------------------------------------
mvn -Pvagrant -pl qa/vagrant validate
-------------------------------------
. Download the VMs. Since Maven or ant or something eats the progress reports
from Vagrant when you run it inside mvn it's probably best if you run this one
time to setup all the VMs one at a time. Run this to download and setup the VMs
we use for testing by default:

--------------------------------------------------------
vagrant up --provision trusty && vagrant halt trusty
vagrant up --provision centos-7 && vagrant halt centos-7
--------------------------------------------------------

or run this to download and setup all the VMs:

-------------------------------------------------------------------------------
vagrant halt
for box in $(vagrant status | grep 'poweroff\|not created' | cut -f1 -d' '); do
  vagrant up --provision $box
  vagrant halt $box
done
-------------------------------------------------------------------------------

. Smoke test the maven/ant dance that we use to get vagrant involved in
integration testing is working:

---------------------------------------------
mvn -Pvagrant,smoke-vms -pl qa/vagrant verify
---------------------------------------------

or this to validate all the VMs:

-------------------------------------------------
mvn -Pvagrant,smoke-vms,all -pl qa/vagrant verify
-------------------------------------------------

That will start up the VMs and then immediately quit.
. Finally run the tests. The fastest way to get this started is to run:

-----------------------------------
mvn clean install -DskipTests
mvn -Pvagrant -pl qa/vagrant verify
-----------------------------------

You could just run:

--------------------
mvn -Pvagrant verify
--------------------

but that will run all the tests. Which is probably a good thing, but not always
what you want.

Whichever snippet you run, mvn will build the tar, zip and deb packages. If you
have rpmbuild installed it'll build the rpm package as well. Then mvn will
spin up trusty and verify the tar, zip, and deb package. If you have rpmbuild
installed it'll spin up centos-7 and verify the tar, zip and rpm packages. We
chose those two distributions as the default because they cover deb and rpm
packaging and SysVinit and systemd.
You can control the boxes that are used for testing like so. Run just
fedora-22 with:

--------------------------------------------
mvn -Pvagrant -pl qa/vagrant verify -DboxesToTest=fedora-22
--------------------------------------------

or run wheezy and trusty:

------------------------------------------------------------------
mvn -Pvagrant -pl qa/vagrant verify -DboxesToTest='wheezy, trusty'
------------------------------------------------------------------

or run all the boxes:

---------------------------------------
mvn -Pvagrant,all -pl qa/vagrant verify
---------------------------------------

It's important to know that if you ctrl-c any of these `mvn` runs you'll
probably leave a VM up. You can terminate it by running:

------------
vagrant halt
------------

This is just regular vagrant so you can run normal multi box vagrant commands
to test things manually. Just run:

---------------------------------------
vagrant up trusty && vagrant ssh trusty
---------------------------------------

to get an Ubuntu or

-------------------------------------------
vagrant up centos-7 && vagrant ssh centos-7
-------------------------------------------

to get a CentOS. Once you are done with them you should halt them:

-------------------
vagrant halt trusty
-------------------
These are the linux flavors the Vagrantfile currently supports:

* precise aka Ubuntu 12.04
* trusty aka Ubuntu 14.04
* vivid aka Ubuntu 15.04
* wheezy aka Debian 7, the current Debian oldstable distribution
* jessie aka Debian 8, the current Debian stable distribution
* centos-6
* centos-7
* fedora-22
* oel-7 aka Oracle Enterprise Linux 7

We're missing the following from the support matrix because there aren't high
quality boxes available in vagrant atlas:

* sles-11
* sles-12
* opensuse-13
* oel-6

We're missing the following because our tests are very linux/bash centric:

* Windows Server 2012
It's important to think of VMs like cattle: if they become lame you just shoot
them and let vagrant reprovision them. Say you've hosed your precise VM:

----------------------------------------------------
vagrant ssh precise -c 'sudo rm -rf /bin'; echo oops
----------------------------------------------------

All you've got to do to get another one is:

------------------------------------------------
vagrant destroy -f precise && vagrant up precise
------------------------------------------------

The whole process takes a minute and a half on a modern laptop, two and a half
without vagrant-cachier.

It's possible that some downloads will fail and it'll be impossible to restart
them. This is a bug in vagrant. See the instructions here for how to work
around it:
https://github.com/mitchellh/vagrant/issues/4479

Some vagrant commands will work on all VMs at once:

------------------
vagrant halt
vagrant destroy -f
------------------

----------
vagrant up
----------

would normally start all the VMs but we've prevented that because that'd
consume a ton of RAM.
== Testing scripts more directly

In general it's best to stick to testing in vagrant because the bats scripts are
destructive. When working with a single package it's generally faster to run its
tests in a tighter loop than maven provides. In one window:

--------------------------------
mvn -pl distribution/rpm package
--------------------------------

and in another window:

----------------------------------------------------
vagrant up centos-7 && vagrant ssh centos-7
cd $RPM
sudo ES_CLEAN_BEFORE_TEST=true bats $BATS/*rpm*.bats
----------------------------------------------------

At this point `ES_CLEAN_BEFORE_TEST=true` is required or tests fail spuriously.

If you wanted to retest all the release artifacts on a single VM you could:

-------------------------------------------------
# Build all the distributions fresh but skip recompiling elasticsearch:
mvn -amd -pl distribution install -DskipTests
# Copy them all to the testroot
mvn -Pvagrant -pl qa/vagrant pre-integration-test
vagrant up trusty && vagrant ssh trusty
cd $TESTROOT
sudo ES_CLEAN_BEFORE_TEST=true bats $BATS/*.bats
-------------------------------------------------

== Coverage analysis
@@ -342,3 +497,8 @@ To run tests instrumented with jacoco and produce a coverage report in
---------------------------------------------------------------------------
mvn -Dtests.coverage test jacoco:report
---------------------------------------------------------------------------

== Debugging from an IDE

If you want to run elasticsearch from your IDE, you should execute ./run.sh.
It opens a remote debugging port that you can connect with your IDE.
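A hedged sketch of attaching to that port: run.sh is the repository's script, but the exact port is an assumption here (8000 is a common JDWP default), and jdb stands in for any IDE's remote-debug configuration:

---------------------------------------------------------------------------
./run.sh &
# attach once the JVM is listening; replace 8000 with the port run.sh prints
jdb -attach localhost:8000
---------------------------------------------------------------------------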
Vagrantfile (156 new lines)

@@ -0,0 +1,156 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# This Vagrantfile exists to test packaging. Read more about its use in the
# vagrant section in TESTING.asciidoc.

# Licensed to Elasticsearch under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

Vagrant.configure(2) do |config|
  config.vm.define "precise", autostart: false do |config|
    config.vm.box = "ubuntu/precise64"
    ubuntu_common config
  end
  config.vm.define "trusty", autostart: false do |config|
    config.vm.box = "ubuntu/trusty64"
    ubuntu_common config
  end
  config.vm.define "vivid", autostart: false do |config|
    config.vm.box = "ubuntu/vivid64"
    ubuntu_common config
  end
  config.vm.define "wheezy", autostart: false do |config|
    config.vm.box = "debian/wheezy64"
    deb_common(config)
  end
  config.vm.define "jessie", autostart: false do |config|
    config.vm.box = "debian/jessie64"
    deb_common(config)
  end
  config.vm.define "centos-6", autostart: false do |config|
    # TODO switch from chef to boxcutter to provide?
    config.vm.box = "chef/centos-6.6"
    rpm_common(config)
  end
  config.vm.define "centos-7", autostart: false do |config|
    # There is a centos/7 box but it doesn't have rsync or virtualbox guest
    # stuff on there so it's slow to use. So chef it is....
    # TODO switch from chef to boxcutter to provide?
    config.vm.box = "chef/centos-7.0"
    rpm_common(config)
  end
  # This box hangs _forever_ on ```yum check-update```. I have no idea why.
  # config.vm.define "oel-6", autostart: false do |config|
  #   config.vm.box = "boxcutter/oel66"
  #   rpm_common(config)
  # end
  config.vm.define "oel-7", autostart: false do |config|
    config.vm.box = "boxcutter/oel70"
    rpm_common(config)
  end
  config.vm.define "fedora-22", autostart: false do |config|
    # Fedora hosts their own 'cloud' images that aren't in Vagrant's Atlas and
    # are missing required stuff like rsync. It'd be nice if we could use
    # them but they're much slower to get up and running than the boxcutter image.
    config.vm.box = "boxcutter/fedora22"
    dnf_common(config)
  end
  # Switch the default share for the project root from /vagrant to
  # /elasticsearch because /vagrant is confusing when there is a project inside
  # the elasticsearch project called vagrant....
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.synced_folder ".", "/elasticsearch"
end
def ubuntu_common(config)
  # Ubuntu is noisy
  # http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html
  config.vm.provision "fix-no-tty", type: "shell" do |s|
    s.privileged = false
    s.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile"
  end
  deb_common(config)
end

def deb_common(config)
  provision(config, "apt-get update", "/var/cache/apt/archives/last_update",
    "apt-get install -y", "openjdk-7-jdk")
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
end

def rpm_common(config)
  provision(config, "yum check-update", "/var/cache/yum/last_update",
    "yum install -y", "java-1.7.0-openjdk-devel")
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end
end

def dnf_common(config)
  provision(config, "dnf check-update", "/var/cache/dnf/last_update",
    "dnf install -y", "java-1.8.0-openjdk-devel")
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
    # Autodetect doesn't work....
    config.cache.auto_detect = false
    config.cache.enable :generic, { :cache_dir => "/var/cache/dnf" }
  end
end
def provision(config, update_command, update_tracking_file, install_command, java_package)
  config.vm.provision "elasticsearch bats dependencies", type: "shell", inline: <<-SHELL
    set -e
    installed() {
      command -v $1 >/dev/null 2>&1
    }
    install() {
      # Only run the package-list update if we haven't in the last day, using
      # the tracking file to remember when we last did
      if [ ! -f #{update_tracking_file} ] || [ "x$(find #{update_tracking_file} -mtime +0)" == "x#{update_tracking_file}" ]; then
        sudo #{update_command} || true
        touch #{update_tracking_file}
      fi
      sudo #{install_command} $1
    }
    ensure() {
      installed $1 || install $1
    }
    installed java || install #{java_package}
    ensure curl
    ensure unzip

    installed bats || {
      # Bats lives in a git repository....
      ensure git
      git clone https://github.com/sstephenson/bats /tmp/bats
      # Centos doesn't add /usr/local/bin to the path....
      sudo /tmp/bats/install.sh /usr
      sudo rm -rf /tmp/bats
    }
    cat \<\<VARS > /etc/profile.d/elasticsearch_vars.sh
export ZIP=/elasticsearch/distribution/zip/target/releases
export TAR=/elasticsearch/distribution/tar/target/releases
export RPM=/elasticsearch/distribution/rpm/target/releases
export DEB=/elasticsearch/distribution/deb/target/releases
export TESTROOT=/elasticsearch/qa/vagrant/target/testroot
export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts/
VARS
  SHELL
end
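A quick way to check that this provisioning actually ran inside a box, as a hedged sketch; `trusty` is one of the boxes defined above, and the command relies only on the bats install the provisioner performs:

```sh
vagrant up trusty && vagrant ssh trusty -c 'bats --version'
```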
core/pom.xml (11 changed lines)
@@ -320,6 +320,17 @@
        </execution>
      </executions>
    </plugin>
+   <plugin>
+     <groupId>org.apache.maven.plugins</groupId>
+     <artifactId>maven-antrun-plugin</artifactId>
+     <executions>
+       <execution>
+         <!-- Don't run the license checker in core -->
+         <id>check-license</id>
+         <phase>none</phase>
+       </execution>
+     </executions>
+   </plugin>
    </plugins>
    <pluginManagement>
      <plugins>
NodeStats.java

@@ -33,6 +33,7 @@ import org.elasticsearch.monitor.fs.FsInfo;
import org.elasticsearch.monitor.jvm.JvmStats;
import org.elasticsearch.monitor.os.OsStats;
import org.elasticsearch.monitor.process.ProcessStats;
+import org.elasticsearch.script.ScriptStats;
import org.elasticsearch.threadpool.ThreadPoolStats;
import org.elasticsearch.transport.TransportStats;
@@ -73,13 +74,17 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
    @Nullable
    private AllCircuitBreakerStats breaker;

+   @Nullable
+   private ScriptStats scriptStats;
+
    NodeStats() {
    }

    public NodeStats(DiscoveryNode node, long timestamp, @Nullable NodeIndicesStats indices,
            @Nullable OsStats os, @Nullable ProcessStats process, @Nullable JvmStats jvm, @Nullable ThreadPoolStats threadPool,
            @Nullable FsInfo fs, @Nullable TransportStats transport, @Nullable HttpStats http,
-           @Nullable AllCircuitBreakerStats breaker) {
+           @Nullable AllCircuitBreakerStats breaker,
+           @Nullable ScriptStats scriptStats) {
        super(node);
        this.timestamp = timestamp;
        this.indices = indices;
@@ -91,6 +96,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
        this.transport = transport;
        this.http = http;
        this.breaker = breaker;
+       this.scriptStats = scriptStats;
    }

    public long getTimestamp() {
@@ -165,6 +171,11 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
        return this.breaker;
    }

+   @Nullable
+   public ScriptStats getScriptStats() {
+       return this.scriptStats;
+   }
+
    public static NodeStats readNodeStats(StreamInput in) throws IOException {
        NodeStats nodeInfo = new NodeStats();
        nodeInfo.readFrom(in);
@@ -200,6 +211,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
            http = HttpStats.readHttpStats(in);
        }
        breaker = AllCircuitBreakerStats.readOptionalAllCircuitBreakerStats(in);
+       scriptStats = in.readOptionalStreamable(new ScriptStats());

    }
@@ -256,6 +268,7 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
            http.writeTo(out);
        }
        out.writeOptionalStreamable(breaker);
+       out.writeOptionalStreamable(scriptStats);
    }

    @Override
@@ -303,6 +316,9 @@ public class NodeStats extends BaseNodeResponse implements ToXContent {
        if (getBreaker() != null) {
            getBreaker().toXContent(builder, params);
        }
+       if (getScriptStats() != null) {
+           getScriptStats().toXContent(builder, params);
+       }

        return builder;
    }
NodesStatsRequest.java

@@ -41,6 +41,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
    private boolean transport;
    private boolean http;
    private boolean breaker;
+   private boolean script;

    protected NodesStatsRequest() {
    }
@@ -67,6 +68,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
        this.transport = true;
        this.http = true;
        this.breaker = true;
+       this.script = true;
        return this;
    }
@@ -84,6 +86,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
        this.transport = false;
        this.http = false;
        this.breaker = false;
+       this.script = false;
        return this;
    }
@@ -240,6 +243,15 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
        return this;
    }

+   public boolean script() {
+       return script;
+   }
+
+   public NodesStatsRequest script(boolean script) {
+       this.script = script;
+       return this;
+   }
+
    @Override
    public void readFrom(StreamInput in) throws IOException {
        super.readFrom(in);
@@ -253,6 +265,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
        transport = in.readBoolean();
        http = in.readBoolean();
        breaker = in.readBoolean();
+       script = in.readBoolean();
    }

    @Override
@@ -268,6 +281,7 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
        out.writeBoolean(transport);
        out.writeBoolean(http);
        out.writeBoolean(breaker);
+       out.writeBoolean(script);
    }

}
NodesStatsRequestBuilder.java

@@ -62,6 +62,11 @@ public class NodesStatsRequestBuilder extends NodesOperationRequestBuilder<Nodes
        return this;
    }

+   public NodesStatsRequestBuilder setScript(boolean script) {
+       request.script(script);
+       return this;
+   }
+
    /**
     * Should the node indices stats be returned.
     */
TransportNodesStatsAction.java

@@ -80,7 +80,7 @@ public class TransportNodesStatsAction extends TransportNodesAction<NodesStatsRe
    protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) {
        NodesStatsRequest request = nodeStatsRequest.request;
        return nodeService.stats(request.indices(), request.os(), request.process(), request.jvm(), request.threadPool(), request.network(),
-               request.fs(), request.transport(), request.http(), request.breaker());
+               request.fs(), request.transport(), request.http(), request.breaker(), request.script());
    }

    @Override
TransportClusterStatsAction.java

@@ -100,7 +100,7 @@ public class TransportClusterStatsAction extends TransportNodesAction<ClusterSta
    @Override
    protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeRequest) {
        NodeInfo nodeInfo = nodeService.info(false, true, false, true, false, false, true, false, true);
-       NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, false, true, true, false, false, true, false, false, false);
+       NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, false, true, true, false, false, true, false, false, false, false);
        List<ShardStats> shardsStats = new ArrayList<>();
        for (IndexService indexService : indicesService) {
            for (IndexShard indexShard : indexService) {
NodeService.java

@@ -36,6 +36,7 @@ import org.elasticsearch.indices.IndicesService;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.monitor.MonitorService;
import org.elasticsearch.plugins.PluginsService;
+import org.elasticsearch.script.ScriptService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
@@ -51,6 +52,8 @@ public class NodeService extends AbstractComponent {
    private final IndicesService indicesService;
    private final PluginsService pluginService;
    private final CircuitBreakerService circuitBreakerService;
+   private ScriptService scriptService;

    @Nullable
    private HttpServer httpServer;
@@ -63,7 +66,8 @@ public class NodeService extends AbstractComponent {
    @Inject
    public NodeService(Settings settings, ThreadPool threadPool, MonitorService monitorService, Discovery discovery,
                TransportService transportService, IndicesService indicesService,
-               PluginsService pluginService, CircuitBreakerService circuitBreakerService, Version version) {
+               PluginsService pluginService, CircuitBreakerService circuitBreakerService,
+               Version version) {
        super(settings);
        this.threadPool = threadPool;
        this.monitorService = monitorService;
@@ -76,6 +80,12 @@ public class NodeService extends AbstractComponent {
        this.circuitBreakerService = circuitBreakerService;
    }

+   // can not use constructor injection or there will be a circular dependency
+   @Inject(optional = true)
+   public void setScriptService(ScriptService scriptService) {
+       this.scriptService = scriptService;
+   }
+
    public void setHttpServer(@Nullable HttpServer httpServer) {
        this.httpServer = httpServer;
    }
@@ -134,12 +144,14 @@ public class NodeService extends AbstractComponent {
                monitorService.fsService().stats(),
                transportService.stats(),
                httpServer == null ? null : httpServer.stats(),
-               circuitBreakerService.stats()
+               circuitBreakerService.stats(),
+               scriptService.stats()
        );
    }

    public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool, boolean network,
-               boolean fs, boolean transport, boolean http, boolean circuitBreaker) {
+               boolean fs, boolean transport, boolean http, boolean circuitBreaker,
+               boolean script) {
        // for indices stats we want to include previous allocated shards stats as well (it will
        // only be applied to the sensible ones to use, like refresh/merge/flush/indexing stats)
        return new NodeStats(discovery.localNode(), System.currentTimeMillis(),
@@ -151,7 +163,8 @@ public class NodeService extends AbstractComponent {
                fs ? monitorService.fsService().stats() : null,
                transport ? transportService.stats() : null,
                http ? (httpServer == null ? null : httpServer.stats()) : null,
-               circuitBreaker ? circuitBreakerService.stats() : null
+               circuitBreaker ? circuitBreakerService.stats() : null,
+               script ? scriptService.stats() : null
        );
    }
}
RestNodesStatsAction.java

@@ -76,6 +76,7 @@ public class RestNodesStatsAction extends BaseRestHandler {
        nodesStatsRequest.indices(metrics.contains("indices"));
        nodesStatsRequest.process(metrics.contains("process"));
        nodesStatsRequest.breaker(metrics.contains("breaker"));
+       nodesStatsRequest.script(metrics.contains("script"));

        // check for index specific metrics
        if (metrics.contains("indices")) {
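With this wired up, the new metric can be requested over HTTP like any other node stat. A hedged usage sketch; the `script` metric name comes straight from the line above, and the rest is the standard nodes-stats endpoint:

```sh
# fetch only the script stats (compilations, cache evictions) from all nodes
curl -XGET 'localhost:9200/_nodes/stats/script?pretty'
```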
RestNodesAction.java

@@ -57,6 +57,7 @@ import org.elasticsearch.rest.*;
import org.elasticsearch.rest.action.support.RestActionListener;
import org.elasticsearch.rest.action.support.RestResponseListener;
import org.elasticsearch.rest.action.support.RestTable;
+import org.elasticsearch.script.ScriptStats;
import org.elasticsearch.search.suggest.completion.CompletionStats;

import java.util.Locale;
@@ -92,7 +93,7 @@ public class RestNodesAction extends AbstractCatAction {
    @Override
    public void processResponse(final NodesInfoResponse nodesInfoResponse) {
        NodesStatsRequest nodesStatsRequest = new NodesStatsRequest();
-       nodesStatsRequest.clear().jvm(true).os(true).fs(true).indices(true).process(true);
+       nodesStatsRequest.clear().jvm(true).os(true).fs(true).indices(true).process(true).script(true);
        client.admin().cluster().nodesStats(nodesStatsRequest, new RestResponseListener<NodesStatsResponse>(channel) {
            @Override
            public RestResponse buildResponse(NodesStatsResponse nodesStatsResponse) throws Exception {
@@ -183,6 +184,9 @@ public class RestNodesAction extends AbstractCatAction {
        table.addCell("refresh.total", "alias:rto,refreshTotal;default:false;text-align:right;desc:total refreshes");
        table.addCell("refresh.time", "alias:rti,refreshTime;default:false;text-align:right;desc:time spent in refreshes");

+       table.addCell("script.compilations", "alias:scrcc,scriptCompilations;default:false;text-align:right;desc:script compilations");
+       table.addCell("script.cache_evictions", "alias:scrce,scriptCacheEvictions;default:false;text-align:right;desc:script cache evictions");
+
        table.addCell("search.fetch_current", "alias:sfc,searchFetchCurrent;default:false;text-align:right;desc:current fetch phase ops");
        table.addCell("search.fetch_time", "alias:sfti,searchFetchTime;default:false;text-align:right;desc:time spent in fetch phase");
        table.addCell("search.fetch_total", "alias:sfto,searchFetchTotal;default:false;text-align:right;desc:total fetch ops");
@@ -317,6 +321,10 @@ public class RestNodesAction extends AbstractCatAction {
        table.addCell(refreshStats == null ? null : refreshStats.getTotal());
        table.addCell(refreshStats == null ? null : refreshStats.getTotalTime());

+       ScriptStats scriptStats = stats == null ? null : stats.getScriptStats();
+       table.addCell(scriptStats == null ? null : scriptStats.getCompilations());
+       table.addCell(scriptStats == null ? null : scriptStats.getCacheEvictions());
+
        SearchStats searchStats = indicesStats == null ? null : indicesStats.getSearch();
        table.addCell(searchStats == null ? null : searchStats.getTotal().getFetchCurrent());
        table.addCell(searchStats == null ? null : searchStats.getTotal().getFetchTime());
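The two new columns surface the same counters through the cat API. A hedged usage sketch; the column names are taken from the addCell calls above:

```sh
# show per-node script compilation and cache-eviction counts
curl -XGET 'localhost:9200/_cat/nodes?v&h=name,script.compilations,script.cache_evictions'
```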
ElasticsearchF.java → ScriptMetrics.java

@@ -17,16 +17,23 @@
 * under the License.
 */

-package org.elasticsearch.bootstrap;
+package org.elasticsearch.script;

-/**
- * Same as {@link Elasticsearch} just runs it in the foreground by default (does not close
- * sout and serr).
- */
-public class ElasticsearchF {
-
-    public static void main(String[] args) throws Throwable {
-        System.setProperty("es.foreground", "yes");
-        Bootstrap.main(args);
-    }
-}
+import org.elasticsearch.common.metrics.CounterMetric;
+
+public class ScriptMetrics {
+    final CounterMetric compilationsMetric = new CounterMetric();
+    final CounterMetric cacheEvictionsMetric = new CounterMetric();
+
+    public ScriptStats stats() {
+        return new ScriptStats(compilationsMetric.count(), cacheEvictionsMetric.count());
+    }
+
+    public void onCompilation() {
+        compilationsMetric.inc();
+    }
+
+    public void onCacheEviction() {
+        cacheEvictionsMetric.inc();
+    }
+}
ScriptService.java

@@ -84,6 +84,7 @@ public class ScriptService extends AbstractComponent implements Closeable {

    public static final String DEFAULT_SCRIPTING_LANGUAGE_SETTING = "script.default_lang";
    public static final String SCRIPT_CACHE_SIZE_SETTING = "script.cache.max_size";
+   public static final int SCRIPT_CACHE_SIZE_DEFAULT = 100;
    public static final String SCRIPT_CACHE_EXPIRE_SETTING = "script.cache.expire";
    public static final String SCRIPT_INDEX = ".scripts";
    public static final String DEFAULT_LANG = GroovyScriptEngineService.NAME;
@@ -107,6 +108,8 @@ public class ScriptService extends AbstractComponent implements Closeable {

    private Client client = null;

+   private final ScriptMetrics scriptMetrics = new ScriptMetrics();
+
    /**
     * @deprecated Use {@link org.elasticsearch.script.Script.ScriptField} instead. This should be removed in
     * 2.0
@@ -140,7 +143,7 @@ public class ScriptService extends AbstractComponent implements Closeable {

        this.scriptEngines = scriptEngines;
        this.scriptContextRegistry = scriptContextRegistry;
-       int cacheMaxSize = settings.getAsInt(SCRIPT_CACHE_SIZE_SETTING, 100);
+       int cacheMaxSize = settings.getAsInt(SCRIPT_CACHE_SIZE_SETTING, SCRIPT_CACHE_SIZE_DEFAULT);
        TimeValue cacheExpire = settings.getAsTime(SCRIPT_CACHE_EXPIRE_SETTING, null);
        logger.debug("using script cache with max_size [{}], expire [{}]", cacheMaxSize, cacheExpire);
@@ -306,6 +309,7 @@ public class ScriptService extends AbstractComponent implements Closeable {

        //Since the cache key is the script content itself we don't need to
        //invalidate/check the cache if an indexed script changes.
+       scriptMetrics.onCompilation();
        cache.put(cacheKey, compiledScript);
    }
@@ -474,6 +478,10 @@ public class ScriptService extends AbstractComponent implements Closeable {
        }
    }

+   public ScriptStats stats() {
+       return scriptMetrics.stats();
+   }
+
    /**
     * A small listener for the script cache that calls each
     * {@code ScriptEngineService}'s {@code scriptRemoved} method when the
@@ -486,6 +494,7 @@ public class ScriptService extends AbstractComponent implements Closeable {
        if (logger.isDebugEnabled()) {
            logger.debug("notifying script services of script removal due to: [{}]", notification.getCause());
        }
+       scriptMetrics.onCacheEviction();
        for (ScriptEngineService service : scriptEngines) {
            try {
                service.scriptRemoved(notification.getValue());
@@ -532,6 +541,7 @@ public class ScriptService extends AbstractComponent implements Closeable {
                String script = Streams.copyToString(reader);
                String cacheKey = getCacheKey(engineService, scriptNameExt.v1(), null);
                staticCache.put(cacheKey, new CompiledScript(ScriptType.FILE, scriptNameExt.v1(), engineService.types()[0], engineService.compile(script)));
+               scriptMetrics.onCompilation();
            }
        } else {
            logger.warn("skipping compile of script file [{}] as all scripted operations are disabled for file scripts", file.toAbsolutePath());
ScriptStats.java (82 new lines)

@@ -0,0 +1,82 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package org.elasticsearch.script;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentBuilderString;

import java.io.IOException;

public class ScriptStats implements Streamable, ToXContent {
    private long compilations;
    private long cacheEvictions;

    public ScriptStats() {
    }

    public ScriptStats(long compilations, long cacheEvictions) {
        this.compilations = compilations;
        this.cacheEvictions = cacheEvictions;
    }

    public void add(ScriptStats stats) {
        this.compilations += stats.compilations;
        this.cacheEvictions += stats.cacheEvictions;
    }

    public long getCompilations() {
        return compilations;
    }

    public long getCacheEvictions() {
        return cacheEvictions;
    }

    @Override
    public void readFrom(StreamInput in) throws IOException {
        compilations = in.readVLong();
        cacheEvictions = in.readVLong();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeVLong(compilations);
        out.writeVLong(cacheEvictions);
    }

    @Override
    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
        builder.startObject(Fields.SCRIPT_STATS);
        builder.field(Fields.COMPILATIONS, getCompilations());
        builder.field(Fields.CACHE_EVICTIONS, getCacheEvictions());
        builder.endObject();
        return builder;
    }

    static final class Fields {
        static final XContentBuilderString SCRIPT_STATS = new XContentBuilderString("script");
        static final XContentBuilderString COMPILATIONS = new XContentBuilderString("compilations");
        static final XContentBuilderString CACHE_EVICTIONS = new XContentBuilderString("cache_evictions");
    }
}
MockDiskUsagesIT.java

@@ -179,7 +179,8 @@ public class MockDiskUsagesIT extends ESIntegTestCase {
                System.currentTimeMillis(),
                null, null, null, null, null,
                fsInfo,
-               null, null, null);
+               null, null, null,
+               null);
    }

    /**
ScriptServiceTests.java

@@ -387,6 +387,73 @@ public class ScriptServiceTests extends ESTestCase {
        }
    }

    @Test
    public void testCompileCountedInCompilationStats() throws IOException {
        buildScriptService(Settings.EMPTY);
        scriptService.compile(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        assertEquals(1L, scriptService.stats().getCompilations());
    }

    @Test
    public void testExecutableCountedInCompilationStats() throws IOException {
        buildScriptService(Settings.EMPTY);
        scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        assertEquals(1L, scriptService.stats().getCompilations());
    }

    @Test
    public void testSearchCountedInCompilationStats() throws IOException {
        buildScriptService(Settings.EMPTY);
        scriptService.search(null, new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        assertEquals(1L, scriptService.stats().getCompilations());
    }

    @Test
    public void testMultipleCompilationsCountedInCompilationStats() throws IOException {
        buildScriptService(Settings.EMPTY);
        int numberOfCompilations = randomIntBetween(1, 1024);
        for (int i = 0; i < numberOfCompilations; i++) {
            scriptService.compile(new Script(i + " + " + i, ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        }
        assertEquals(numberOfCompilations, scriptService.stats().getCompilations());
    }

    @Test
    public void testCompilationStatsOnCacheHit() throws IOException {
        Settings.Builder builder = Settings.builder();
        builder.put(ScriptService.SCRIPT_CACHE_SIZE_SETTING, 1);
        buildScriptService(builder.build());
        scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        assertEquals(1L, scriptService.stats().getCompilations());
    }

    @Test
    public void testFileScriptCountedInCompilationStats() throws IOException {
        buildScriptService(Settings.EMPTY);
        createFileScripts("test");
        scriptService.compile(new Script("file_script", ScriptType.FILE, "test", null), randomFrom(scriptContexts));
        assertEquals(1L, scriptService.stats().getCompilations());
    }

    @Test
    public void testIndexedScriptCountedInCompilationStats() throws IOException {
        buildScriptService(Settings.EMPTY);
        scriptService.compile(new Script("script", ScriptType.INDEXED, "test", null), randomFrom(scriptContexts));
        assertEquals(1L, scriptService.stats().getCompilations());
    }

    @Test
    public void testCacheEvictionCountedInCacheEvictionsStats() throws IOException {
        Settings.Builder builder = Settings.builder();
        builder.put(ScriptService.SCRIPT_CACHE_SIZE_SETTING, 1);
        buildScriptService(builder.build());
        scriptService.executable(new Script("1+1", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        scriptService.executable(new Script("2+2", ScriptType.INLINE, "test", null), randomFrom(scriptContexts));
        assertEquals(2L, scriptService.stats().getCompilations());
        assertEquals(1L, scriptService.stats().getCacheEvictions());
    }

    private void createFileScripts(String... langs) throws IOException {
        for (String lang : langs) {
            Path scriptPath = scriptsFilePath.resolve("file_script." + lang);
InternalTestCluster.java

@@ -1855,7 +1855,7 @@ public final class InternalTestCluster extends TestCluster {
        }

        NodeService nodeService = getInstanceFromNode(NodeService.class, nodeAndClient.node);
-       NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false);
+       NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false, false);
        assertThat("Fielddata size must be 0 on node: " + stats.getNode(), stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0l));
        assertThat("Query cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getQueryCache().getMemorySizeInBytes(), equalTo(0l));
        assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0l));
@@ -237,7 +237,7 @@
    <target name="setup-workspace-zip" depends="stop-external-cluster">
      <sequential>
        <delete dir="${integ.scratch}"/>
        <unzip src="${project.build.directory}/releases/${project.artifactId}-${project.version}.zip"
               dest="${integ.scratch}"/>
      </sequential>
    </target>
@@ -252,7 +252,7 @@
    <target name="setup-workspace-tar" depends="stop-external-cluster">
      <sequential>
        <delete dir="${integ.scratch}"/>
        <untar src="${project.build.directory}/releases/${project.artifactId}-${project.version}.tar.gz"
               dest="${integ.scratch}"
               compression="gzip"/>
      </sequential>
@@ -273,13 +273,13 @@
        <!-- print some basic package info -->
        <exec executable="dpkg-deb" failonerror="true" taskname="deb-info">
          <arg value="-I"/>
          <arg value="${debfile}"/>
        </exec>
        <!-- extract contents from .deb package -->
        <exec executable="dpkg-deb" failonerror="true">
          <arg value="-x"/>
          <arg value="${debfile}"/>
          <arg value="${integ.scratch}/deb-extracted"/>
        </exec>
      </sequential>
    </target>
@@ -306,7 +306,7 @@
          <arg value="-q"/>
          <arg value="-i"/>
          <arg value="-p"/>
          <arg value="${rpm.file}"/>
        </exec>
        <!-- extract contents from .rpm package -->
        <exec executable="rpm" failonerror="true" taskname="rpm">
@@ -315,11 +315,11 @@
          <arg value="--badreloc"/>
          <arg value="--relocate"/>
          <arg value="/=${rpm.extracted}"/>
          <arg value="--nodeps"/>
          <arg value="--noscripts"/>
          <arg value="--notriggers"/>
          <arg value="-i"/>
          <arg value="${rpm.file}"/>
        </exec>
      </sequential>
    </target>
@@ -359,5 +359,4 @@
    </condition>
  </fail>
  </target>
-
</project>
@@ -46,17 +46,6 @@ java.nio.file.FileSystems#getDefault() @ use PathUtils.getDefault instead.
java.nio.file.Files#createTempDirectory(java.lang.String,java.nio.file.attribute.FileAttribute[])
java.nio.file.Files#createTempFile(java.lang.String,java.lang.String,java.nio.file.attribute.FileAttribute[])

-@defaultMessage Constructing a DateTime without a time zone is dangerous
-org.joda.time.DateTime#<init>()
-org.joda.time.DateTime#<init>(long)
-org.joda.time.DateTime#<init>(int, int, int, int, int)
-org.joda.time.DateTime#<init>(int, int, int, int, int, int)
-org.joda.time.DateTime#<init>(int, int, int, int, int, int, int)
-org.joda.time.DateTime#now()
-org.joda.time.DateTimeZone#getDefault()
-
-com.google.common.collect.Iterators#emptyIterator() @ Use Collections.emptyIterator instead
-
@defaultMessage Don't use java serialization - this can break BWC without noticing it
java.io.ObjectOutputStream
java.io.ObjectOutput
@@ -20,3 +20,14 @@ org.elasticsearch.common.primitives.Longs#compare(long,long)
@defaultMessage unsafe encoders/decoders have problems in the lzf compress library. Use variants of encode/decode functions which take Encoder/Decoder.
org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder#<init>()
org.elasticsearch.common.compress.lzf.util.ChunkDecoderFactory#optimalInstance()
+
+@defaultMessage Constructing a DateTime without a time zone is dangerous
+org.elasticsearch.joda.time.DateTime#<init>()
+org.elasticsearch.joda.time.DateTime#<init>(long)
+org.elasticsearch.joda.time.DateTime#<init>(int, int, int, int, int)
+org.elasticsearch.joda.time.DateTime#<init>(int, int, int, int, int, int)
+org.elasticsearch.joda.time.DateTime#<init>(int, int, int, int, int, int, int)
+org.elasticsearch.joda.time.DateTime#now()
+org.elasticsearch.joda.time.DateTimeZone#getDefault()
+
+org.elasticsearch.common.collect.Iterators#emptyIterator() @ Use Collections.emptyIterator instead
@@ -58,3 +58,14 @@ com.ning.compress.lzf.LZFOutputStream#<init>(java.io.OutputStream)
com.ning.compress.lzf.LZFOutputStream#<init>(java.io.OutputStream, com.ning.compress.BufferRecycler)
com.ning.compress.lzf.LZFUncompressor#<init>(com.ning.compress.DataHandler)
com.ning.compress.lzf.LZFUncompressor#<init>(com.ning.compress.DataHandler, com.ning.compress.BufferRecycler)
+
+@defaultMessage Constructing a DateTime without a time zone is dangerous
+org.joda.time.DateTime#<init>()
+org.joda.time.DateTime#<init>(long)
+org.joda.time.DateTime#<init>(int, int, int, int, int)
+org.joda.time.DateTime#<init>(int, int, int, int, int, int)
+org.joda.time.DateTime#<init>(int, int, int, int, int, int, int)
+org.joda.time.DateTime#now()
+org.joda.time.DateTimeZone#getDefault()
+
+com.google.common.collect.Iterators#emptyIterator() @ Use Collections.emptyIterator instead
@@ -2,6 +2,7 @@

use strict;
use warnings;
+use v5.10;

use FindBin qw($RealBin);
use lib "$RealBin/lib";
@@ -10,20 +11,9 @@ use File::Temp();
use File::Find();
use File::Basename qw(basename);
use Archive::Extract();
+use Digest::SHA();
$Archive::Extract::PREFER_BIN = 1;

-our $SHA_CLASS = 'Digest::SHA';
-if ( eval { require Digest::SHA } ) {
-    $SHA_CLASS = 'Digest::SHA';
-}
-else {
-    print STDERR "Digest::SHA not available. "
-        . "Falling back to Digest::SHA::PurePerl\n";
-    require Digest::SHA::PurePerl;
-    $SHA_CLASS = 'Digest::SHA::PurePerl';
-}
-
my $mode = shift(@ARGV) || "";
die usage() unless $mode =~ /^--(check|update)$/;
@ -32,6 +22,9 @@ my $Source = shift(@ARGV) || die usage();
|
|||
$License_Dir = File::Spec->rel2abs($License_Dir) . '/';
|
||||
$Source = File::Spec->rel2abs($Source);
|
||||
|
||||
say "LICENSE DIR: $License_Dir";
|
||||
say "SOURCE: $Source";
|
||||
|
||||
die "License dir is not a directory: $License_Dir\n" . usage()
|
||||
unless -d $License_Dir;
|
||||
|
||||
|
@ -59,15 +52,15 @@ sub check_shas_and_licenses {
|
|||
for my $jar ( sort keys %new ) {
|
||||
my $old_sha = delete $old{$jar};
|
||||
unless ($old_sha) {
|
||||
print STDERR "$jar: SHA is missing\n";
|
||||
say STDERR "$jar: SHA is missing";
|
||||
$error++;
|
||||
$sha_error++;
|
||||
next;
|
||||
}
|
||||
|
||||
unless ( $old_sha eq $new{$jar} ) {
|
||||
print STDERR
|
||||
"$jar: SHA has changed, expected $old_sha but found $new{$jar}\n";
|
||||
say STDERR
|
||||
"$jar: SHA has changed, expected $old_sha but found $new{$jar}";
|
||||
$error++;
|
||||
$sha_error++;
|
||||
next;
|
||||
|
@ -95,41 +88,37 @@ sub check_shas_and_licenses {
|
|||
}
|
||||
}
|
||||
unless ($license_found) {
|
||||
print STDERR "$jar: LICENSE is missing\n";
|
||||
say STDERR "$jar: LICENSE is missing";
|
||||
$error++;
|
||||
$sha_error++;
|
||||
}
|
||||
unless ($notice_found) {
|
||||
print STDERR "$jar: NOTICE is missing\n";
|
||||
say STDERR "$jar: NOTICE is missing";
|
||||
$error++;
|
||||
}
|
||||
}
|
||||
|
||||
if ( keys %old ) {
|
||||
print STDERR "Extra SHA files present for: " . join ", ",
|
||||
sort keys %old;
|
||||
print "\n";
|
||||
say STDERR "Extra SHA files present for: " . join ", ", sort keys %old;
|
||||
$error++;
|
||||
}
|
||||
|
||||
my @unused_licenses = grep { !$licenses{$_} } keys %licenses;
|
||||
if (@unused_licenses) {
|
||||
$error++;
|
||||
print STDERR "Extra LICENCE file present: " . join ", ",
|
||||
say STDERR "Extra LICENCE file present: " . join ", ",
|
||||
sort @unused_licenses;
|
||||
print "\n";
|
||||
}
|
||||
|
||||
my @unused_notices = grep { !$notices{$_} } keys %notices;
|
||||
if (@unused_notices) {
|
||||
$error++;
|
||||
print STDERR "Extra NOTICE file present: " . join ", ",
|
||||
say STDERR "Extra NOTICE file present: " . join ", ",
|
||||
sort @unused_notices;
|
||||
print "\n";
|
||||
}
|
||||
|
||||
if ($sha_error) {
|
||||
print STDERR <<"SHAS"
|
||||
say STDERR <<"SHAS"
|
||||
|
||||
You can update the SHA files by running:
|
||||
|
||||
|
@ -137,7 +126,7 @@ $0 --update $License_Dir $Source
|
|||
|
||||
SHAS
|
||||
}
|
||||
print("All SHAs and licenses OK\n") unless $error;
|
||||
say("All SHAs and licenses OK") unless $error;
|
||||
return $error;
|
||||
}
|
||||
|
||||
|
@ -150,13 +139,13 @@ sub write_shas {
|
|||
for my $jar ( sort keys %new ) {
|
||||
if ( $old{$jar} ) {
|
||||
next if $old{$jar} eq $new{$jar};
|
||||
print "Updating $jar\n";
|
||||
say "Updating $jar";
|
||||
}
|
||||
else {
|
||||
print "Adding $jar\n";
|
||||
say "Adding $jar";
|
||||
}
|
||||
open my $fh, '>', $License_Dir . $jar or die $!;
|
||||
print $fh $new{$jar} . "\n" or die $!;
|
||||
say $fh $new{$jar} or die $!;
|
||||
close $fh or die $!;
|
||||
}
|
||||
continue {
|
||||
|
@ -164,10 +153,10 @@ sub write_shas {
|
|||
}
|
||||
|
||||
for my $jar ( sort keys %old ) {
|
||||
print "Deleting $jar\n";
|
||||
say "Deleting $jar";
|
||||
unlink $License_Dir . $jar or die $!;
|
||||
}
|
||||
print "SHAs updated\n";
|
||||
say "SHAs updated";
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -212,8 +201,6 @@ sub jars_from_zip {
|
|||
$archive->extract( to => $dir_name ) || die $archive->error;
|
||||
my @jars = map { File::Spec->rel2abs( $_, $dir_name ) }
|
||||
grep { /\.jar$/ && !/elasticsearch[^\/]*$/ } @{ $archive->files };
|
||||
die "No JARS found in: $source\n"
|
||||
unless @jars;
|
||||
return calculate_shas(@jars);
|
||||
}
|
||||
|
||||
|
@ -231,8 +218,6 @@ sub jars_from_dir {
|
|||
},
|
||||
$source
|
||||
);
|
||||
die "No JARS found in: $source\n"
|
||||
unless @jars;
|
||||
return calculate_shas(@jars);
|
||||
}
|
||||
|
||||
|
@ -241,7 +226,7 @@ sub calculate_shas {
|
|||
#===================================
|
||||
my %shas;
|
||||
while ( my $file = shift() ) {
|
||||
my $digest = eval { $SHA_CLASS->new(1)->addfile($file) }
|
||||
my $digest = eval { Digest::SHA->new(1)->addfile($file) }
|
||||
or die "Error calculating SHA1 for <$file>: $!\n";
|
||||
$shas{ basename($file) . ".sha1" } = $digest->hexdigest;
|
||||
}
|
||||
|
|
|
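For anyone tracing the script: each `<jar>.sha1` file written by `write_shas` holds the hex-encoded SHA-1 that `calculate_shas` computes. A rough Java equivalent using only the JDK (illustrative, not part of the build):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha1OfJar {
    public static void main(String[] args) throws Exception {
        Path jar = Paths.get(args[0]);
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        try (InputStream in = Files.newInputStream(jar)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                sha1.update(buf, 0, n); // stream the jar through the digest
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : sha1.digest()) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex); // same value the script stores in <jar>.sha1
    }
}
```
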
@ -139,6 +139,12 @@
            <group>root</group>
        </mapper>
    </data>
    <data>
        <type>template</type>
        <paths>
            <path>${packaging.elasticsearch.conf.dir}/scripts</path>
        </paths>
    </data>
    <!-- Add environment vars file -->
    <data>
        <src>${project.build.directory}/generated-packaging/deb/env/elasticsearch</src>

@ -175,8 +175,7 @@ case "$1" in
        # Start Daemon
        start-stop-daemon -d $ES_HOME --start -b --user "$ES_USER" -c "$ES_USER" --pidfile "$PID_FILE" --exec $DAEMON -- $DAEMON_OPTS
        return=$?
        if [ $return -eq 0 ]
        then
        if [ $return -eq 0 ]; then
            i=0
            timeout=10
            # Wait for the process to be properly started before exiting

@ -189,9 +188,9 @@ case "$1" in
                    exit 1
                fi
            done
        else
            log_end_msg $return
        fi
        log_end_msg $return
        exit $return
        ;;
    stop)
        log_daemon_msg "Stopping $DESC"

@ -199,7 +198,8 @@ case "$1" in
        if [ -f "$PID_FILE" ]; then
            start-stop-daemon --stop --pidfile "$PID_FILE" \
                --user "$ES_USER" \
                --retry=TERM/20/KILL/5 >/dev/null
                --quiet \
                --retry forever/TERM/20 > /dev/null
            if [ $? -eq 1 ]; then
                log_progress_msg "$DESC is not running but pid file exists, cleaning up"
            elif [ $? -eq 3 ]; then

@ -29,6 +29,10 @@
    <packaging.elasticsearch.systemd.sysctl.dir>/usr/lib/sysctl.d</packaging.elasticsearch.systemd.sysctl.dir>
    <packaging.elasticsearch.tmpfilesd.dir>/usr/lib/tmpfiles.d</packaging.elasticsearch.tmpfilesd.dir>

    <!-- Properties for the license checker -->
    <project.licenses.dir>${project.basedir}/../licenses</project.licenses.dir>
    <project.licenses.check_target>${integ.scratch}</project.licenses.check_target>

    <!-- rpmbuild location : default to /usr/bin/rpmbuild -->
    <packaging.rpm.rpmbuild>/usr/bin/rpmbuild</packaging.rpm.rpmbuild>

@ -97,31 +101,6 @@
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <!-- checks integration test scratch area (where we extract the distribution) -->
    <executions>
        <execution>
            <id>check-license</id>
            <phase>verify</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <skip>${skip.integ.tests}</skip>
                <target>
                    <condition property="licenses.exists">
                        <available file="${basedir}/../licenses" type="dir"/>
                    </condition>
                    <echo taskName="license check">Running license check</echo>
                    <exec failonerror="${licenses.exists}" executable="perl"
                          dir="${elasticsearch.tools.directory}/license-check">
                        <arg value="check_license_and_sha.pl"/>
                        <arg value="--check"/>
                        <arg value="${basedir}/../licenses"/>
                        <arg value="${integ.scratch}"/>
                    </exec>
                </target>
            </configuration>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>

@ -144,6 +144,10 @@
                </source>
            </sources>
        </mapping>
        <mapping>
            <directory>${packaging.elasticsearch.conf.dir}/scripts</directory>
            <configuration>noreplace</configuration>
        </mapping>
        <!-- Add environment vars file -->
        <mapping>
            <directory>/etc/sysconfig/</directory>

@ -120,7 +120,7 @@ start() {
stop() {
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    killproc -p $pidfile -d 20 $prog
    killproc -p $pidfile -d ${packaging.elasticsearch.stopping.timeout} $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile

@ -14,3 +14,6 @@ packaging.type=rpm
# Custom header for package scripts
packaging.scripts.header=
packaging.scripts.footer=# Built for ${project.name}-${project.version} (${packaging.type})

# Maximum time to wait for elasticsearch to stop (defaults to 1 day)
packaging.elasticsearch.stopping.timeout=86400

@ -8,8 +8,8 @@ ${packaging.scripts.header}
# $1=purge : indicates an upgrade
#
# On RedHat,
# $1=1 : indicates a new install
# $1=2 : indicates an upgrade
# $1=0 : indicates a removal
# $1=1 : indicates an upgrade

@ -39,7 +39,7 @@ case "$1" in
        REMOVE_SERVICE=true
        REMOVE_USER_AND_GROUP=true
    ;;
    2)
    1)
        # If $1=1 this is an upgrade
        IS_UPGRADE=true
    ;;

@ -47,13 +47,13 @@ esac
if [ "$STOP_REQUIRED" = "true" ]; then
    echo -n "Stopping elasticsearch service..."
    if command -v systemctl >/dev/null; then
        systemctl --no-reload stop elasticsearch.service > /dev/null 2>&1 || true
        systemctl --no-reload stop elasticsearch.service

    elif [ -x /etc/init.d/elasticsearch ]; then
        if command -v invoke-rc.d >/dev/null; then
            invoke-rc.d elasticsearch stop || true
            invoke-rc.d elasticsearch stop
        else
            /etc/init.d/elasticsearch stop || true
            /etc/init.d/elasticsearch stop
        fi

    # older suse linux distributions do not ship with systemd

@ -61,7 +61,7 @@ if [ "$STOP_REQUIRED" = "true" ]; then
    # this tries to start the elasticsearch service on these
    # as well without failing this script
    elif [ -x /etc/rc.d/init.d/elasticsearch ] ; then
        /etc/rc.d/init.d/elasticsearch stop || true
        /etc/rc.d/init.d/elasticsearch stop
    fi
    echo " OK"
fi

@ -32,9 +32,6 @@ StandardOutput=null
# Connects standard error to journal
StandardError=journal

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=${packaging.os.max.open.files}

@ -43,8 +40,17 @@ LimitNOFILE=${packaging.os.max.open.files}
# in elasticsearch.yml and 'MAX_LOCKED_MEMORY=unlimited' in ${packaging.env.file}
#LimitMEMLOCK=infinity

# Shutdown delay in seconds, before process is tried to be killed with KILL (if configured)
TimeoutStopSec=20
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target

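A note on `SuccessExitStatus=143`: a JVM terminated by SIGTERM exits with status 128 + 15 = 143, so the unit must declare that status a clean stop. A tiny demo class (hypothetical, not shipped) that makes this observable:

```java
public class SigtermExitDemo {
    public static void main(String[] args) throws InterruptedException {
        // Shutdown hooks still run on SIGTERM; the process then exits with 143.
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> System.err.println("clean shutdown")));
        Thread.sleep(Long.MAX_VALUE); // run `kill -TERM <pid>`; `echo $?` prints 143
    }
}
```
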
@ -8,7 +8,7 @@ experimental[]

Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
information to the output tree. There are many different types of pipeline aggregation, each computing different information from
other aggregations, but these types can broken down into two families:
other aggregations, but these types can be broken down into two families:

_Parent_::
A family of pipeline aggregations that is provided with the output of its parent aggregation and is able

@ -168,6 +168,8 @@ percolating |0s
|`percolate.total` |`pto`, `percolateTotal` |No |Total percolations |0
|`refresh.total` |`rto`, `refreshTotal` |No |Number of refreshes |16
|`refresh.time` |`rti`, `refreshTime` |No |Time spent in refreshes |91ms
|`script.compilations` |`scrcc`, `scriptCompilations` |No |Total script compilations |17
|`script.cache_evictions` |`scrce`, `scriptCacheEvictions` |No |Total compiled scripts evicted from cache |6
|`search.fetch_current` |`sfc`, `searchFetchCurrent` |No |Current fetch
phase operations |0
|`search.fetch_time` |`sfti`, `searchFetchTime` |No |Time spent in fetch

@ -60,6 +60,17 @@ NOTE: This setting will not affect the promotion of replicas to primaries, nor
will it affect the assignment of replicas that have not been assigned
previously.

==== Cancellation of shard relocation

If delayed allocation times out, the master assigns the missing shards to
another node which will start recovery. If the missing node rejoins the
cluster, and its shards still have the same sync-id as the primary, shard
relocation will be cancelled and the synced shard will be used for recovery
instead.

For this reason, the default `timeout` is set to just one minute: even if shard
relocation begins, cancelling recovery in favour of the synced shard is cheap.

==== Monitoring delayed unassigned shards

The number of shards whose allocation has been delayed by this timeout setting

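(The sentence above continues in the full page: the count is surfaced through the cluster health API.) A short Java-client sketch, assuming the 2.0-era response method `getDelayedUnassignedShards()` that arrived alongside this feature:

```java
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.Client;

public class DelayedShardsCheck {
    // Sketch: reads how many shard allocations are currently delayed.
    public static int delayedUnassigned(Client client) {
        ClusterHealthResponse health = client.admin().cluster().prepareHealth().get();
        return health.getDelayedUnassignedShards();
    }
}
```
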
@ -827,6 +827,19 @@ infrastructure for the pluginmanager and the elasticsearch start up
script, the `-v` parameter now stands for `--verbose`, whereas `-V` or
`--version` can be used to show the Elasticsearch version and exit.

=== `/bin/elasticsearch` dynamic parameters must come after static ones

If you are setting configuration options like cluster name or node name via
the commandline, you have to ensure that static options like the pid file
path or daemonizing always come first, like this:

```
/bin/elasticsearch -d -p /tmp/foo.pid --http.cors.enabled=true --http.cors.allow-origin='*'
```

For a list of those static parameters, run `/bin/elasticsearch -h`

=== Aliases

Fields used in alias filters no longer have to exist in the mapping upon alias creation time. Alias filters are now

@ -82,6 +82,12 @@ serves as a protection against (partial) network failures where a node may unjustly
think that the master has failed. In this case the node will simply hear from
other nodes about the currently active master.

If `discovery.zen.master_election.filter_client` is `true`, pings from client nodes (nodes where `node.client` is
`true`, or both `node.data` and `node.master` are `false`) are ignored during master election; the default value is
`true`. If `discovery.zen.master_election.filter_data` is `true`, pings from non-master-eligible data nodes (nodes
where `node.data` is `true` and `node.master` is `false`) are ignored during master election; the default value is
`false`. Pings from master-eligible nodes are always observed during master election.

Nodes can be excluded from becoming a master by setting `node.master` to
`false`. Note, once a node is a client node (`node.client` set to
`true`), it will not be allowed to become a master (`node.master` is

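For embedded nodes the two election filters are ordinary settings; a sketch using the same `Settings.builder()` that appears in the shaded smoke test later in this change (the values shown are the documented defaults):

```java
import org.elasticsearch.common.settings.Settings;

public class ElectionFilterSettings {
    public static Settings defaults() {
        return Settings.builder()
                .put("discovery.zen.master_election.filter_client", true) // ignore pings from client nodes
                .put("discovery.zen.master_election.filter_data", false)  // observe pings from data-only nodes
                .build();
    }
}
```
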
@ -59,13 +59,13 @@ There are added features when using the `elasticsearch` shell script.
The first, which was explained earlier, is the ability to easily run the
process either in the foreground or the background.

Another feature is the ability to pass `-X` and `-D` or getopt long style
Another feature is the ability to pass `-D` or getopt long style
configuration parameters directly to the script. When set, all override
anything set using either `JAVA_OPTS` or `ES_JAVA_OPTS`. For example:

[source,sh]
--------------------------------------------------
$ bin/elasticsearch -Xmx2g -Xms2g -Des.index.store.type=memory --node.name=my-node
$ bin/elasticsearch -Des.index.refresh_interval=5s --node.name=my-node
--------------------------------------------------
*************************************************************************

@ -319,9 +319,8 @@ YAML or JSON):
--------------------------------------------------
$ curl -XPUT http://localhost:9200/kimchy/ -d \
'
index :
    store:
        type: memory
index:
    refresh_interval: 5s
'
--------------------------------------------------

@ -331,8 +330,7 @@ within the `elasticsearch.yml` file, the following can be set:
[source,yaml]
--------------------------------------------------
index :
    store:
        type: memory
    refresh_interval: 5s
--------------------------------------------------

This means that every index that gets created on the specific node

@ -343,7 +341,7 @@ above can also be set as a "collapsed" setting, for example:

[source,sh]
--------------------------------------------------
$ elasticsearch -Des.index.store.type=memory
$ elasticsearch -Des.index.refresh_interval=5s
--------------------------------------------------

All of the index level configuration can be found within each

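When a node is embedded rather than launched from the shell, the same collapsed setting can be supplied programmatically; a sketch reusing the `NodeBuilder` pattern from the shaded smoke test below (setting value illustrative):

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class EmbeddedNodeSketch {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
                .put("index.refresh_interval", "5s") // same knob as -Des.index.refresh_interval=5s
                .build();
        try (Node node = NodeBuilder.nodeBuilder().settings(settings).node()) {
            // every index created on this node defaults to the 5s refresh interval
        }
    }
}
```
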
@ -0,0 +1 @@
This plugin has no third party dependencies

@ -0,0 +1 @@
This plugin has no third party dependencies

@ -0,0 +1 @@
This plugin has no third party dependencies

26 pom.xml
@ -57,6 +57,10 @@
    <elasticsearch.integ.antfile.default>${elasticsearch.tools.directory}/ant/integration-tests.xml</elasticsearch.integ.antfile.default>
    <elasticsearch.integ.antfile>${elasticsearch.integ.antfile.default}</elasticsearch.integ.antfile>

    <!-- Properties for the license checker -->
    <project.licenses.dir>${project.basedir}/licenses</project.licenses.dir>
    <project.licenses.check_target>${basedir}/target/releases/${project.build.finalName}.zip</project.licenses.check_target>

    <!-- Test properties -->
    <tests.jvms>auto</tests.jvms>
    <tests.shuffle>true</tests.shuffle>

@ -1191,6 +1195,28 @@ org.eclipse.jdt.ui.text.custom_code_templates=<?xml version\="1.0" encoding\="UT
            <goal>run</goal>
        </goals>
    </execution>
    <execution>
        <id>check-license</id>
        <phase>verify</phase>
        <goals>
            <goal>run</goal>
        </goals>
        <configuration>
            <!-- <skip>${skip.integ.tests}</skip> -->
            <skip>true</skip>
            <target>
                <echo taskName="license check">Running license check</echo>
                <!-- don't run on windows, because everyone hates it -->
                <exec failonerror="true" executable="perl" osfamily="unix"
                      dir="${elasticsearch.tools.directory}/license-check">
                    <arg value="check_license_and_sha.pl"/>
                    <arg value="--check"/>
                    <arg value="${project.licenses.dir}"/>
                    <arg value="${project.licenses.check_target}"/>
                </exec>
            </target>
        </configuration>
    </execution>
</executions>
<dependencies>
    <dependency>

202 qa/pom.xml
@ -33,188 +33,7 @@
        <artifactId>lucene-test-framework</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <type>test-jar</type>
        <scope>test</scope>
    </dependency>

    <!-- Provided dependencies by elasticsearch itself -->
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-core</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-backward-codecs</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-analyzers-common</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-queries</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-memory</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-highlighter</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-queryparser</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-suggest</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-join</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-spatial</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-expressions</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.spatial4j</groupId>
        <artifactId>spatial4j</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.vividsolutions</groupId>
        <artifactId>jts</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.github.spullara.mustache.java</groupId>
        <artifactId>compiler</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.carrotsearch</groupId>
        <artifactId>hppc</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>joda-time</groupId>
        <artifactId>joda-time</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.joda</groupId>
        <artifactId>joda-convert</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.dataformat</groupId>
        <artifactId>jackson-dataformat-smile</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.dataformat</groupId>
        <artifactId>jackson-dataformat-yaml</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.dataformat</groupId>
        <artifactId>jackson-dataformat-cbor</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.ning</groupId>
        <artifactId>compress-lzf</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.tdunning</groupId>
        <artifactId>t-digest</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>commons-cli</groupId>
        <artifactId>commons-cli</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.codehaus.groovy</groupId>
        <artifactId>groovy-all</artifactId>
        <classifier>indy</classifier>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>apache-log4j-extras</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>net.java.dev.jna</groupId>
        <artifactId>jna</artifactId>
        <scope>provided</scope>
    </dependency>

    <!-- Required by the REST test framework -->
    <!-- TODO: remove this dependency when we will have a REST Test module -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<!-- typical layout -->

@ -310,11 +129,32 @@
            </execution>
        </executions>
    </plugin>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
            <execution>
                <!-- Don't run the license checker in qa -->
                <id>check-license</id>
                <phase>none</phase>
            </execution>
        </executions>
    </plugin>
    </plugins>
</pluginManagement>
</build>

<modules>
    <module>smoke-test-plugins</module>
    <module>smoke-test-shaded</module>
</modules>

<profiles>
    <profile>
        <id>vagrant</id>
        <modules>
            <module>vagrant</module>
        </modules>
    </profile>
</profiles>
</project>

@ -31,6 +31,190 @@
    <tests.rest.suite>smoke_test_plugins</tests.rest.suite>
    <tests.rest.load_packaged>false</tests.rest.load_packaged>
</properties>
<dependencies>
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <type>test-jar</type>
        <scope>test</scope>
    </dependency>

    <!-- Provided dependencies by elasticsearch itself -->
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-core</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-backward-codecs</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-analyzers-common</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-queries</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-memory</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-highlighter</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-queryparser</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-suggest</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-join</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-spatial</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-expressions</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.spatial4j</groupId>
        <artifactId>spatial4j</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.vividsolutions</groupId>
        <artifactId>jts</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.github.spullara.mustache.java</groupId>
        <artifactId>compiler</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.carrotsearch</groupId>
        <artifactId>hppc</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>joda-time</groupId>
        <artifactId>joda-time</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.joda</groupId>
        <artifactId>joda-convert</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.dataformat</groupId>
        <artifactId>jackson-dataformat-smile</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.dataformat</groupId>
        <artifactId>jackson-dataformat-yaml</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.dataformat</groupId>
        <artifactId>jackson-dataformat-cbor</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.ning</groupId>
        <artifactId>compress-lzf</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.tdunning</groupId>
        <artifactId>t-digest</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>commons-cli</groupId>
        <artifactId>commons-cli</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.codehaus.groovy</groupId>
        <artifactId>groovy-all</artifactId>
        <classifier>indy</classifier>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>apache-log4j-extras</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>net.java.dev.jna</groupId>
        <artifactId>jna</artifactId>
        <scope>provided</scope>
    </dependency>

    <!-- Required by the REST test framework -->
    <!-- TODO: remove this dependency when we will have a REST Test module -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<build>
    <plugins>

@ -0,0 +1,38 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.elasticsearch.qa</groupId>
        <artifactId>elasticsearch-qa</artifactId>
        <version>2.0.0-beta1-SNAPSHOT</version>
    </parent>

    <artifactId>smoke-test-shaded</artifactId>
    <name>QA: Smoke Test Shaded Jar</name>
    <description>Runs a simple smoke test against the shaded elasticsearch jar</description>

    <properties>
        <elasticsearch.thirdparty.config>shaded</elasticsearch.thirdparty.config>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.elasticsearch.distribution.shaded</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>2.0.0-beta1-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>org.hamcrest</groupId>
            <artifactId>hamcrest-all</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-test-framework</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

@ -0,0 +1,54 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.shaded.test;

import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.logging.ESLoggerFactory;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

import java.nio.file.Path;

/**
 */
public class ShadedIT extends LuceneTestCase {

    public void testStartShadedNode() {
        ESLoggerFactory.getRootLogger().setLevel("ERROR");
        Path data = createTempDir();
        Settings settings = Settings.builder()
                .put("path.home", data.toAbsolutePath().toString())
                .put("node.mode", "local")
                .put("http.enabled", "false")
                .build();
        NodeBuilder builder = NodeBuilder.nodeBuilder().data(true).settings(settings).loadConfigSettings(false).local(true);
        try (Node node = builder.node()) {
            Client client = node.client();
            client.admin().indices().prepareCreate("test").get();
            client.prepareIndex("test", "foo").setSource("{ \"field\" : \"value\" }").get();
            client.admin().indices().prepareRefresh().get();
            SearchResponse response = client.prepareSearch("test").get();
            assertEquals(1L, response.getHits().getTotalHits());
        }
    }
}

@ -0,0 +1,49 @@
/*
 * Licensed to Elasticsearch under one or more contributor
 * license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.elasticsearch.shaded.test;

import org.apache.lucene.util.LuceneTestCase;
import org.junit.Test;

/**
 */
public class ShadedTest extends LuceneTestCase {

    @Test
    public void testLoadShadedClasses() throws ClassNotFoundException {
        Class.forName("org.elasticsearch.common.collect.ImmutableList");
        Class.forName("org.elasticsearch.common.joda.time.DateTime");
        Class.forName("org.elasticsearch.common.util.concurrent.jsr166e.LongAdder");
    }

    @Test(expected = ClassNotFoundException.class)
    public void testGuavaIsNotOnTheCP() throws ClassNotFoundException {
        Class.forName("com.google.common.collect.ImmutableList");
    }

    @Test(expected = ClassNotFoundException.class)
    public void testJodaIsNotOnTheCP() throws ClassNotFoundException {
        Class.forName("org.joda.time.DateTime");
    }

    @Test(expected = ClassNotFoundException.class)
    public void testjsr166eIsNotOnTheCP() throws ClassNotFoundException {
        Class.forName("com.twitter.jsr166e.LongAdder");
    }
}

@ -0,0 +1,279 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.elasticsearch.qa</groupId>
        <artifactId>elasticsearch-qa</artifactId>
        <version>2.0.0-beta1-SNAPSHOT</version>
    </parent>

    <artifactId>elasticsearch-distribution-tests</artifactId>
    <name>QA: Elasticsearch Vagrant Tests</name>
    <description>Tests the Elasticsearch distribution artifacts on virtual
        machines using vagrant and bats.</description>
    <packaging>pom</packaging>

    <!-- The documentation for how to run this is in ../../Vagrantfile -->
    <properties>
        <testScripts>*.bats</testScripts>
        <testCommand>sudo ES_CLEAN_BEFORE_TEST=true bats $BATS/${testScripts}</testCommand>

        <allDebBoxes>precise, trusty, vivid, wheezy, jessie</allDebBoxes>
        <allRpmBoxes>centos-6, centos-7, fedora-22, oel-7</allRpmBoxes>

        <debBoxes>trusty</debBoxes>
        <rpmBoxes>centos-7</rpmBoxes>

        <!-- Unless rpmbuild is available on the host we can't test rpm-based
          boxes, because we can't build the rpm and they fail without it.
          So to get good coverage you'll need to run this on a system with
          rpmbuild installed - either osx via homebrew or fedora/centos/rhel.
        -->
        <proposedBoxesToTest>${debBoxes}</proposedBoxesToTest>

        <!-- rpmbuild location : default to /usr/bin/rpmbuild -->
        <packaging.rpm.rpmbuild>/usr/bin/rpmbuild</packaging.rpm.rpmbuild>
    </properties>

    <build>
        <plugins>
            <!-- Clean the location where we keep the distribution artifacts
              to make sure that there aren't any old versions in there. -->
            <plugin>
                <artifactId>maven-clean-plugin</artifactId>
                <executions>
                    <execution>
                        <id>clean-testroot</id>
                        <phase>pre-integration-test</phase>
                        <goals>
                            <goal>clean</goal>
                        </goals>
                        <configuration>
                            <excludeDefaultDirectories>true</excludeDefaultDirectories>
                            <filesets>
                                <fileset>
                                    <directory>${project.build.directory}/testroot</directory>
                                </fileset>
                            </filesets>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- Put the distribution artifacts some place the test can get at
              them -->
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy-common-to-testroot</id>
                        <phase>pre-integration-test</phase>
                        <goals>
                            <goal>copy</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/testroot</outputDirectory>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>org.elasticsearch.distribution.zip</groupId>
                                    <artifactId>elasticsearch</artifactId>
                                    <version>${elasticsearch.version}</version>
                                    <type>zip</type>
                                </artifactItem>
                                <artifactItem>
                                    <groupId>org.elasticsearch.distribution.tar</groupId>
                                    <artifactId>elasticsearch</artifactId>
                                    <version>${elasticsearch.version}</version>
                                    <type>tar.gz</type>
                                </artifactItem>
                                <artifactItem>
                                    <groupId>org.elasticsearch.distribution.deb</groupId>
                                    <artifactId>elasticsearch</artifactId>
                                    <version>${elasticsearch.version}</version>
                                    <type>deb</type>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <dependencies>
                    <dependency>
                        <groupId>ant-contrib</groupId>
                        <artifactId>ant-contrib</artifactId>
                        <version>1.0b3</version>
                        <exclusions>
                            <exclusion>
                                <groupId>ant</groupId>
                                <artifactId>ant</artifactId>
                            </exclusion>
                        </exclusions>
                    </dependency>
                </dependencies>
                <executions>
                    <execution>
                        <id>check-vagrant-version</id>
                        <phase>validate</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <target>
                                <taskdef resource="net/sf/antcontrib/antlib.xml"
                                         classpathref="maven.dependency.classpath" />
                                <ant antfile="src/dev/ant/vagrant-integration-tests.xml"
                                     target="check-vagrant-version"/>
                            </target>
                        </configuration>
                    </execution>
                    <execution>
                        <id>test-vms</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <target unless="${skipTests}">
                                <taskdef resource="net/sf/antcontrib/antlib.xml"
                                         classpathref="maven.dependency.classpath" />
                                <echo message="Running package tests on ${boxesToTest}"/>
                                <ant antfile="src/dev/ant/vagrant-integration-tests.xml"
                                     target="vagrant-test-all-boxes"/>
                            </target>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <profiles>
        <!-- The following profiles change which boxes are run and whether or
          not this build depends on the rpm artifact. We only depend on the
          rpm artifact if this machine is capable of building it, and we only
          test on the rpm-based distributions if the rpm is available,
          because the tests require it to be. -->
        <profile>
            <!-- Test on all boxes -->
            <id>all</id>
            <properties>
                <debBoxes>${allDebBoxes}</debBoxes>
                <rpmBoxes>${allRpmBoxes}</rpmBoxes>
            </properties>
        </profile>
        <profile>
            <!-- Enable the rpm artifact and rpm-boxes because we're on an
              rpm-based distribution. -->
            <id>rpm</id>
            <activation>
                <file>
                    <exists>${packaging.rpm.rpmbuild}</exists>
                </file>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <artifactId>maven-dependency-plugin</artifactId>
                        <executions>
                            <execution>
                                <id>copy-rpm-to-testroot</id>
                                <phase>pre-integration-test</phase>
                                <goals>
                                    <goal>copy</goal>
                                </goals>
                                <configuration>
                                    <outputDirectory>${project.build.directory}/testroot</outputDirectory>
                                    <artifactItems>
                                        <artifactItem>
                                            <groupId>org.elasticsearch.distribution.rpm</groupId>
                                            <artifactId>elasticsearch</artifactId>
                                            <version>${elasticsearch.version}</version>
                                            <type>rpm</type>
                                        </artifactItem>
                                    </artifactItems>
                                </configuration>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
            <dependencies>
                <dependency>
                    <groupId>org.elasticsearch.distribution</groupId>
                    <artifactId>elasticsearch-rpm</artifactId>
                    <version>${elasticsearch.version}</version>
                    <type>rpm</type>
                </dependency>
            </dependencies>
            <properties>
                <proposedBoxesToTest>${debBoxes}, ${rpmBoxes}</proposedBoxesToTest>
            </properties>
        </profile>
        <profile>
            <!-- Enable the rpm artifact and rpm-boxes because rpmbuild was
              installed via homebrew. -->
            <id>rpm-via-homebrew</id>
            <activation>
                <file>
                    <exists>/usr/local/bin/rpmbuild</exists>
                </file>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <artifactId>maven-dependency-plugin</artifactId>
                        <executions>
                            <execution>
                                <id>copy-rpm-to-testroot</id>
                                <phase>pre-integration-test</phase>
                                <goals>
                                    <goal>copy</goal>
                                </goals>
                                <configuration>
                                    <outputDirectory>${project.build.directory}/testroot</outputDirectory>
                                    <artifactItems>
                                        <artifactItem>
                                            <groupId>org.elasticsearch.distribution.rpm</groupId>
                                            <artifactId>elasticsearch</artifactId>
                                            <version>${elasticsearch.version}</version>
                                            <type>rpm</type>
                                        </artifactItem>
                                    </artifactItems>
                                </configuration>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
            <properties>
                <proposedBoxesToTest>${debBoxes}, ${rpmBoxes}</proposedBoxesToTest>
            </properties>
        </profile>
        <profile>
            <!-- Only set boxesToTest if it hasn't been set on the command
              line. -->
            <id>set-boxes-to-test</id>
            <activation>
                <property>
                    <name>!boxesToTest</name>
                </property>
            </activation>
            <properties>
                <boxesToTest>${proposedBoxesToTest}</boxesToTest>
            </properties>
        </profile>

        <!-- This profile manipulates what is run. -->
        <profile>
            <!-- Smoke tests the VMs but doesn't actually run the bats tests -->
            <id>smoke-vms</id>
            <properties>
                <testCommand>echo skipping tests</testCommand>
            </properties>
        </profile>
    </profiles>
</project>

@ -0,0 +1,74 @@
<?xml version="1.0"?>
<project name="elasticsearch-integration-tests">
    <target name="vagrant-test-all-boxes">
        <foreach list="${boxesToTest}" trim="true" param="box"
                 target="vagrant-test" inheritall="true" inheritrefs="true"/>
    </target>

    <target name="vagrant-test" depends="vagrant-up">
        <trycatch>
            <try>
                <exec executable="vagrant" failonerror="true">
                    <arg value="ssh"/>
                    <arg value="${box}"/>
                    <arg value="--command"/>
                    <arg value="
                        set -o pipefail;
                        cd $TESTROOT;
                        ${testCommand} | sed -ue 's/^/${box}: /'
                    "/>
                </exec>
            </try>
            <finally>
                <exec executable="vagrant" failonerror="true">
                    <arg value="halt"/>
                    <arg value="${box}"/>
                </exec>
            </finally>
        </trycatch>
    </target>

    <target name="vagrant-up">
        <exec executable="vagrant" failonerror="true">
            <arg value="up"/>
            <arg value="${box}"/>
            <!-- It's important that we try to reprovision the box even if it already
              exists. That way updates to the vagrant configuration take effect
              automatically. That isn't to say that the updates will always be
              compatible. It's ok to just destroy the boxes if they get busted. -->
            <arg value="--provision"/>
        </exec>
    </target>

    <target name="check-vagrant-version">
        <check-version executable="vagrant" ok="^1\.[789]\..+$"
                       message="Only known to work with Vagrant 1.7+"/>
    </target>

    <macrodef name="check-version">
        <attribute name="executable" description="The executable to check."/>
        <attribute name="rewrite" default="(?:\S*\s)*(.+)"
                   description="Regex extracting the version from the output of the executable. Defaults to everything after the last space."/>
        <attribute name="ok" description="The regex to check the version against."/>
        <attribute name="message" description="The message to report on failure."/>
        <sequential>
            <exec executable="@{executable}" failonerror="true"
                  outputproperty="versionOutput">
                <arg value="--version" />
            </exec>
            <propertyregex property="version" input="${versionOutput}"
                           regexp="@{rewrite}" select="\1" />
            <echo message="The @{executable} version is ${version}"/>
            <fail message="@{message}">
                <condition>
                    <not>
                        <!-- Very simple version checking.... -->
                        <matches string="${version}" pattern="@{ok}"/>
                    </not>
                </condition>
            </fail>
        </sequential>
    </macrodef>

</project>

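The `check-version` macro is just a regex gate on the `vagrant --version` output; a self-contained Java sketch of the same check (input string illustrative):

```java
import java.util.regex.Pattern;

public class VagrantVersionGate {
    public static void main(String[] args) {
        String versionOutput = "Vagrant 1.7.4";  // what `vagrant --version` prints
        // Same rewrite as the macro's default: keep the text after the last space.
        String version = versionOutput.replaceAll("(?:\\S*\\s)*(.+)", "$1");
        if (!Pattern.matches("^1\\.[789]\\..+$", version)) {
            throw new IllegalStateException("Only known to work with Vagrant 1.7+");
        }
        System.out.println("The vagrant version is " + version);
    }
}
```
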
@ -50,6 +50,7 @@ setup() {
# Install plugins with a tar archive
##################################
@test "[TAR] install shield plugin" {
    skip "awaits public release of shield for 2.0"

    # Install the archive
    install_archive

@ -90,6 +91,7 @@ setup() {
}

@test "[TAR] install shield plugin with a custom path.plugins" {
    skip "awaits public release of shield for 2.0"

    # Install the archive
    install_archive

@ -143,6 +145,7 @@ setup() {
}

@test "[TAR] install shield plugin with a custom CONFIG_DIR" {
    skip "awaits public release of shield for 2.0"

    # Install the archive
    install_archive

@ -199,6 +202,7 @@ setup() {
}

@test "[TAR] install shield plugin with a custom ES_JAVA_OPTS" {
    skip "awaits public release of shield for 2.0"

    # Install the archive
    install_archive

@ -259,6 +263,8 @@ setup() {
}

@test "[TAR] install shield plugin to elasticsearch directory with a space" {
    skip "awaits public release of shield for 2.0"

    export ES_DIR="/tmp/elastic search"

    # Install the archive

@ -307,6 +313,7 @@ setup() {
}

@test "[TAR] install shield plugin from a directory with a space" {
    skip "awaits public release of shield for 2.0"

    export SHIELD_ZIP_WITH_SPACE="/tmp/plugins with space/shield.zip"

@ -63,6 +63,7 @@ install_package() {
# Install plugins with DEB/RPM package
##################################
@test "[PLUGINS] install shield plugin" {
    skip "awaits public release of shield for 2.0"

    # Install the package
    install_package

@ -103,6 +104,7 @@ install_package() {
}

@test "[PLUGINS] install shield plugin with a custom path.plugins" {
    skip "awaits public release of shield for 2.0"

    # Install the package
    install_package

@ -160,6 +162,7 @@ install_package() {
}

@test "[PLUGINS] install shield plugin with a custom CONFIG_DIR" {
    skip "awaits public release of shield for 2.0"

    # Install the package
    install_package

@ -227,6 +230,7 @@ install_package() {
}

@test "[PLUGINS] install shield plugin with a custom ES_JAVA_OPTS" {
    skip "awaits public release of shield for 2.0"

    # Install the package
    install_package

@ -97,7 +97,8 @@ setup() {
    skip_not_sysvinit

    run service elasticsearch status
    [ "$status" -eq 3 ]
    # precise returns 4, trusty 3
    [ "$status" -eq 3 ] || [ "$status" -eq 4 ]
}

# Simulates the behavior of a system restart:

@ -120,4 +121,4 @@ setup() {

    run service elasticsearch stop
    [ "$status" -eq 0 ]
}
}